All the experts consulted agree: ChatGPT and OpenAI flagrantly violate the GDPR. Following Italy’s decision, the Spanish Data Protection Agency has officially opened an investigation into the popular artificial intelligence tool.
But Spain is not alone: Europe is also examining how ChatGPT fits into its data protection rules. The European task force was formed just this week, and a lot still has to happen before any action is taken.
At Xataka we have consulted several lawyers specializing in data protection to understand the legal side of this matter. OpenAI is a rocket climbing very high, but if it doesn’t apply the brakes it could end up blowing up.
The AEPD can block ChatGPT. Whether it decides to do so is another matter. The Spanish agency has the legal power to block ChatGPT if it deems it appropriate. Borja Adsuara, professor at the Complutense University of Madrid and an expert in digital law, explains that what the agency would do “is not to prohibit”, but rather to block.
A close parallel is intellectual property and download sites hosted outside Spain. “If there is illegal content, the administrative authorities can request that it be removed or that the site be shut down. If it is outside Spain (as in the case of OpenAI), the telcos can be asked to block access.”
Samuel Parra, director of the Data Protection area at ÉGIDA, believes it is not necessary to go that far: “Spain can prohibit OpenAI from collecting data from Spaniards”, but in many cases the decision of the affected company is enough. Faced with the prospect of further sanctions and measures, OpenAI simply stops offering the service.
Italy has moved ahead of everyone. The first country to flag the problem with ChatGPT was Italy. Jorge García Herrero, a Data Protection Officer, explains that “the Italian authority asked OpenAI for measures on the processing of Italians’ data, and the company’s response was to suspend the service.”
For ChatGPT to comply with Italian rules, the regulator has set out a series of requirements, in line with the General Data Protection Regulation (GDPR). If OpenAI implements them, ChatGPT will be allowed to return.
Enrique Puertas, professor of Artificial Intelligence and Big Data at the European University, explains that “Italy has not banned ChatGPT. What they want is for the company to clarify what type of data is being collected. They want clarification.” Something very similar to what Spain is seeking, Puertas notes.
Nine measures by April 30. OpenAI has less than a month to apply a series of changes if it wants to keep operating in Italy. Within this short window we can expect the company behind ChatGPT to make a decision. Nine changes are requested; the last two have a longer deadline:
1. A privacy policy describing what data is used.
2. A tool for users to request the deletion of their personal data.
3. A tool to correct inaccurate personal data.
4. A link to this user information at the time of registration.
5. A change to the legal basis for training the algorithm on data collected without consent.
6. A tool to exercise the right to have personal data removed from the algorithm’s training.
7. An age gate at registration.
8. Age controls for children under 13 and for minors under 18.
9. A media information campaign explaining how personal data is used in the tool.
OpenAI has gone from academic startup to multinational too quickly. “The only legal basis they have is legitimate interest,” explains Herrero, recalling OpenAI’s academic origins. “Their AI has been trained on many datasets from different sources, almost all collected for research purposes. The law leaves a lot of room for scientific research, but they have moved very quickly on to something else. You have to assess what your processing is going to be, and that is something you should have done beforehand. They will be working on it now. They have to offer, as Google does, the possibility for people to correct or delete that data. And it remains to be seen whether they are able to implement that part.”
“This is not optional, it is the law. If you are processing personal data, you should consider your data protection obligations from the start, taking a data protection by design and by default approach,” sums up Stephen Almond of the UK’s Information Commissioner’s Office (ICO) in a statement.
Spain will wait for the rest. Judging by its past decisions, all the experts agree that the AEPD will wait for a European decision. “The most proactive are the Germans. Personally, I don’t recall the Spanish agency being one of the most innovative,” explains Herrero. “In Spain they lean more toward socializing the problem.”
“The statement was very brief, but I have the feeling that Spain is not going to stand out,” says Parra. “I doubt there will be an attempt to block its use; more likely they will demand certain guarantees.”
“They will not say it publicly, but the AEPD considers that Italy has acted inconsistently, almost unfairly. If each country wages its own war, what is a European regulation for?” says Adsuara.
OpenAI has a representative, but it is not yet established in Europe. “OpenAI cannot be fined because it is not in Europe,” explains Adsuara. “Let them set up here, like Microsoft, Meta or Google, and then we’ll talk. They have moved very fast. This is all growing pains.”
Much of what is happening around ChatGPT stems from the fact that OpenAI has not opened a subsidiary in Europe. Instead of fining the company for failing to comply with the GDPR, regulators have to specifically ask it to comply with a series of measures or face the consequences.
That said, OpenAI’s privacy policy does name VeraSafe in Ireland as its data protection representative for the European Union and the United Kingdom. “If OpenAI has already designated a representative in Europe and its privacy policy acknowledges the GDPR, that tells us it at least intends to comply,” says Parra.
There will be an important announcement soon. “I would be concerned if the advance of this technology were put on hold over compliance with the Regulation. Most likely OpenAI will make a commitment within a specific time frame: a commitment that the data will not be used for purposes other than those stated, nor processed unfairly,” says Parra.
It is not known who will move first: whether OpenAI will open a branch in Europe and announce major privacy improvements, or whether Europe will publish a statement with its own set of measures.
“This is a pincer movement,” says Adsuara. “On one side a case is opened, and on the other the lobbies step in to make sure everyone ends up getting along. Soon OpenAI will announce that it is opening an establishment in Europe and that it wants to comply.”
“They are going to make a cosmetic move, and then we will see whether Europe wants to play hardball or not. My intuition is that there will be a consensus. Europe will offer a longer deadline than Italy did. I doubt the individual countries will go their own way, because that would be a serious mess,” says Herrero. “These agencies have no choice but to warn that OpenAI is in breach of the Regulation. As authorities, they don’t have many other options,” he concludes.
Parra thinks so too. “Europe will allow wider margins, but with more forceful measures. Perhaps imposing some interim measure with a more immediate deadline.”
OpenAI does not have European servers. If transatlantic data transfers can already land Facebook or Google in trouble, with ChatGPT the case is even clearer. The company has no European servers, which means that every time we use ChatGPT we are sending data to the United States. The two powers reached a new ‘Privacy Shield’ agreement in October 2022, but the EU has yet to approve it.
Microsoft and every company working with ChatGPT are in the spotlight. “You can’t touch OpenAI yet, but what about Microsoft?” reflects Adsuara. GPT-4 is integrated into Microsoft Bing, and the same requirements apply to it, with one difference: as a company established in Europe, Microsoft must comply with the GDPR or expose itself to fines.
In this case the experts are more cautious, trusting that Microsoft’s lawyers will have implemented the privacy policy properly.
A more problematic case is the one Herrero points out: “OpenAI has released a general-purpose tool, and much smaller companies offer it through specific apps. Any claims over ChatGPT’s errors will end up landing on those startups.”
And in the background, the European AI regulation. The arrival of ChatGPT is a challenge for European rules: not only the General Data Protection Regulation, but also the regulation on artificial intelligence, which was already at an advanced stage and has seen ChatGPT upend its assumptions.
“It has brought it to a standstill. A few months ago there was a round table on how LLMs should fit in, and many drafts have become outdated. The risk categories and the different responsible parties do not map well onto the new paradigm,” explains Herrero. “We’re going to see significant delays here.”
OpenAI has moved very fast, but the law is there to be complied with. How ChatGPT fits into Europe will depend on how far each party involved is willing to bend. Everyone seems to agree on two things: the enormous potential of AI, and its myriad risks if it is not applied correctly.
In Xataka | Europe embraces Artificial Intelligence: this is the pioneering agreement that will exploit (and limit) its possibilities