OpenAI’s ChatGPT faces regulatory hurdles in Europe over GDPR violations, giving a glimpse into the future of AI services under global data privacy laws.

Krishna Kumar
3 min read · May 6, 2023

OpenAI’s regulatory troubles are only just beginning: the European Union’s fight with ChatGPT is a glimpse of what’s to come for AI services. The effective ban of ChatGPT in Italy for violating EU data protection rules highlights the need for AI services to comply with the General Data Protection Regulation (GDPR), one of the world’s strongest legal privacy frameworks. Similar investigations have since been launched in Germany, France, Spain, and Canada.

ChatGPT is one of the most popular examples of generative AI, covering tools that produce text, image, video, and audio based on user prompts. Critics have highlighted ChatGPT’s unreliable output, confusing copyright issues, and murky data protection practices. Italy noted four ways OpenAI was breaking GDPR: allowing ChatGPT to provide inaccurate or misleading information, failing to notify users of its data collection practices, failing to meet any of the six possible legal justifications for processing personal data, and failing to adequately prevent children under 13 years old from using the service. It ordered OpenAI to stop using personal information collected from Italian citizens in its training data for ChatGPT.

OpenAI is cagey about what training text it uses but says it draws on “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information.” This potentially poses huge problems under GDPR, which requires companies to obtain explicit consent before collecting personal data, to have a legal justification for collecting it, and to be transparent about how it is used and stored. European regulators argue that the secrecy around OpenAI’s training data makes it impossible to confirm whether the personal information swept into it was originally provided with user consent. OpenAI preemptively updated its privacy policy to facilitate right-to-be-forgotten requests, but there has been debate about whether honoring them is even technically possible, given how difficult it is to isolate specific data once it has been absorbed into these large language models.

OpenAI also gathers information directly from users. It collects standard user data and records interactions with ChatGPT, which can be reviewed by OpenAI’s employees and used to train future versions of its model. Given the intimate questions people ask ChatGPT, such as using the bot as a therapist or a doctor, this means the company is scooping up sensitive data. Although OpenAI’s policy states it “does not knowingly collect personal information from children under the age of 13,” there is no strict age verification gate, so at least some of this data may have been collected from minors.

The concerns can be broadly split into two categories: where ChatGPT’s training data comes from and how OpenAI delivers information to its users. If regulatory agencies demand changes from OpenAI, those changes could affect how the service runs for users worldwide, since the GDPR’s reach is global, covering any service that collects or processes data from EU citizens regardless of the organization’s location. Lawmakers in the bloc are also drafting a law that will address AI specifically, likely ushering in a new era of regulation for systems like ChatGPT.

In conclusion, OpenAI’s regulatory woes in the EU, coupled with the GDPR’s global reach and the new AI regulations being developed in the bloc, are an indication of what’s to come for AI services. Companies will need to adapt to these rules to avoid similar legal trouble, and the AI industry can expect growing scrutiny from regulators.


Krishna Kumar

Hi, I am Krishna from India, a white-hat SEO specialist with more than four years of experience in the industry.