The EU AI Act

A Real-Time Experiment to Regulate Generative AI

Samson Esayas
Berkman Klein Center Collection

--

Flags of the member states of the European Union in front of the European Commission building in Brussels. Photo by Christian Lue on Unsplash.

The public release of ChatGPT in November 2022 marked a significant breakthrough in generative AI: systems that craft synthetic content based on patterns learned from extensive datasets. This development has heightened concerns about AI’s impact on individuals and society at large. In the brief period since this breakthrough, there has been a surge in lawsuits over copyright and privacy violations, as well as defamation. One lawyer learned a hard lesson about the dangers of AI “hallucination” after citing seemingly genuine but bogus judicial precedents generated by ChatGPT in a legal brief submitted to court. There are even reports that such systems have been implicated in an individual’s decision to commit suicide.

Given these concerns, there is a growing demand for regulatory action. OpenAI’s CEO, Sam Altman, addressed the US Congress in May 2023 and called upon legislators to act.

The EU has taken the lead in legislative endeavors. In April 2021, the European Commission proposed a Regulation on AI (AI Act), marking the first step toward a comprehensive global legal framework on AI. This landmark legislation aims to foster human-centric AI, directing its development in a way that respects human dignity, safeguards fundamental rights, and guarantees the security and trustworthiness of AI systems.

The proposed AI Act adopts a risk-based approach, categorizing AI systems into three main risk levels: unacceptable risk, high risk, and limited risk. This classification depends on the potential risk posed to health, safety, and fundamental rights. Certain AI systems, such as those that generate “trustworthiness” scores akin to the Chinese Social Credit System, are deemed to present unacceptable risks and are prohibited outright. AI systems used in hiring processes and welfare benefit decisions fall into the high-risk category and are subject to stringent obligations, including conducting a conformity assessment and adhering to certain data quality and transparency requirements. Meanwhile, chatbots and deepfakes are considered limited risk and are subject to relatively minimal transparency requirements.

After the release of ChatGPT, it became clear that the Commission’s 2021 draft contained a significant gap: it did not address general-purpose AI or “foundational models” like OpenAI’s GPT-n series, which underpins ChatGPT. Fortunately, owing to the EU’s multistage legislative process, the release of ChatGPT occurred while the European Parliament was still deliberating on the AI Act. This provided a timely opportunity to include new provisions specifically targeting foundational models and generative AI.

Under an amendment adopted by the European Parliament in June 2023, providers of foundational models would be required to identify and reduce risks to health, safety, and fundamental rights through proper design and testing before placing their models on the market. They must also implement measures to ensure appropriate levels of performance and adopt strategies to minimize energy and resource usage. Moreover, these AI systems must be registered in an EU database, with details on their capabilities, foreseeable risks, and the measures taken to mitigate those risks, including an account of risks that remain unaddressed. The amendment would impose additional obligations on foundational models employed in generative AI, including transparency requirements ensuring users are aware that content is machine-generated and adequate safeguards against the generation of unlawful content. Providers must also publish a detailed summary of copyrighted content used to train their systems.

While the final version of the AI Act will be determined by the trilogue among the European Commission, the European Parliament, and the Council of the European Union, its current form already marks an ambitious, real-time attempt to regulate generative AI, highlighting the challenges of regulating a rapidly evolving target.

On this occasion, the EU’s legislative process kept pace with the latest advancements before the law was set in stone. But this raises a question: how often can we count on such fortunate timing, and what proactive measures should be taken?

We must embed flexibility into such laws. Indeed, the EU has taken some steps in this direction, granting the Commission the authority to adapt the law by adding new use cases to the risk categories. Yet, considering previous experiences with the Commission’s implementation of delegated acts, it’s debatable whether such mechanisms alone can keep up with the rapid pace of AI development.

The agile innovation process that permeates the software world, where distributed technologies are frequently released in early stages and iteratively refined based on usage data, necessitates a regulatory system that is designed to learn and adapt.

It is important to embrace a variety of techniques for adaptive regulation, such as regulatory experimentation through pilot projects and the embedding of systematic, periodic review and revision mechanisms into legislation. Adaptive regulation also requires openness to a diversity of approaches across jurisdictions. It encourages learning from one another, which implies that the EU should resist its inclination to single-handedly dictate global standards for AI regulation and instead regard its efforts as contributions to a collective pool of learning resources.

While adaptive regulation comes with its own costs, clinging to static regulation designed for a hardware world of fully formed products manufactured in centralized facilities could prove even more costly in the face of rapidly advancing technology.

At the same time, the amendment has significantly broadened the Act’s scope. While the Commission’s draft focused on mitigating harms to health, safety, and fundamental rights, the European Parliament’s version extends these concerns to democracy, the rule of law, and environmental protection. Consequently, providers of high-risk AI systems and foundational models are required to manage risks in all these areas. This raises concerns that the Act might become a catch-all regulation with diluted impact, while placing a considerable burden on providers to translate these broad goals into concrete guardrails.

This amendment has exacerbated existing concerns that broad requirements and their accompanying compliance costs might stifle innovation. In an open letter to EU authorities, over 150 executives from companies including Siemens, Airbus, Deutsche Telekom, and Renault criticized the AI Act for its potential to “undermine Europe’s competitiveness and technological autonomy.” One of their chief concerns relates to the legislation’s strict requirements for generative AI systems and foundational models. The letter likens the significance of generative AI to that of the invention of the internet, given its potential to shape not only the economy but also culture and politics. The signatories caution that the compliance costs and risks embedded in the AI Act could “result in highly innovative companies relocating their operations overseas, investors retracting their capital from the development of European foundational models, and European AI in general.”

OpenAI has already warned that it could exit the EU if the conditions of the AI Act prove too restrictive. There are also indications that even major players are cautious when rolling out their latest services: the launch of Google Bard in the EU was delayed by two months over compliance concerns with the General Data Protection Regulation. It was ultimately introduced with improved privacy safeguards, highlighting the EU’s role in shaping the global data policies of such organizations.

For its part, the EU contends that the AI Act is designed to stimulate AI innovation and points to key enabling measures included in the Act: regulatory sandboxes, which serve as test beds for AI experimentation and development; an industry-led process for defining standards that assist with compliance; and safe harbors for AI research.

Of course, industry’s concerns about the AI Act’s impact on innovation, and the EU’s responses to them, are an essential part of balancing the trade-offs inherent in regulating any emerging technology, and time will tell which direction the pendulum swings. During the trilogue negotiations, the Council is likely to push back on some of the Parliament’s amendments. Indeed, there is merit in carefully weighing the benefits of introducing broad objectives such as democracy and the rule of law without concrete measures in place to support them. One might argue that efforts are better spent strengthening the safeguards for fundamental rights, which is crucial for protecting both democracy and the rule of law. Numerous civil society organizations have already emphasized the need to incorporate fundamental rights impact assessments and to empower individuals and public interest organizations to file complaints and seek redress for harms inflicted by AI.

Moreover, it would be beneficial to concentrate on tangible guardrails, such as facilitating researchers’ access to foundational models, data, and parameters. This approach is likely to be more effective in promoting accountability, democracy, and the rule of law than a general requirement to conduct risk assessments based on such broad concepts.

Regardless of the final form of the text, the AI Act is poised to significantly shape AI development and the regulatory landscape in the EU and beyond. Therefore, the AI community must prepare for its impact.

This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.

--

Samson Esayas
Berkman Klein Center Collection

Dr. Esayas is an associate professor at BI Norwegian Business School and researches the interplay between law, technology, and markets as regulatory instruments.