The EU AI Act — A Complete Guide to Laws and Regulations on AI

Francesco Casalegno
6 min read · Jul 14, 2024


Photo by Christian Lue on Unsplash

The EU AI Act — Who and What is Concerned?

From voice assistants to augmented medical diagnostics, AI systems have seen a massive increase in adoption in our daily lives. But with this rise in AI applications, the European Union (EU) has also raised concerns about the potential risks that AI systems may pose.

In May 2024, the EU finally adopted the AI Act to ensure

a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation

👉 But who is actually impacted by the EU AI Act, and what does it imply?

EU vs. Extra-EU

The EU AI Act applies to all providers of AI systems that are based in the EU or that offer their systems in the EU, regardless of their location. So even a company based e.g. in the USA must comply with the AI Act if it serves the EU market.

In this sense, the EU AI Act is a crucial element to consider for any AI developer or business. As the “Brussels Effect” suggests, extra-EU countries are likely to adopt similar regulations in the near future, as happened with the General Data Protection Regulation (GDPR).

Developers vs. Users vs. End Users

The EU AI Act distinguishes between 3 different categories of actors:

  • Providers / Developers create AI systems and place them on the market.
  • Users / Deployers deploy AI systems in a professional capacity.
  • End Users are the people affected by an AI system through the services and products they use.

The EU AI Act imposes strict obligations on Providers, less strict obligations on Users, and no obligations on End Users.

Restrictions and Obligations

We have now clarified who is concerned by the EU AI Act.

👉 But what exactly are the restrictions that the EU AI Act imposes?

The EU AI Act categorizes AI systems into 4 different levels of risk, and applies specific regulations to each level. Moreover, General Purpose AI (GPAI) systems are considered in a separate, dedicated category.

AI Risk Categories

The EU AI Act defines 4 categories of risk, and corresponding restrictions. (Image by Author)

The EU AI Act defines 4 levels of risk for AI systems; a minimal code sketch of this tiered structure follows the list below.

  1. Unacceptable Risk. These systems are prohibited, as they are considered to pose a critical threat.
  2. High Risk. These systems must pass an evaluation both before market placement and throughout their life cycle. Providers must establish and obtain approval for risk management systems, data governance, technical documentation, and quality management systems. Deployers must ensure human oversight and compliance.
  3. Limited Risk. These systems must only ensure transparency, i.e. Developers and Deployers must clearly inform End Users that they are interacting with an AI system.
  4. Minimal Risk. These systems are not subject to restrictions.
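
To make this tiering concrete, here is a minimal Python sketch of the four risk levels and their obligations. The enum and the one-line obligation summaries are illustrative simplifications, not a faithful encoding of the Act's legal text.

    from enum import Enum

    class RiskLevel(Enum):
        """The four risk tiers of the EU AI Act (heavily simplified)."""
        UNACCEPTABLE = "prohibited: may not be placed on the EU market"
        HIGH = "conformity assessment, risk management, human oversight"
        LIMITED = "transparency: inform End Users they face an AI system"
        MINIMAL = "no specific obligations"

    def obligations(level: RiskLevel) -> str:
        # Return the simplified obligation summary for a given tier.
        return level.value

    print(obligations(RiskLevel.HIGH))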

Let us now see which AI applications fall into each of these risk categories.

Unacceptable Risk

Unacceptable Risk AI systems. (Image by Author)

The EU AI Act classifies as Unacceptable Risk those AI systems that pose a critical threat to human rights and safety.

  • Systems using subliminal or manipulative techniques to influence behavior and affect decision-making.
  • Systems exploiting human vulnerabilities (age, disability, …) to influence behavior.
  • Systems using real-time remote biometric identification for law enforcement in public spaces.
  • Systems inferring sensitive information (ethnicity, sexual orientation, …) about individuals.
  • Systems performing social scoring (e.g. a Social Credit System), evaluating individuals’ behavior and personality to determine access to services.
  • Systems predicting the probability of committing a criminal offense based solely on personality traits.
  • Systems inferring emotions in the workplace or in educational institutions.

However, exceptions to this list of Unacceptable Risk AI systems are made for critical situations, such as searching for kidnapped persons, and for military applications.

High Risk

High Risk AI systems. (Image by Author)

The EU AI Act classifies as High Risk those AI systems that may pose threats to health and safety if not properly managed.

  • Systems regulating access to critical infrastructure—water, gas, …
  • Systems regulating access to education and evaluation of students.
  • Systems used in job recruitment and selection as well as evaluation of employees based on personality and behavior.
  • Systems used in justice processes and law enforcement to determine evidence reliability, risk of recidivism, …
  • Systems used for border and immigration control to determine visa and asylum eligibility.

Limited Risk

Limited Risk AI systems. (Image by Author)

The EU AI Act classifies as Limited Risk those AI systems that may have an impact if End Users are unaware that they are interacting with an AI system; a minimal sketch of the required disclosure pattern follows the list.

  • Chatbots and voice assistants.
  • Deepfakes and other AI-generated images, audio, and video.
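
For Limited Risk systems like chatbots, the obligation essentially boils down to disclosure. The snippet below is a minimal sketch of what that might look like in practice; the AI_DISCLOSURE wording and the generate_reply stub are hypothetical, not prescribed by the Act.

    AI_DISCLOSURE = "Note: you are chatting with an AI system, not a human."

    def generate_reply(user_message: str) -> str:
        # Stub standing in for a real chatbot backend.
        return f"You asked: {user_message}"

    def answer(user_message: str) -> str:
        # Prepend the disclosure so the End User is clearly informed
        # that they are interacting with an AI system.
        return f"{AI_DISCLOSURE}\n\n{generate_reply(user_message)}"

    print(answer("What is the EU AI Act?"))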

Minimal Risk

Minimal Risk AI systems. (Image by Author)

The EU AI Act classifies as Minimal Risk all AI systems that do not fall into any of the previous categories. Since this is a catch-all category, the Act gives only a few examples.

  • Systems used in video games.
  • Systems used in spam filters.

General Purpose AI

ChatGPT is an example of GPAI. (Screenshot of ChatGPT by Author)

Finally, the EU AI Act includes a separate category for General Purpose AI (GPAI) systems. This category was introduced to address concerns raised by Generative AI—in particular Large Language Models (LLMs) such as ChatGPT—and covers models characterized by the following elements.

  • Capability to perform a variety of tasks — e.g. ChatGPT can be used for conversation, image generation, code generation and execution, …
  • Training on large datasets — e.g. GPT-3 was trained on ~500 B words.
  • Training with high computational resources — e.g. GPT-4 was trained using ~21 billion petaFLOPs (see the conversion below).
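
To put that compute figure in perspective: 21 billion petaFLOPs is roughly 2.1 × 10²⁵ floating-point operations, which exceeds the 10²⁵ FLOPs threshold above which the Act presumes a GPAI model to pose “systemic risk” (Article 51). The snippet below is just this unit conversion and threshold check.

    # Unit conversion: "21 billion petaFLOPs" expressed in raw FLOPs.
    PETA = 1e15
    gpt4_training_flops = 21e9 * PETA  # ~2.1e25 FLOPs

    # Threshold above which the AI Act presumes "systemic risk" for GPAI.
    SYSTEMIC_RISK_THRESHOLD = 1e25

    print(f"{gpt4_training_flops:.2e} FLOPs")             # 2.10e+25 FLOPs
    print(gpt4_training_flops > SYSTEMIC_RISK_THRESHOLD)  # True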

GPAI Providers are subject to specific obligations, including technical documentation on training and testing data and processes, as well as compliance with the EU Copyright Directive.

The Timeline of the EU AI Act

👉 How did the EU come to adopt the AI Act, and when does it start to apply?

Timeline of AI Act adoption and implementation. (Image by Author)

In response to concerns about potential threats coming from AI systems, the European Commission drafted a proposal to regulate AI in April 2021.

After years of discussions — in particular due to the emergence of unprecedented capabilities demonstrated by novel models like ChatGPT — the European Council formally approved the AI Act in May 2024.

The AI Act officially enters into force in August 2024, 20 days after its publication in the Official Journal. But the restrictions and obligations on AI systems only start to apply after transition periods of 6 to 24 months, depending on the risk level.

  • February 2025: prohibitions on Unacceptable Risk AI systems start to apply (6 months after entry into force)
  • August 2025: obligations on General Purpose AI start to apply (12 months)
  • August 2026: most remaining obligations, including those on High-Risk and Limited-Risk systems, start to apply (24 months)
