The new EU AI Act: Contents, Technologies, Penalties, and What it Means for Companies

Updated April 13, 2024: EU AI Act passed by the EU Parliament.

Maximilian Vogel
Feb 23, 2024


The EU Parliament has just passed the new EU AI Act. I took a look at the act that will regulate the entire field of machine learning in the EU in the future:

  • Which companies and players will be affected?
  • Which AI technologies will be regulated and how?
  • How will generative AI be dealt with?
  • What does this mean for the European AI scene? Is this the kick-off for a flourishing AI ecosystem in Europe or is the EU killing the new technology softly?

What is it all about?

The EU AI Act, discussed in recent weeks in various preliminary drafts, some of which were leaked via LinkedIn with gigantic revision tables, is now law:

  • What does the Act regulate?
  • Which companies are affected?
  • Which technologies are covered?
  • How bad is it? Is the EU once again regulating a digital industry to death?

Content and scope

The Act is the key set of rules on AI, even if other rules such as the GDPR, Digital Markets Act, Digital Services Act, Data Governance Act, and Data Act also apply or will apply to AI applications. The Act is a document of more than one hundred pages regulating AI in great detail, and it is intended to be a central pillar of the framework with which the EU, through its newly created institutions, regulates everything digital.

EU digital regulatory framework: AI Act, Data Act, Data Governance Act, EHDS, Digital Markets Act, Digital Services Act.


April 21, 2021: First draft of the act by the EU Commission

December 6, 2022: General approach of the Act is determined by the EU Council (representatives of the states)

December 9, 2023: Council and Parliament agree on the text proposal

January 22, 2024: The almost final text with the draft changes from all sides was leaked

February 2, 2024: France and Germany, the only two EU countries with significant AI startups, give up most of their reservations; a few questions remain open

March 2024: Final act proposal

April 13, 2024: Adoption in the EU Parliament

April / May 2024: Publication of the law in the Official Journal of the EU

2024-2025: The EU will set up an authority system (AI Office, AI Board) that will try to enforce and further shape the rules

Q4 2024: The regulations on prohibited AI applications come into force

Q2 2025: The regulations on generative AI come into force

Q2 2026: The regulations on high-risk systems come into force

(all dates in the future are estimated)

To which technologies does the Act apply?

The scope of application was clearly defined in Annex I of the original proposal: among others, deep, supervised, and unsupervised learning are listed. This means that practically all generative AI would fall under the definition of the Act (e.g., ChatGPT, other LLMs, image generation models such as Stable Diffusion or DALL-E, etc.). In the latest version, this was replaced by the broader formulation, “AI is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate output.” Taken literally, this could apply to any sufficiently complex software system; in practice, however, it should still refer to AI as it is generally understood.

To which areas of application does the Act apply?

  • For AI placed on the market in the EU
  • For AI used by EU citizens
  • For AI results (whatever that may be) that are disseminated in the EU
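The three scope conditions above amount to a simple disjunction: the Act applies if any one of them holds. A minimal sketch, with illustrative predicate names that are my own shorthand, not terms defined in the Act:

```python
# Hypothetical sketch of the Act's territorial-scope test as a decision rule.
# The three parameters mirror the three bullets above; their names are
# illustrative, not legal terminology.

def ai_act_applies(placed_on_eu_market: bool,
                   used_by_eu_persons: bool,
                   output_disseminated_in_eu: bool) -> bool:
    """The Act applies if any one of the three scope conditions holds."""
    return placed_on_eu_market or used_by_eu_persons or output_disseminated_in_eu

# A model marketed only outside the EU, but whose results are disseminated
# in the EU, would still fall under the Act:
print(ai_act_applies(False, False, True))  # True
```

Note that the conditions are independent: a provider cannot escape the Act's scope by moving the model itself outside the EU if its output still reaches EU users.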

Out of scope of the Act

  • Not for pure research and development projects
  • Not for the publication of open-source models or software, only for their use (i.e., “placing on the market”); a few provisions, however, do cover open source as well.
  • Not for the military and all areas of application in the field of national security
  • There are exceptions for AI for law enforcement.

Why do we need it?

We don’t.
AI regulation is important, and many governments are working on it: to determine who is responsible for a result of AI, who is liable, who is the owner, and so on. Unfortunately, this is not part of the AI Act. Apart from the EU leaders, nobody has a real need for this EU AI Act. The leaders simply wanted to be the first to make their mark with AI regulation and have put themselves under pressure. This has also led to the document being hastily drafted in parts, lacking practical relevance, and often not being state-of-the-art. What the EU cites as the motivation for the Act is a hodgepodge of different, certainly well-meant objectives.

Is this the death knell for the European AI industry?

Many practitioners and some politicians (including French President Macron) have criticized the AI Act for slowing down innovation in Europe and massively harming European AI founders.

We can decide to regulate much faster and more strongly than our major competitors. But we will regulate things that we will no longer produce or invent. That’s not a good idea.
Emmanuel Macron

Macron is right; the Act is an additional, heavy layer of bureaucracy and restrictions for some applications, which makes it more difficult for companies, and especially startups, to work in the field of generative AI. In my opinion, however, the main reason why no ecosystem of AI players comparable to that in the USA is developing in Europe is the lack of access to efficient capital markets for start-ups and technology companies. With or without the AI Act, Mistral and Aleph Alpha will probably continue to exist, but despite great ideas and products, European startups will find it very difficult to compete with extremely well-funded American companies in the long term, except, perhaps, in certain niches. In this, they share the fate of almost the entire European digital industry.

Is this a helpful framework for the industry?

Margrethe Vestager, the “Executive Vice President of the European Commission for A Europe Fit for the Digital Age” (wow — I want a job title like this as well), says that the Act does not hinder innovation but promotes it and creates clear guidelines.
Ahh, no. It does not do that in its current form. It does not create a framework, because it is more a collection of different, often overlapping regulations than a structured view of the field. The developers and users of AI do not benefit; there is hardly a single paragraph in the Act that makes it easier for companies to develop, test, use, and roll out AI than before. However, there are (see below) several areas of application in which the situation will at least not deteriorate compared to the status quo.

Open Source

Open source runs as a theme through the entire document: for the producers of open-source models and software, there are significant exceptions to the relatively comprehensive documentation, reporting, and verification obligations in many areas.

Generative AI / Foundation Models

Generative AI, which also appeared on the EU’s radar after the ChatGPT release, was only integrated in the last revisions of the draft in December. Generative AI is referred to as “general-purpose AI” in the EU. This primarily regulates the providers of the models, e.g., companies such as OpenAI, Meta, Google, or, in some cases, scientific institutions, but not the application developers.

Risk-based approach

There are very few provisions in the Act that apply to all AI systems and models. Almost all provisions, regulations, and, in some cases, penalties are related to the risk classes of AI, so it is extremely important to understand them.

Risk categories in the EU AI Act: Prohibited, high risk, systemic risk, limited risk, minimal risk

Based on the risk assessment, what does a company need to do when developing AI solutions?

There are many regulations here, most of which are listed under high-risk applications:

EU AI Act: What regulations exist for which risk categories?
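The tiered structure can be sketched as a simple lookup table. The tier names follow the risk categories listed above; the obligation strings are my own simplified paraphrases for illustration, not the Act's legal text:

```python
# Illustrative mapping from the Act's risk tiers to the kind of obligations
# attached to them. The obligation lists are simplified paraphrases, not
# the Act's full requirements.
OBLIGATIONS = {
    "prohibited":    ["may not be placed on the market at all"],
    "high_risk":     ["risk management system", "technical documentation",
                      "human oversight", "conformity assessment",
                      "registration"],
    "systemic_risk": ["model evaluation", "incident reporting",
                      "cybersecurity measures"],
    "limited_risk":  ["transparency: users must be told they are "
                      "interacting with AI"],
    "minimal_risk":  [],  # essentially unregulated
}

def obligations_for(tier: str) -> list[str]:
    """Look up the (illustrative) obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for("minimal_risk"))  # []
```

The point of the structure is visible in the last line: the bulk of the compliance burden sits in the high-risk tier, while minimal-risk systems carry essentially no obligations.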

Promoting Innovation

The EU has also defined so-called “measures to support innovation” in the law: firstly, sandboxes are to be set up, in which companies can experiment with AI and real users under the guidance of the authorities, but without being allowed to market the AI. The basic idea of the sandbox is very good; it is the implementation that will count.

Secondly, start-ups will not be exempt from the jumble of rules, but national governments will be obliged to communicate the AI Act to smaller companies in a comprehensible form (how to do so is detailed over several pages). This is not a promotion of innovation but an admission of failure to draft the text in an understandable and structured way.

Overall, of course, this is disappointingly little in the way of innovation promotion. It would have been exciting not only to create bureaucracy but also to reduce it, for example by exempting AI developers from certain product liability, data protection, or copyright rules that slow down the development of applications and models.

AI Authorities

The EU is setting up an extensive structure of authorities to enforce compliance with the regulations. There will be:

  • a new EU AI Office to monitor the rules and sanction infringements
  • an AI Board: political representatives of the member states who communicate with the Office, the Commission, and the experts
  • various expert panels (advising the authorities on AI)
  • stakeholder forums for those affected by AI — that’s another interesting idea — who is not affected here?
  • and one or more competent national AI authorities per member state


Penalties

The penalties for violations of the AI Act are up to 7% of a company’s annual turnover. Oops, quite hefty for the fact that the AI guys are not running nuclear plants or selling firearms, and in the typical infringement nothing bad actually happens; merely a documentation obligation is breached. There are also categories of infringements for which the penalties are only 1% or 3% of annual turnover.
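The percentage caps turn into a quick back-of-the-envelope calculation. The 7%, 3%, and 1% rates come from the text above; the category labels are my own shorthand, and the Act also pairs these percentages with fixed euro amounts (whichever is higher), which this sketch omits for simplicity:

```python
# Back-of-the-envelope penalty ceilings from the percentage caps mentioned
# above: 7% for the most serious violations, 3% and 1% for lighter categories.
# Category labels are illustrative shorthand; the Act's alternative fixed
# euro amounts are deliberately ignored here.
PENALTY_RATES = {
    "most_serious_violation": 0.07,
    "other_violation":        0.03,
    "incorrect_information":  0.01,
}

def max_penalty(annual_turnover_eur: float, category: str) -> float:
    """Upper bound of the fine as a share of worldwide annual turnover."""
    return annual_turnover_eur * PENALTY_RATES[category]

# A company with EUR 500M turnover risks up to EUR 35M in the worst category:
print(max_penalty(500_000_000, "most_serious_violation"))  # 35000000.0
```

Even the 1% tier is material: for the same EUR 500M company, a documentation slip could in theory cost up to EUR 5M.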

The best thing about the AI Act

The best thing is what is not explicitly stated in the AI Act: a very large part of AI is not regulated at all, or only very weakly (a duty to provide information), because it is not considered critical, and member states cannot (so easily) regulate or restrict it on their own.


Overall, the act is unfortunately only mediocre. From the perspective of someone who works with AI, it would get a school grade of C: improvement needed, but not a complete fail. In the EU’s defense, it is perhaps still a little early for sensible, effective, and also pragmatic legislation on the subject.

Positive for all those who work with AI and develop solutions:

  1. The risk-based approach in the Act is smart, as is the exemption of non-risky AI applications from regulations.
  2. The prohibited areas of application have been chosen correctly. The only question here is why they appear in an AI Act: social scoring, biometrics-based total surveillance, and manipulation software should surely be just as prohibited if I manage to implement them in a classic (rule-based) system.
  3. Sandboxes for experimentation with a little less bureaucracy — this is another clever approach that can be applied to many other technology-related regulatory fields.
  4. The weakening of documentation and many other requirements for open-source software is groundbreaking.
  5. The mitigation of requirements for start-ups and SMEs — even if this is hardly relevant in real life because it is not very extensive — at least the direction is right here.


Negative:

  1. It is unclear what the EU wants to achieve with this: extremely heterogeneous objectives are listed, from health to democracy, human-centric AI, and the exchange of goods to the prevention of over-regulation. This act certainly cannot achieve these ambitious goals. Even a good act would probably struggle to do so.
  2. It regulates hypothetical risks: There are few examples where people have been seriously harmed or even killed by AI, compared to daily life risks such as road traffic (tens of thousands of deaths and millions of injuries in EU road traffic per year), alcohol, cigarettes, other drugs, hospital germs, or unhealthy food. Almost the entire EU AI Act addresses hypothetical risks that are elaborately regulated, but it is unclear whether and how they will manifest themselves in reality.
  3. Lack of focus: A large number of heterogeneous individual points have been included in the Act instead of deciding what is important or not important; as a result, the Act is already over 100 pages of vague legalese without the annexes. This makes it impractical for the day-to-day work of companies, who should be the addressees. The consequence is foreseeable: Either you deal with the Act intensively and with a lot of resources (large companies), or you ignore it for the time being (start-ups), or you keep your hands off the topic of AI for the time being (probably large parts of the SME sector). The nice gesture in the law of obliging national governments to communicate the AI Act well cannot hide the failure of the legislators at this point.
  4. Paper instead of problem-solving: when it comes to mitigating risks, too much reliance is placed on comprehensive documentation, specification, notification, certification, and registration processes, plus reporting obligations. It is all very time-consuming paperwork, and it is unclear how any of it makes a system safer. If I transferred the EU’s approach to road traffic regulations, I would be allowed to drive through a residential neighborhood at 3 mph or 250 mph, but only if I documented the reasons why the speed was appropriate in each case. Oh yes — and if I informed the authorities and residents beforehand. Impractical on the one hand, but also quite risky on the other.

Unfortunately, the EU has missed the opportunity to give the European AI industry a significant boost through simple, facilitating, innovation-promoting regulation. Those responsible in the EU never tire of emphasizing that they want to promote high technology in Europe, but in this case, unfortunately, I have to say, “This act speaks louder than words.”

And now?

Everybody working with AI in the EU must prepare for the law now. For most applications, the regulations are not very onerous, or there are none. So, dear fellow European AI aficionados, take a look at the Act, but please get on with development and don’t get lost in red tape and “we can’t” or “we are not allowed”!

However, for AI in EU risk areas such as vehicles, the accompanying bureaucracy will be a time-waster and a brake on development, coming on top of the already not very lean existing regulation.



Maximilian Vogel

Machine learning and generative AI aficionado and speaker. Co-founder of BIG PICTURE.