Regulating Intelligence: An Overview of the EU AI Act

Nemo Leerink
Sogeti Data | Netherlands
12 min read · Mar 13, 2024
Image: Sang Hyun Cho, CC0, via Wikimedia Commons

The European Union (EU) has recently reached a significant milestone in AI governance by introducing the EU AI Act, which aims to establish a global standard for AI regulation. Although the Act has not yet been fully published, the leak of its near-final version has provided comprehensive insight into its contents and potential impact. On March 13th, the Act was approved by the European Parliament. This development has dispelled much of the uncertainty surrounding the Act, offering businesses a clearer blueprint for aligning their operations with its compliance requirements. The initiative is a major step in AI regulation, especially given the lack of extensive regulatory frameworks in this area until now. Consequently, businesses within the EU, or those serving EU customers, are now assessing the Act’s impact on their operations: what changes it enforces and what steps they need to take to comply. In this article, we provide an overview of the EU AI Act, focusing on its key elements relevant to businesses. We explore how the Act categorizes AI systems based on their risk levels, the various roles stakeholders may assume under the Act, and how to identify your own role. We also discuss the timeline for implementing these changes.

EU AI Act: An Overview

The Act defines AI systems as “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” [1]. This wide-ranging definition ensures the Act covers a broad spectrum of systems, securing its relevance and long-term applicability. The primary aim of the AI Act is to foster the development of trustworthy AI, achieved by establishing specific requirements for AI systems. For consumers, the Act promises assurance that AI systems adhere to European laws and values, minimizing the risk of safety violations or breaches of fundamental rights.

To support these goals without hindering progress and innovation, the Act exempts AI systems intended for research and development (R&D) and military use from its obligations. This approach strikes a balance between encouraging technological progress and ensuring necessary regulatory oversight and consumer protection. Additionally, open-source AI systems are exempt unless they fall into the unacceptable- or high-risk categories. This exemption is particularly beneficial for businesses, as it supports the continuation of open-source innovation — a key driver for business growth and technological progress. However, businesses should proceed with caution, maintaining responsibility for assessing the risk level of open-source models they incorporate to ensure compliance and mitigate potential liabilities.

Risk Levels

The AI Act defines four levels of risk, each carrying its own set of obligations. It is critical that businesses invest in understanding these risk levels, since the Act obliges them to categorize their AI systems accordingly.

1. Unacceptable risk

2. High risk

3. Limited risk

4. Minimal risk

Unacceptable risk

The unacceptable-risk (or prohibited) AI systems are limited to a small number of use cases. They include systems that pose an unacceptable threat to the health, safety, or fundamental rights of individuals, as well as systems whose risks can negatively impact the environment. These are the prohibited AI systems:

1. Manipulative techniques: systems that use subliminal techniques to distort human behaviour in a way that may cause physical or psychological harm, by impairing a person’s ability to make an informed decision. An example is AI-driven targeted advertising that influences users’ opinions or behaviour without them being fully aware of it.

2. Exploitation of vulnerabilities: systems that target vulnerable groups, such as children or people with disabilities, with the objective of materially distorting the behaviour of those persons.

3. Social scoring: systems that evaluate or classify persons based on their social behaviour or personality characteristics, where the scoring leads to detrimental or unfavourable treatment under certain conditions. Think of China’s Social Credit System, which can penalize people with a low score with travel bans and even throttled internet speeds [2].

4. Categorization based on sensitive characteristics: systems that categorize natural persons based on their biometric data (e.g., facial pictures or DNA) to deduce or infer their race, political opinions, religious beliefs, etc. This does not apply to (i) purely supportive features intrinsically linked to another commercial service (e.g., a virtual fitting room in an online clothing store, where users upload a photo of themselves to virtually try on different clothing items and the system uses biometric data such as body shape and size to recommend clothing sizes or styles that best fit the user’s body type) or (ii) labelling, filtering, and categorizing of lawfully acquired biometric data (e.g., images), specifically and exclusively in a law enforcement context [3].

5. Real-time biometric identification: real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement, except when used for activities related to specific crimes, locating missing victims, or the prevention of terrorist attacks, AND when safeguards apply (judicial authorization, except in cases of justified urgency). Non-real-time identification is not prohibited, but its use for law enforcement requires prior judicial approval. The prohibition does not apply to AI systems used to support the human assessment of a person’s involvement in criminal activity.

6. Facial recognition databases: systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or camera footage, such as the database allegedly created by Clearview AI, which led to a noteworthy lawsuit [4].

7. Emotion recognition: systems that identify or infer the emotions or intentions of natural persons based on biometric data in the workplace or in education institutions. This does not apply if the intended use is medical or safety related (e.g., checking whether a pilot is still awake).

It is reasonable to conclude that banning these systems benefits all EU citizens, because such systems pose a significant risk to their rights and to the core values of the EU. The prohibition of many of these use cases is straightforward, as they are either already illegal under existing laws or strongly discouraged by current social norms.

High risk

High-risk AI systems are defined as AI systems that pose a significant risk of harm to the health, safety, or fundamental rights of individuals. Within the high-risk category, there are two subcategories [5]. The first subcategory covers AI systems intended to be used as a safety component of a product, or that are themselves products, covered by certain EU legislation (e.g., aviation, toys, transport, medical devices) and currently subject to third-party conformity testing related to health and safety risks. If your existing systems are not already undergoing third-party conformity assessments, your AI systems likely do not belong to this subcategory. The second subcategory involves AI systems identified on a pre-defined list. The list of high-risk AI systems currently included in the Act is not exhaustive and can be expanded based on emerging uses and applications of AI [6] [7].

High-risk AI systems according to the EU AI Act proposal include:

1. Biometric and biometrics-based systems: Biometric identification of natural persons, including emotion recognition systems (excluding prohibited AI systems based on article 5). E.g., a secure facility uses a biometric access control system that scans fingerprints of individuals attempting to enter. This system verifies the scanned fingerprints against a pre-authorized list to determine if the individual has clearance to access the facility.

2. Critical infrastructure or emergency services: Systems that function as safety components in the operation of (i) road, rail, and air traffic (unless already regulated) and (ii) the supply of water, gas, heating, electricity, and critical digital infrastructure. Examples include object detection models used by autonomous trains or an AI system that balances the load on the electricity grid.

3. Education and vocational training: Systems that determine access to education, assessing level of education, monitoring, and detecting prohibited behaviour.

4. Employment, workers management and access to self-employment: Recruitment and selection tools, systems that make or facilitate promotion or termination decisions, evaluate performance, or allocate tasks according to individual behaviour and personality.

5. Essential private and public services: Systems related to evaluation of eligibility for public assistance benefits and services (housing, electricity etc.), credit scoring (exception for AI systems used to detect financial fraud), health and life insurance eligibility, evaluation, and classification of emergency calls.

6. Law enforcement: Systems supporting law enforcement authorities: polygraphs, reliability of evidence, profiling, crime analytics.

7. Migration, asylum, and border control management: Systems supporting these authorities: polygraphs, (security, health) risk assessment, verification of documents and evidence, detecting and monitoring natural persons and trend prediction.

8. Administration of justice and democratic processes: Systems related to researching and interpreting facts and the law, influencing the outcome of elections or voting behaviour, and systems used by large social media platforms (see the Digital Services Act) in their recommender systems.

Exceptions are made to this second subcategory of high-risk systems in the following cases [8]:

1. Systems that make improvements over previously completed human activities.

2. Preparatory steps (e.g., data cleaning) in a risk-assessment process.

3. Detecting patterns without replacing human assessments.

4. Performing narrow procedural tasks.

The high-risk systems category presents the greatest challenge for companies because it is complex, applies to many existing AI systems, and comes with a significantly longer list of obligations compared to those in the limited- or minimal-risk categories.

Limited risk

Limited risk refers to AI systems that do not pose major risks. These systems have a minimal harmful impact and do not require the same level of regulation as high-risk AI systems. The category covers all AI systems with which a user interacts and that do not qualify as high risk or unacceptable risk. Examples are chatbots and systems generating deepfakes.

Minimal risk

All remaining AI systems are classified as minimal risk. These systems carry no requirements under the Act and include systems such as spam filters, inventory management systems, and AI in video games.
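To make the categorization exercise more tangible, the sketch below shows one way a compliance team could record a first-pass triage of its AI inventory against the four risk levels. This is a minimal, hypothetical illustration: the questionnaire fields, the system name, and the triage order are assumptions for demonstration purposes and are no substitute for a proper legal assessment.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemProfile:
    """Hypothetical intake questionnaire for an internal AI inventory."""
    name: str
    uses_prohibited_practice: bool          # e.g., social scoring, untargeted face scraping
    on_high_risk_list: bool                 # matches a listed high-risk use case
    safety_component_with_conformity: bool  # safety component already subject to conformity testing
    interacts_with_users: bool              # e.g., chatbots, deepfake generators


def triage(profile: AISystemProfile) -> RiskLevel:
    """Rough first-pass triage, checked in the order the categories are described above."""
    if profile.uses_prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if profile.on_high_risk_list or profile.safety_component_with_conformity:
        return RiskLevel.HIGH
    if profile.interacts_with_users:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL


# Example: a customer-service chatbot that does not appear on the high-risk list.
chatbot = AISystemProfile(
    name="support-chatbot",
    uses_prohibited_practice=False,
    on_high_risk_list=False,
    safety_component_with_conformity=False,
    interacts_with_users=True,
)
print(triage(chatbot).value)  # -> "limited"
```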

General-Purpose AI systems

Foundation models, also known as General-Purpose AI (GPAI) models, are large-scale AI models developed by organizations like OpenAI and Meta to perform a wide range of tasks. Under the EU AI Act, foundation models are divided into two categories, standard and systemic risk, each with specific responsibilities to ensure transparency and compliance. Standard models are required to describe their training data, confirming compliance with EU copyright laws for ethical use and transparency, and to make technical documentation available. Systemic-risk models are subject to more stringent conditions, including standardized model evaluations of systemic risks, adversarial testing, and efficiency assessments, as well as an obligation to report incidents. This approach seeks to balance innovation and accountability, requiring foundational technologies, like those by OpenAI, to transparently document and share the copyrighted content used in training, thereby protecting intellectual property and building trust in AI applications. Foundation model providers bear the primary responsibility for ensuring regulatory compliance. Nevertheless, businesses employing these models must exercise caution in how they apply them: where issues arise from the way a business utilizes the model, accountability lies with the business itself [9].

Different Roles and Their Obligations

The Act defines distinct roles that assign responsibilities and obligations to the stakeholders involved in the development, distribution, and use of AI systems.

1. Provider: Any legal entity (person, company, organisation) that develops an AI system and places it on the market under its own name or trademark, either as a paid service or for free. Responsibilities include conducting a risk assessment to determine the system’s risk level, quality control by implementing a quality management system, transparency by providing clear documentation, and continuous monitoring to ensure the system remains compliant [10].

2. Deployer: The legal entity that uses an AI system, excluding users who engage with the system in a purely personal or non-professional setting. Responsibilities include using AI systems as intended by the provider, providing feedback to the provider in case issues arise, informing users that they are interacting with an AI system, and complying with local regulations [11].

3. Importer: A legal entity based in the EU that places an AI system on the market or puts it into service, where the system bears the brand or trademark of a non-EU entity.

4. Distributor: Any legal entity, other than the provider or importer, that makes AI systems available on the EU market without altering their properties.

5. Third-Party Supplier: All relevant third parties involved in the development, sale, and supply of software tools, components, pre-trained models, or data used in AI systems.

Importantly, any of these parties becomes the provider if it places a high-risk AI system on the market under its own name or makes substantial modifications to such a system.

Keeping a close eye on your role is crucial, given the specific responsibilities each role carries. For example, imagine a tech company that develops an innovative AI tool for financial forecasting. If this tool, initially branded under the non-EU parent company, is brought to the EU market by an EU-based subsidiary, the subsidiary steps into the shoes of an “importer,” navigating the nuances of introducing a foreign AI product to Europe. It gets more complicated within larger organizations, where different arms can wear different hats depending on the task at hand. Imagine the R&D team creating an AI system for internal use, say, to streamline manufacturing. While this AI system isn’t used in production and is still in the R&D phase, it is exempt from any restrictions. As soon as the team deploys the system internally, it fills the role of “deployer.” Meanwhile, another department might take this in-house tool, brand it, and sell it, morphing into a “provider.”

Consider a European distributor bringing in a high-risk AI system from inside the EU without altering it. They’re a straightforward “distributor,” right? But if they decide to tailor this system for a niche market, suddenly they’re not just distributing; they’re providing, complete with all the responsibilities that come with high-risk AI modifications. And don’t forget the third-party suppliers. They might start off in a support role, supplying components of AI systems. Yet, if they significantly tweak a system’s functionality or risk profile, they could find themselves stepping into a provider’s shoes, especially if those tweaks introduce new risks or capabilities. These scenarios show just how dynamic and interconnected these roles can be, and they are a reminder for organizations to stay alert and well-informed, ensuring they know their role and meet its obligations. The sketch below illustrates this role-shifting logic.
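The following is a minimal, hypothetical sketch of how an organization might flag when its declared role effectively flips to “provider.” The role names mirror those above; the function and its parameters are illustrative assumptions, not terminology from the Act.

```python
from enum import Enum


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    THIRD_PARTY_SUPPLIER = "third-party supplier"


def effective_role(declared_role: Role,
                   rebrands_high_risk_system: bool,
                   substantially_modifies_high_risk_system: bool) -> Role:
    """A party that places a high-risk system on the market under its own
    name, or substantially modifies one, takes on the provider's obligations."""
    if rebrands_high_risk_system or substantially_modifies_high_risk_system:
        return Role.PROVIDER
    return declared_role


# A distributor that tailors a high-risk system for a niche market
# effectively becomes a provider.
print(effective_role(Role.DISTRIBUTOR,
                     rebrands_high_risk_system=False,
                     substantially_modifies_high_risk_system=True).value)  # -> "provider"
```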

Enforcement and Adoption Timeline

Following its political agreement and the European Parliament’s approval, the EU AI Act now awaits formal endorsement by the Council. The Act is expected to be formally published in May and will enter into force 20 days after its publication in the Official Journal of the EU. A grace period of two years will be granted for compliance with most of its provisions. However, the prohibitions will become enforceable within just six months, while rules regarding General-Purpose AI (GPAI) models will take effect after 12 months, or 24 months for models that are already available on the market. The obligations for the first subcategory of high-risk systems will take effect after 36 months. Enforcement will be the responsibility of national market surveillance bodies alongside the newly formed European AI Office, which will oversee coordination and standard setting at the EU level. Penalties for non-compliance will vary, with fines of up to 35 million euros or 7% of global annual revenue (whichever is higher) for the most serious violations. Smaller enterprises can expect more proportionate fines [12][13].
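As a rough, hypothetical illustration of how the upper bound on fines scales with company size (assuming the higher of the two amounts applies, as it does for larger companies), the turnover figure below is made up:

```python
def max_fine_most_serious(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)


# Illustrative only: a company with EUR 2 billion in global annual turnover.
print(f"EUR {max_fine_most_serious(2_000_000_000):,.0f}")  # -> EUR 140,000,000
```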


Conclusion

In this article, we’ve provided an overview of the EU AI Act, focusing on the categorization of AI systems by risk level, the different roles stakeholders may assume, and the implementation timeline. As the AI Act moves towards its formal roll-out, anticipation builds while we await the final, concrete requirements. Once the Act enters into force, a grace period will commence, offering businesses essential time to realign their practices with the forthcoming regulations. During this transition, maintaining engagement with AI technologies and a thorough understanding of the risk levels of the AI systems in use will be crucial for businesses aiming to maintain a competitive edge. The AI Act represents Europe’s leadership in the responsible development and deployment of AI systems on an international scale, reinforcing a commitment to fostering an ecosystem where innovation and consumer protection go hand in hand. This approach not only sets a global benchmark for AI governance but also emphasizes the EU’s dedication to safeguarding the digital rights and safety of its citizens.

[1] https://commission.europa.eu/system/files/2024-01/EN%20Artificial%20Intelligence%20in%20the%20European%20Commission.PDF

[2] Explained: China Social Credit System, Punishments, Rewards (businessinsider.com)

[3] https://www.stibbe.com/publications-and-insights/the-eu-artificial-intelligence-act-our-16-key-takeaways

[4] Clearview AI agrees to restrict use of face database | US news | The Guardian

[5] https://cset.georgetown.edu/article/the-eu-ai-act-a-primer

[6] EU AI Act proposal: 5.2.3. HIGH-RISK AI SYSTEMS (TITLE III)

[7] EU AI Act proposal annex: ANNEX III HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6(2)

[8] https://www.whitecase.com/insight-alert/pre-final-text-eus-ai-act-leaked-online

[9] EU AI Act proposal: Amendment 99, Amendment 101

[10] https://sparkle.consulting/how-euaiact-will-impact-your-business/

[11] https://sparkle.consulting/how-euaiact-will-impact-your-business/

[12] https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai

[13] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
