The EU’s New AI Regulation Framework

Elena Ponte
Zumo Labs
May 11, 2021

At the end of last month, the European Commission put forth a proposal setting out the first-ever legal framework for AI regulation. Specifically, it is a framework for secure, trustworthy, and ethical AI: it sets out conformity assessments that AI systems categorized as “high risk” must pass before they can be offered on the market or put into service.

First, a bit of history. This Proposal has been in the works for a while. Very aptly named the “Proposal for a Regulation laying down harmonized rules on artificial intelligence,” it builds on the White Paper on AI that the Commission published back in February 2020 [1]. The White Paper set out policy options for achieving two objectives: (1) “promoting the uptake of AI”; and (2) addressing the risks associated with certain uses of such technology. The Proposal itself responds to requests from the European Parliament (which adopted a swath of AI-related resolutions in October 2020 [2–4]) and the European Council (which called for solutions in 2017 [5] and put forth “Conclusions on the Coordinated Plan on the development and use of AI Made in Europe” back in 2019 [6]) for legislative action to ensure a well-functioning internal market for AI systems, one where both the benefits and the risks of AI are adequately addressed at the Union level.

So let’s get back to the actual Proposal. This post will answer four questions:

  1. What does the Proposal aim to do?
  2. How does the Proposal work?
  3. Who is affected?
  4. What’s next?

1. What does the Proposal aim to do?

The Proposal sets out a regulatory structure that bans some uses of AI, heavily regulates high risk uses, and lightly regulates less risky AI systems. The Proposal has four stated specific objectives:

  • Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
  • Ensure legal certainty to facilitate investment and innovation in AI;
  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
  • Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

The big guns at the EU also offer their two cents on this. Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way… our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.” [7]

The commissioner for the Internal Market, Thierry Breton, said that the “proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”. [7]

On the other side of the pond, Jake Sullivan, White House National Security Adviser, tweeted, “The United States welcomes the EU’s new initiatives on artificial intelligence. We will work with our friends and allies to foster trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens.” [8]

2. How does the Proposal work?

The Proposal categorizes AI systems according to the risk of potential harm to individuals: (A) unacceptable risk; (B) high risk; (C) limited risk; and (D) minimal risk. Depending on the category it is sorted into, an AI system is subject to certain obligations.
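Purely as an illustration (nothing in this structure is prescribed by the Proposal itself), here is a minimal Python sketch of how the four tiers might be modeled, each paired with a one-line summary of its headline obligation:

```python
from enum import Enum

class RiskTier(Enum):
    """The Proposal's four risk categories, from most to least regulated."""
    UNACCEPTABLE = "A"  # banned outright
    HIGH = "B"          # strict obligations before market entry
    LIMITED = "C"       # transparency obligations
    MINIMAL = "D"       # essentially unregulated

# Hypothetical one-line summaries; the real obligations live in the
# Proposal's articles and annexes, not in any tidy lookup table.
HEADLINE_OBLIGATION = {
    RiskTier.UNACCEPTABLE: "prohibited from the Union market",
    RiskTier.HIGH: "ex-ante conformity assessment, documentation, registration",
    RiskTier.LIMITED: "disclose to users that they are interacting with an AI",
    RiskTier.MINIMAL: "no new obligations",
}

print(HEADLINE_OBLIGATION[RiskTier.HIGH])
```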

A. Unacceptable Risk.

AI systems that are “considered to be a clear threat to the safety, livelihoods and rights of people” present an unacceptable risk and are banned outright. Examples include AI systems or applications that allow “social scoring” by governments or that “manipulate human behavior to circumvent users’ free will.”

B. High Risk.

AI systems that pose “significant risks” to the health and safety or fundamental rights of individuals fall into the high risk category. All remote biometric identification systems are high risk, as are AI systems that determine access to education and the professional course of someone’s life (think scoring of exams).

Before they can be put on the market, high risk AI systems will be subject to strict obligations. Providers of high risk AI systems must establish “appropriate data governance and management practices.” High risk AI systems must:

  • use datasets that are “relevant, free of errors and complete”;
  • come with detailed documentation that provides the information necessary on the system and its purpose, including information about operation and metrics of accuracy, so that authorities can assess its compliance;
  • be “sufficiently transparent to enable users to understand and control how the high risk AI system produces its output”;
  • allow users to “oversee” the system in order to prevent or minimize “potential risks” (i.e., a human should be able to use a stop button);
  • meet high levels of accuracy, robustness, and security, and disclose the documentation showing compliance with these standards.

Certain high risk AI systems will need to be registered in a newly created public EU database.
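As a gesture at what checking a dataset against the “relevant, free of errors and complete” requirement might look like in practice, here is a hypothetical Python sketch. The Proposal prescribes no particular tooling or metrics; the pandas dependency and every check below are our own assumptions:

```python
import pandas as pd  # assumed dependency; any dataframe library would do

def dataset_quality_report(df: pd.DataFrame) -> dict:
    """Crude proxy checks for 'free of errors and complete'.

    Illustrative heuristics only: the Proposal does not define which
    checks a provider must run, and these four certainly aren't enough.
    """
    return {
        "rows": len(df),
        "missing_cells": int(df.isna().sum().sum()),   # completeness
        "duplicate_rows": int(df.duplicated().sum()),  # possible errors
        "constant_columns": [c for c in df.columns if df[c].nunique() <= 1],
    }

# Toy example: one missing value and one duplicated record.
toy = pd.DataFrame({"age": [34, None, 34], "label": [1, 0, 1]})
print(dataset_quality_report(toy))
```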

C. Limited and Minimal Risk

Certain AI systems that pose lower risk, especially those where there is a clear risk of consumer manipulation (think chatbots or deep fakes), will also be required to meet new obligations. Providers and users of such AI systems will need to meet transparency standards that aim to ensure a human using the AI is aware they are in fact interacting with an AI.

3. Who is affected?

Short answer: Europeans. Nuanced answer: the Proposal is intended to have extraterritorial scope. Specifically, the Proposal says it will apply to “providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union.” So there is an intended extraterritorial reach (like, for example, we see in the GDPR).

More generally, the framework sets out regulations for “providers” of AI systems, meaning those that develop an AI system, or have an AI system developed and placed on the market. However, if any user, distributor, or other third party modifies the intended purpose of a high risk AI system, then they may be considered a “provider” and so be subject to the obligations imposed on providers.

4. What’s next?

The rules in the Proposal will be enforced through a governance system at the Member State level. The first step will be the establishment of a European Artificial Intelligence Board. The European Parliament and the Member States will then need to adopt the Proposal through the ordinary legislative procedure; once adopted, the Regulation will be directly applicable across the EU.

Finally, some additional thoughts (on data!)…

The Proposal highlights the importance of training data in producing ethical AI systems. Nothing new to us, but it is refreshing to see regulators starting to pick up on this issue. As above, one of the requirements for high risk AI systems is that they must use datasets that are “relevant, free of errors and complete.”

However, the implementation of these data ideals still has some ways to go. To this point, the recitals accompanying the regulations are worth a read. For starters, they reference concerns about the risks of algorithmic bias. But it is what is lacking that is striking: despite a lot of talk about ensuring “bias monitoring, detection, and correction” in AI systems that process data concerning sensitive attributes such as race, gender, and ethnicity, to guard against “possibly biased outputs,” there is no specifically required impact assessment. Further, the documentation that arguably must contain a bias assessment does not need to be provided to users, the public, or those potentially affected by discriminatory algorithms; it is available only to regulators upon request. That’s a general issue with the Proposal: the regulation is very light on the information that must be disclosed to people affected by AI systems.
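To make “bias monitoring” slightly less abstract, here is a minimal, hypothetical sketch of one possible monitoring step: computing a model’s accuracy per sensitive group and measuring the gap. The Proposal mandates no specific metric, threshold, or grouping; every choice below is our own:

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken out by a sensitive attribute.

    Purely illustrative: the Proposal calls for 'bias monitoring,
    detection, and correction' but prescribes no metric or threshold.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

scores = per_group_accuracy(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
gap = max(scores.values()) - min(scores.values())
print(scores, "accuracy gap:", round(gap, 2))  # flag if the gap is too large
```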

The next few months will be interesting to follow as more experts chime in on AI regulation and the adoption process gets underway. Synthetic data will be an easy way to achieve compliance on a short timeline, and we expect it’s a solution many teams will pursue. If we can be a resource on that front, even just in conversation, please reach out.

REFERENCES:

[1] European Commission, White Paper on Artificial Intelligence — A European approach to excellence and trust, COM(2020) 65 final, 2020.

[2] European Parliament resolution of 20 October 2020 on a framework of ethical aspects of artificial intelligence, robotics and related technologies.

[3] European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence.

[4] European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies.

[5] European Council, European Council meeting (19 October 2017) — Conclusion EUCO 14/17, 2017, p. 8.

[6] Council of the European Union, Conclusions on the Coordinated Plan on Artificial Intelligence (Adoption), 6177/19, 2019.

[7] European Commission, Press Release: “Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence,” 21 April 2021.

[8] Jake Sullivan, White House National Security Adviser, Twitter post, April 2021.

Originally published at https://www.zumolabs.ai on May 11, 2021.
