The Pyramid of AI Risks

The Regulatory Technologist
5 min read · Aug 28, 2023

What is in this blog?: The European Union has formally defined a pyramid of risks associated with AI. We delve into what those are.

Why might I be interested?: You have probably considered how AI can be leveraged in your life or business to increase productivity. This will help guide the question that should go hand in hand with it: what risks are associated?

We’re back on the topic of AI today. With so much of our daily news and chatter going to what is possible from a productivity perspective, it is easy to overlook the regulation looming in the background.

The European Union has a significant piece of legislation in progress around AI, the Artificial Intelligence Act. The use of AI in the EU will be directly impacted and regulated by the principles this rule lands on.

The Pyramid

Perhaps most interestingly, the EU has identified a tiered “Pyramid of Risks” in AI. This is information that those operating Risk and Compliance frameworks should have at their fingertips, and at the top of their agendas with Technology and enterprise-wide governance partners.

Four categories have been highlighted, but a vital phrase in this document (which is not yet finalized in law) is as follows:

The use of AI, with its specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour), can adversely affect a number of fundamental rights and users’ safety. To address those concerns, the draft AI act follows a risk-based approach whereby legal intervention is tailored to concrete level of risk

Risk-based approach. How will this impact entities’ risk frameworks? Might they take the same approach that Meta is taking? Nick Clegg boasts that over 1,000 employees at Meta are working on the EU’s Digital Services Act, which has overlap with the Artificial Intelligence Act.

Category 1 — Unacceptable Risk AI Systems

The proposed rule (Article 5) explicitly bans harmful AI practices, those considered to be a clear threat to safety, livelihoods and rights. Examples:

  • AI systems that deploy harmful manipulative ‘subliminal techniques’
  • AI systems that exploit specific vulnerable groups (physical or mental disability)
  • AI systems used by public authorities, or on their behalf, for social scoring purposes
  • ‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases

The last bullet includes Facial Recognition Technologies (FRTs). The draft rule takes a more relaxed approach where this technology is leveraged to search for victims of crime or missing children, and for terror prevention.

Category 2 — High Risk AI Systems

In general, the proposed regulation outlines specific rules for AI systems considered to create a high risk to health and safety or fundamental rights.

“High-risk” AI systems will be permitted on the European market subject to compliance with certain mandatory requirements (detailed below).

Interestingly, the classification of an AI system as “high-risk” is based on the intended purpose of the AI system, in line with existing product safety legislation.

Therefore, assessment of risk for an AI system will consider:

  • The function performed by the AI system
  • The specific purpose / use case for which that system is used; AND
  • Current safety legislation for existing products meeting that purpose

The draft text further distinguishes between two categories of high-risk AI systems:

  1. Systems used as a safety component of a product or falling under EU health and safety harmonization legislation (e.g. toys, aviation, cars, medical devices, lifts).
  2. Systems deployed in eight specific areas identified by the European Commission (a toy sketch of this classification logic follows the list):
    o Biometric identification and categorization of natural persons;
    o Management and operation of critical infrastructure;
    o Education and vocational training;
    o Employment, worker management and access to self-employment;
    o Access to and enjoyment of essential private services and public services and benefits;
    o Law enforcement;
    o Migration, asylum and border control management;
    o Administration of justice and democratic processes.
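
To make that classification logic concrete, below is a minimal sketch of how a compliance team might triage a system against the tiers described in this post. Everything in it (the function name, the area keys, the boolean flags) is an illustrative assumption of mine, not wording from the draft regulation.

```python
# Toy triage sketch for the draft AI Act's risk tiers (not legal advice).
# All identifiers are illustrative assumptions, not terms from the regulation.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted subject to mandatory requirements"
    LIMITED = "transparency obligations only"
    MINIMAL = "no additional obligations"

# The eight areas listed above, paraphrased as keys.
HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def triage(intended_purpose_area: str,
           is_banned_practice: bool,
           is_safety_component: bool,
           interacts_with_humans: bool) -> RiskTier:
    """Classify by intended purpose, mirroring the draft's order of tests."""
    if is_banned_practice:                      # Article 5 practices
        return RiskTier.UNACCEPTABLE
    if is_safety_component or intended_purpose_area in HIGH_RISK_AREAS:
        return RiskTier.HIGH                    # either high-risk route
    if interacts_with_humans:
        return RiskTier.LIMITED                 # transparency duties
    return RiskTier.MINIMAL
```

For example, triage("law_enforcement", False, False, True) would land in the HIGH tier via the eight-area list.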

Why this list of services? These are areas where the EU feels risks have already materialized or are likely to materialize in the near future. More food for thought for operators / innovators in these spaces.

What Are the High-Risk System Requirements?

  • Registration on an EU-wide database owned by the EU Commission
  • Conformity with existing product safety legislation, where an AI system falls under such legislation
  • Performance of a self-assessment, where an AI system does not fall under existing product safety legislation
  • Minimum requirements on data governance, documentation / record keeping, transparency, oversight, testing and security
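
As a thought experiment, those minimum requirements could be tracked as a simple checklist object inside a firm’s governance tooling. Below is a minimal sketch; the field names paraphrase the bullets above and are my own, not official terms from the draft.

```python
# Hypothetical checklist for the draft's minimum requirements on high-risk
# AI systems. Field names paraphrase this post's bullets; none are official.
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    registered_in_eu_database: bool = False   # EU-wide Commission database
    product_safety_conformity: bool = False   # where existing legislation applies
    self_assessment_done: bool = False        # where no such legislation applies
    data_governance: bool = False
    documentation_and_records: bool = False
    transparency: bool = False
    oversight: bool = False
    testing_and_security: bool = False

    def gaps(self) -> list[str]:
        """Return the requirements that are still outstanding."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]
```

A call like HighRiskChecklist(registered_in_eu_database=True).gaps() would list every requirement still outstanding.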

Category 3 — Limited Risk AI Systems

Finally, the Limited Risk AI Systems category is focused on transparency. Its obligations apply to AI systems that:

  • Interact with humans
  • Are used to detect emotions or determine association with categories
  • Generate or manipulate content (e.g. ‘deep fakes’)

The draft rule indicates that when persons interact with an AI system, or their emotions or characteristics are recognized through automated means, this must be made transparent to the end user.

Further, if an AI system is used to generate or manipulate image, audio or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content was generated through automated means. The goal is to allow the user to make an informed choice on the subject.
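
If that obligation lands as drafted, one lightweight way a product team might satisfy it is to attach a disclosure wherever generated media is served. A minimal sketch, assuming a hypothetical content type and label text of my own:

```python
# Hypothetical disclosure wrapper sketching the transparency obligation above.
# The type, the function and the label text are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class MediaItem:
    url: str
    machine_generated: bool

def serve_with_disclosure(item: MediaItem) -> dict:
    """Attach a user-facing disclosure to generated or manipulated content."""
    payload = {"url": item.url}
    if item.machine_generated:
        # Surfaced so the end user can make an informed choice.
        payload["disclosure"] = "This content was generated through automated means."
    return payload
```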

We have just started to see the tip of the iceberg here. One could pick any subsection of each of these categories and apply it across many industries:

How far do the Facial Recognition Technology exemptions to the unacceptable-risk ban go, and how will they be monitored?

What support and guidance will be available for self-assessments, and how will the legislation ensure an assessment stays relevant on an ongoing basis?

How will companies be tested on transparency? How will this intersect with third-party use of products?

Lots to consider.

Thanks for reading! Get in touch if you’d like to hear more on these topics, discuss questions 1:1, or chat over a coffee.

regtechnologist@gmail.com


The Regulatory Technologist

Financial Regulation Expert. 12+ years transforming Tier 1 banks. Comments on Financial Regulation trends / developments.