The EU Artificial Intelligence Act — An Overview

Vittorio Furlan · Published in DigitalForay · 3 min read · Dec 20, 2023

The European Union is attempting to pave the way for a future of responsible AI with its ambitious Artificial Intelligence Act (AIA). Although the act has not yet been finalised, a recent provisional agreement represents a significant milestone in shaping the regulatory landscape for AI in Europe and possibly beyond.

General-Purpose AI Systems need transparency.

One of the AIA’s core elements is the establishment of stringent transparency requirements for general-purpose AI (GPAI) systems, such as large language models (LLMs) capable of generating text, translating languages, writing creative content, and answering questions. These AI systems now require detailed technical documentation, adherence to EU copyright laws, and comprehensive summaries of training content to ensure ethical and legal operation. Moreover, high-impact GPAI models face additional obligations, including rigorous risk assessments, adversarial testing, and reporting on incidents, cybersecurity, and energy efficiency, fostering a transparent and responsible AI ecosystem.
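To make the layering of obligations concrete, here is a minimal sketch of how a GPAI provider might track its transparency artefacts. All names (`GPAIDocumentation`, `missing_obligations`, the individual fields) are illustrative assumptions, not terms from the Act's text:

```python
from dataclasses import dataclass, field

# Hypothetical record of the transparency artefacts a GPAI provider
# would assemble. Field names are illustrative, not from the Act.
@dataclass
class GPAIDocumentation:
    technical_documentation: str        # model capabilities and limits
    copyright_policy: str               # how EU copyright law is respected
    training_data_summary: str          # summary of training content
    high_impact: bool = False           # triggers the extra obligations below
    risk_assessment: str = ""           # high-impact models only
    adversarial_testing_report: str = ""  # high-impact models only
    incident_reports: list = field(default_factory=list)

def missing_obligations(doc: GPAIDocumentation) -> list:
    """Return the names of obligations that are still unmet."""
    required = ["technical_documentation", "copyright_policy",
                "training_data_summary"]
    if doc.high_impact:
        # High-impact GPAI models carry additional duties.
        required += ["risk_assessment", "adversarial_testing_report"]
    return [name for name in required if not getattr(doc, name)]
```

The point of the sketch is the two-tier structure: every GPAI system carries the baseline transparency duties, and the `high_impact` flag switches on the additional ones.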

AI Systems are classified based on Risk.

A core element of the AIA is its risk-based categorisation of AI systems, which groups them into four categories:

  • Unacceptable Risk: This category bans AI systems that pose significant threats to human safety, rights, and freedoms. Examples include systems designed to manipulate human behaviour or exploit vulnerabilities.
  • High-Risk AI: This category covers AI systems integral to critical infrastructure or significantly impacting fundamental rights. Examples include AI in recruitment processes, criminal justice systems, and social welfare programs. These systems are subject to stringent compliance requirements, including independent conformity assessments before deployment.
  • Generative AI: This category encompasses AI systems capable of generating new content, such as text, images, and music. The regulatory specifics for this category are still evolving to accommodate the rapidly changing landscape of generative AI technology.
  • Limited Risk: This category includes AI systems posing minimal risks to users. While subject to fewer regulatory requirements, they must still adhere to user safety and privacy standards.
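The four tiers above can be sketched as a simple classification. The use-case labels below are drawn from the article's own examples; they are illustrative, not an exhaustive legal mapping:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment before deployment"
    GENERATIVE = "evolving transparency rules"
    LIMITED = "basic safety and privacy standards"

# Illustrative mapping of use cases to tiers, based on the article's
# examples; not an exhaustive legal classification.
USE_CASE_TIER = {
    "behavioural manipulation": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "criminal justice": RiskTier.HIGH,
    "text generation": RiskTier.GENERATIVE,
}

def tier_for(use_case: str) -> RiskTier:
    # Unlisted use cases default to the lightest tier in this sketch;
    # a real assessment would require a case-by-case analysis.
    return USE_CASE_TIER.get(use_case, RiskTier.LIMITED)
```

The design point is that the obligation attaches to the *use case*, not the underlying model: the same model can land in different tiers depending on how it is deployed.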

How should businesses prepare for it?

Consider these actions to ready yourself for AIA:

  • Empower Responsible AI Leadership: Establish a dedicated leadership role, such as a Chief AI Ethics Officer, to oversee AI initiatives. This role is vital for integrating policy, technical, and business aspects into a cohesive AI strategy that aligns with organisational values and complies with regulations like the AIA. This leader should collaborate across functions, including legal, HR, IT, and marketing, to implement AI programs across the organisation.
  • Develop an Ethical AI Framework: Create a set of principles and policies that form an ethical AI framework. This framework helps organisations meet new regulatory requirements, like those in the AIA, and adapt to different jurisdictions. It should promote investment in bias mitigation, strong privacy protection, clear documentation, and other measures.
  • Create a Comprehensive AI Risk Management Program: Initiate a comprehensive AI risk management program that includes an inventory of all AI systems, a risk classification system, risk mitigation measures, independent audits, data risk management processes, and an AI governance structure. This foundational step is crucial for organisations to understand and comply with existing laws and forthcoming regulations.
  • Implement Model Management and a Repository: For organisations new to model management, gaining an overview and building a repository of all AI models is essential. Implementing a model management system is a proactive step to prepare for the upcoming regulations.
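The inventory-plus-classification steps above can be sketched as a minimal in-memory model repository. Everything here (`ModelRecord`, `ModelRepository`, the field names) is a hypothetical illustration; a real program would add independent audits, data-risk processes, and governance sign-off on top of a record like this:

```python
from dataclasses import dataclass

# Hypothetical inventory entry for one AI system in the organisation.
@dataclass
class ModelRecord:
    name: str
    owner: str        # accountable team or role
    risk_tier: str    # e.g. "unacceptable", "high", "limited"
    last_audit: str = ""  # empty until an audit has been completed

class ModelRepository:
    """A minimal sketch of an organisation-wide AI model inventory."""

    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord) -> None:
        self._models[record.name] = record

    def needing_conformity_assessment(self) -> list:
        # High-risk systems need an independent assessment before
        # deployment; flag any that have not yet been audited.
        return [m.name for m in self._models.values()
                if m.risk_tier == "high" and not m.last_audit]
```

Even this toy version answers the question regulators will ask first: which systems do you run, who owns them, and which high-risk ones are still unassessed?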

Prepare for Global Impact: The EU AI Act has extraterritorial reach, meaning it applies to any AI system whose output is used within the EU, regardless of the provider’s location. Companies globally should prepare for its implications, considering the hefty fines for non-compliance, which can be up to €30 million or 6% of global revenue.
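Note that the penalty is the *greater* of the two figures, so for large companies the revenue-based cap dominates. A one-line sketch using the amounts cited above:

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Upper bound on an AIA fine under the figures cited in this
    article: the greater of EUR 30 million or 6% of global revenue."""
    return max(30_000_000.0, 0.06 * global_revenue_eur)
```

For example, a company with €1 billion in global revenue faces a cap of €60 million, while for any company below €500 million in revenue the €30 million floor applies.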
