The EU is so close to regulating AI — but what does an EU AI Act actually mean?

Maham Saleem
5 min read · Dec 3, 2023


The EU AI Act is one of the first landmark pieces of legislation whose sole purpose is to regulate AI systems. Because the region is such a major market, EU legislation, particularly in the tech sector, shapes regulation and corporate practice across the rest of the world. You don’t have to look much further than the GDPR, a quintessential example of the ‘Brussels effect’: the stringency and comprehensiveness of the legislation did not, as some had threatened, cause Big Tech to flee (excluding nearly 450mn of the world’s richest consumers from your services is costly). Instead, companies adapted their practices to comply with the new rules. The same doesn’t hold for the UK, whose market footprint is small enough that messenger apps like WhatsApp and Signal threatened to pull their services if new online safety legislation sanctioned unreasonable infringements of people’s privacy by public authorities.

The regulation follows a tiered, risk-based approach. The European Commission (the institution where EU policy is first initiated, or ‘commissioned’) considered and discarded three alternatives: a sectoral approach (which the UK will likely adopt in the coming years), a voluntary labelling scheme, and blanket requirements for all AI systems irrespective of the risk they pose. None of these frameworks, the Commission concluded, would meet its stated objectives of safety, legal certainty, innovation, effective governance and the prevention of market fragmentation.

The risk-based approach categorises AI systems into four tiers: unacceptable-risk, high-risk, limited-risk and minimal-risk. Practices that fall under ‘unacceptable risk’ include those with the potential to subliminally manipulate people or exploit vulnerable groups, and are effectively banned. Bans in the text include emotion recognition systems in the workplace and educational institutions, the ominous-sounding AI-based social scoring by public authorities, predictive policing targeted at individuals and, controversially, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement (except when in the public interest, such as searching for a missing child or preventing an imminent terrorist threat).

The Act’s text was agreed by the European Parliament and is currently in its final stage, the ‘trilogues’: a three-way negotiation between the Commission, the Parliament (elected MEPs) and the Council (ministerial representatives of each member state), which has until Wednesday, 6th December to reach a deal. Many of the prohibitions in the text agreed by Parliament have been watered down or removed in the Council’s mandate (all 681 pages of angry edits can be read here); the Council has been particularly heavy-handed in carving out exceptions to the rules for law enforcement, and for member states on national security grounds. If the parties fail to reach an agreement, the legislation will likely be delayed until after the European Parliament elections next summer, but a final deal in some form is expected by the 6th.

The Spanish presidency (the presidency of the Council rotates between member states every six months) has been desperate to get a deal done on the text before handing the reins over to Belgium. One consequence of that haste is that most prohibitions will likely remain in the text, but with room for negotiation in the new year, and the AI uses that don’t remain will be moved to the ‘high-risk’ category. The high-risk category already covers the use of AI systems in critical infrastructure, education, employment and the management of workers (think CV-sorting or Amazon’s style of worker management in its warehouses), essential services like credit-scoring for loans, migration and border control management, justice, and democratic processes. Any use of AI systems that falls under ‘high-risk’ faces stringent risk-assessment, transparency and activity-logging requirements, some of which will likely leave the company or public authority deploying the AI model open to legal challenges.
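If it helps to see the tiering as a structure rather than prose, here is a purely illustrative sketch in Python. The tier names and example use cases are the ones described above, but the mapping is mine, not the Act’s, and the legal text is still in flux.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-tier, risk-based classification."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict risk-assessment, transparency and logging duties"
    LIMITED = "lighter transparency duties"
    MINIMAL = "largely left alone"

# Hypothetical mapping: these examples are the ones named in this article,
# not quotations from the legal text, which is still being negotiated.
EXAMPLE_USES = {
    "AI-based social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "emotion recognition at work or in education": RiskTier.UNACCEPTABLE,
    "predictive policing targeted at individuals": RiskTier.UNACCEPTABLE,
    "CV-sorting in recruitment": RiskTier.HIGH,
    "credit-scoring for loans": RiskTier.HIGH,
    "migration and border control management": RiskTier.HIGH,
}

def headline_obligation(use_case: str) -> str:
    """Return the tier and its headline consequence for a given use case."""
    tier = EXAMPLE_USES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} ({tier.value})"

for use in EXAMPLE_USES:
    print(headline_obligation(use))
```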

Another challenge that emerged during the trilogues was whether the onus of legal and ethical use of AI should fall on the developers of the most powerful foundation models (e.g. OpenAI’s GPT-4). The original text set out horizontal rules for all foundation models, but during the trilogues France, Germany and Italy led the charge in favour of self-regulation by foundation model developers, backed fiercely by local French and German AI startups who fear the AI Act could set them behind their US and Chinese counterparts. The most likely compromise in the final deal is that the harshest rules will apply only to models deemed to carry ‘systemic risk’, leaving smaller European AI companies exempt from those obligations. That outcome will spark anger in Washington: the only companies whose models can really be said to carry systemic risk are *American*.

The AI Act is not standalone legislation, though. Separate algorithmic transparency and accountability requirements exist in the Digital Services Act (roughly equivalent to the UK’s Online Safety Act) for very large online platforms and search engines, while the Digital Markets Act lays down rules on self-preferencing practices and the interoperability of services for ‘gatekeeper’ platforms. There is also the tricky issue of liability (who is to blame when something goes terribly wrong?). The 2022 AI Liability Directive provides more legal clarity and protection for those who want to bring claims for harms caused by AI. The key sticking point here is that courts must apply a presumption of causality, i.e. presume a causal link between the developer’s fault and the harm produced by the AI system, unless proved otherwise. This directive has been criticised on the basis that, currently, even the machine learning scientists who develop AI models aren’t sure how the most advanced ‘frontier models’ work, so it will be interesting to see how it plays out when someone inevitably faces the very palpable human consequences of automated decision-making gone awry.

We’ll find out by Wednesday what the AI Act deal will actually contain, if there is a final agreement on the text at all. But even if, to the Spanish presidency’s relief, a deal passes, the Act’s requirements won’t apply until late 2025 at the earliest, so we still have a few years before we know whether this has been a successful effort to create a more ‘human-centric’ AI environment or an EU attempt at maintaining its dominance as the world’s blueprint techno-regulator.
