Navigating the EU AI Act: A Concise Guide for Businesses

Alexandra Khomenok · Tovie AI · Apr 24, 2024

The European Union’s AI Act, approved in March 2024, is a crucial regulation that demands businesses’ attention. With the expanding use of artificial intelligence across sectors, the Act aims to protect EU citizens from potential privacy and misinformation risks while establishing new standards for AI applications. This article will explore what companies need to know about the new regulations.

Why is it important?

In essence, it is akin to the General Data Protection Regulation (GDPR) but specifically for AI.

This legislation establishes rules for companies that create or use AI within the European Union, and failure to comply carries severe consequences.

Fines for non-compliance range from 7.5 million euros or 1% of global annual turnover up to 35 million euros or 7% of global annual turnover, depending on the violation and the size of the company. Penalties apply to various violations, such as using AI manipulatively or leveraging biometric data to uncover private information.
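To put the upper bound in perspective, here is a purely illustrative calculation: a company with a global annual turnover of 2 billion euros that commits one of the most serious violations could face a fine of up to 7% of that turnover, i.e. up to 140 million euros.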

Any company operating in the EU needs to ensure compliance with the new AI Act. Ignoring it is not an option!

What is the EU AI Act in a nutshell?

The European Union AI Act is the world’s first comprehensive law on artificial intelligence, designed to address the risks the technology creates. It focuses on providing clear guidelines for developers and users while safeguarding fundamental rights such as privacy and non-discrimination.

Under the Act, AI systems must meet several requirements:

1. Transparency: AI systems, such as chatbots, must inform users that they are interacting with AI.

2. Labelling: AI-generated content, such as deepfakes, must be clearly marked.

3. Impact assessment: companies deploying AI, especially in crucial areas like banking and insurance, must assess how their AI systems affect people’s fundamental rights.

The European AI Act also introduces a risk-based approach: the level of regulation depends on the potential risks posed by an AI system and its applications, so the higher the risk, the stricter the obligations.

Transparency requirements for AI: What you need to know

Under the EU AI Act, AI systems, including advanced ones like ChatGPT, must meet these transparency rules:

1. Clearly inform users when they interact with AI-generated content.

2. Ensure AI models are designed to block the creation of illegal content.

3. Publish summaries of any copyrighted data used for training.

4. Report serious incidents involving top-tier AI models (e.g., GPT-4) to the European Commission without delay.

In summary, AI that produces synthetic content must be marked as such. Systems that recognise emotions or physical traits must inform users and respect their privacy. Created or altered content, such as deepfakes, must be openly disclosed, except where legal exemptions or clearly creative purposes apply.
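As a rough illustration of what these transparency obligations might look like at the application level, here is a minimal Python sketch. The function names, disclosure wording, and label text are hypothetical examples chosen for this article, not text prescribed by the Act.

```python
# Minimal sketch: adding an AI disclosure and a synthetic-content label
# to chatbot output. Names and wording are illustrative assumptions,
# not requirements quoted from the AI Act.

AI_DISCLOSURE = "You are chatting with an AI assistant."
SYNTHETIC_LABEL = "[AI-generated content]"


def disclose_ai_interaction(reply: str) -> str:
    """Prepend a notice so users know they are interacting with AI."""
    return f"{AI_DISCLOSURE}\n\n{reply}"


def label_synthetic_content(content: str) -> str:
    """Mark generated text or media descriptions as AI-generated."""
    return f"{SYNTHETIC_LABEL} {content}"


if __name__ == "__main__":
    print(disclose_ai_interaction("Here is a summary of your policy options..."))
    print(label_synthetic_content("An image of a city skyline at sunset."))
```

In practice, how such disclosures are surfaced (chat banners, watermarks, metadata) will depend on the product and on the guidance that accompanies the Act.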

The European Commission guide for providers of high-risk AI systems

What must businesses do to comply with the EU AI Act?

To comply with the EU AI Act, businesses must follow several essential steps.

Firstly, note that the Act applies to anyone who develops or uses AI systems, whether they are based in the EU or simply make their AI available on the European market. Businesses outside the EU are also covered if their AI is used within the Union.

Start by using the Compliance Checker to determine if the new rules cover your AI system. This tool helps you understand what you need to do to comply.

To learn more about how businesses can assess compliance with the Act and when it will come into full force, visit our blog.
