Tame the beast?! The European Union AI Act. — Part I
- AI Act: EU Parliament wants to tighten AI law — DER SPIEGEL, June 14, 2023
- Europe is leading the race to regulate AI. Here’s what you need to know — CNN Business, June 15, 2023
- EU: AI Act at risk as European Parliament may legitimize abusive technologies — AMNESTY INTERNATIONAL, June 13, 2023
- E.U. Takes Major Step Toward Regulating A.I. — The New York Times, June 14, 2023
In recent days there have been political developments in the European Union that might have a lasting impact on the use of artificial intelligence in Europe.
First of all, what is the hotly debated European AI Act? … In one sentence: It is the first major law worldwide (even though it is still in the draft stage) that directly aims to regulate AI. … It builds on top of other important EU laws that already affect AI, such as the General Data Protection Regulation (GDPR), which has been applicable since May 25, 2018.
How does it affect people?
Whereas large parts of the GDPR focus on rules for the use of personal data and on the requirement that controllers and processors designate a data protection officer, the EU AI Act is, as of now and according to the Future of Life Institute, a proposed law that
- assigns AI applications to three risk categories,
- aims to become a global standard for the regulation of AI,
- aims to ensure that AI has a positive rather than a negative effect on people’s lives,
- has several loopholes and exceptions, and is inflexible.
Just yesterday, the European Parliament adopted its position for the upcoming negotiations with the EU member states. It proposes the following risk levels for AI:
- limited risk,
- high risk (this might be discussed the most!), and
- unacceptable risk,
- plus an extra category: generative AI.
The Parliament’s main goal is that AI systems used in the European Union are
- safe,
- transparent,
- traceable,
- non-discriminatory, and
- environmentally friendly.
Further, it wants AI systems to be overseen by people in order to prevent harmful outcomes, and it also aims to establish a technology-neutral, uniform definition of AI that could be applied to future AI systems. For now, the Parliament offers the following explainer to clarify the definition of AI, but it does not give a legally binding definition: https://www.europarl.europa.eu/news/en/headlines/society/20200827STO85804/what-is-artificial-intelligence-and-how-is-it-used.
In my opinion, besides the definition, the high-risk category and the rules for generative AI will be most important (if the law applies from 2026).
The first (high-risk AI) includes
- AI systems that are used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices and lifts.
- AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons,
- Management and operation of critical infrastructure,
- Education and vocational training,
- Employment, worker management and access to self-employment,
- Access to and enjoyment of essential private services and public services and benefits,
- Law enforcement,
- Migration, asylum and border control management,
- Assistance in legal interpretation and application of the law.
The Parliament states on its news page: “All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.”
For the latter (generative AI), the Parliament calls for the following requirements:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
In my opinion, especially the last point is useful and fosters research. I also support the plans in the first two points. However, controlling generative AI might be infeasible. Open-source models (which can be cloned from Hugging Face) can easily be adapted — to produce harmful content or to violate data-protection and copyright rules. A few days ago, I wrote an article in which I tested how easy it is to produce your own images with an open-source Stable Diffusion model without incurring exploding costs:
How does it affect companies?
- Prohibited AI practices could lead to fines. The 2021 version states: “The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher”
- The fines under the AI Act are more severe than those under the General Data Protection Regulation (GDPR), where in case of an infringement “the possibilities include a reprimand, a temporary or definitive ban on processing and a fine of up to €20 million or 4% of the business’s total annual worldwide turnover.”
- The Act seeks to strike a balance between protecting citizens and promoting AI innovation. Start-ups and small-scale providers may receive some leniency in penalties.
- Regulatory sandboxes will be established for testing AI systems before deployment.
- The legislation will expand in scope as AI technology continues to advance.
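The “whichever is higher” rule from the quoted 2021 draft can be illustrated with a small sketch. The threshold figures (EUR 30 million, 6 %) come from the draft text; the function name and the turnover values are made-up examples:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices under the
    2021 draft: EUR 30 million or 6 % of total worldwide annual
    turnover for the preceding financial year, whichever is higher."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# A company with EUR 1 billion turnover: 6 % (EUR 60 million) exceeds the flat cap.
print(max_fine_eur(1_000_000_000))  # 60000000.0
# A small company with EUR 10 million turnover: the flat EUR 30 million applies.
print(max_fine_eur(10_000_000))  # 30000000
```

So for any company with more than EUR 500 million in annual turnover, the percentage-based cap is the binding one.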
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Thanks for reading! Please feel free to contact me or leave a comment if you have questions, ideas or hints. My summary is not comprehensive, and I am happy to learn about additional aspects.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
In a second article, Tame the beast?! The European Union AI Act. — Part II, I plan to cover the following sections:
How does it affect research?
What’s meant by regulatory sandboxes.
Pros/Cons of the EU Act.