The AI Act Unfolded: A 7-Minute Briefing on Why It Matters to Your Business
When a new technology is invented, a whole new class of responsibilities is unlocked. We did not need the right to be forgotten until computers could remember us forever; that is when the GDPR became necessary. The same applies to artificial intelligence.
As it becomes more and more integrated into business operations and our everyday lives, AI has expanded into domains that touch fundamental rights, such as healthcare, transportation, and finance.
The spotlight is now on lawmakers to regulate a field where responsible, fair, and transparent design and use of AI systems must be guaranteed across different levels of society.
The European Union’s AI Act (AIA) is a crucial step in this direction, aiming to be the first comprehensive regulatory framework for AI in the world. The AIA, however, isn’t just about regulating artificial intelligence within the EU’s own borders: it is about setting a global standard for responsible AI. Just as the GDPR became a global benchmark for personal data protection, EU policymakers see a race to shape the future of AI regulation, and the AIA is poised to be a game-changer in this pursuit.
The Act is set to have significant implications for businesses, since it not only affects industry actors that develop and deploy AI, but also those who use AI systems.
In practice, this means the vast majority of companies will be affected to some degree and will need to adapt to this new business era.
So as the European Parliament prepares to approve the draft on May 11th, now is the time to begin understanding the Act’s requirements and companies’ obligations under it. This will help organizations not only comply with regulations, but also reduce the reputational and financial risks associated with AI use, and ensure that their AI systems are used in a fair and ethical manner.
In this article, we’ll take a look at the EU’s proposed AI regulation and identify some key implications for businesses.
The AI rulebook both consumer advocates and industry actors have been waiting for
The EU AI Act aims to regulate the development, deployment and use of AI systems in the European Union with a double-fold purpose:
- to protect citizens’ fundamental rights and freedoms, and
- to guide organizations in the development and deployment of a trustworthy, safe and ethical AI.
Unlike many other regulatory packages whose main goal is to protect consumers (such as the GDPR), the AIA is legislation that many businesses and AI actors have actively lobbied for in order to limit the risks of this disruptive technology. AI is still uncharted waters, both for companies, which do not have a full view of the potential undesired outcomes of implementing it, and for individuals, who would benefit from increased trust and transparency.
So what is included in the AI Act?
Although it is challenging to summarize the whole proposal without leaving out some applicable information, there are 4 key elements of the AI Act that need to be highlighted:
- An AI classification system based on risk: this classification is determined by the level of risk an AI technology could pose to the health, safety, or fundamental rights of a person. Systems identified as posing unacceptable risk, such as government social scoring or real-time biometric identification in public spaces, will be almost entirely prohibited. See exhibit 1 below.
- A risk-management framework for the classification: the Act mainly focuses on AI systems considered high-risk, such as autonomous vehicles, recruitment processes, or medical diagnosis. For this category, organizations will need to comply with requirements such as thorough testing, proper documentation of data quality, and an accountability framework that outlines human oversight. At the other end of the spectrum, AI systems with limited or minimal risk (like spam filters or video games) may be used subject to, at most, transparency obligations.
- Measures to support innovation and competition: certain mechanisms have been defined to fuel AI innovation and reduce the burden on smaller players, with the aim of building a more competitive AI market within the EU.
- Generative AI specific considerations: the Act also addresses specific considerations for generative AI tools, such as ChatGPT, DALL-E, or Midjourney, which can create new text, music, images, and other types of content from simple prompts.
The latter is a late addition and one of the most debated issues in the proposal, driven by the fast-paced technological breakthroughs of this past year and concerns about potential misuse for malicious purposes such as identity theft, misinformation, or financial fraud.
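The four risk tiers and their headline obligations can be sketched as a simple lookup. This is an illustrative summary only: the tier names are from the proposal, but the example systems and obligation wording below are paraphrased, not an official taxonomy from the Act’s annexes.

```python
# Illustrative mapping of the AI Act's risk tiers to headline obligations.
# Examples and obligation text are paraphrased summaries, not legal language.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["government social scoring"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["recruitment screening", "medical diagnosis"],
        "obligation": "testing, documentation, human oversight",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency",
    },
    "minimal": {
        "examples": ["spam filters", "video games"],
        "obligation": "none",
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]
```

For instance, `obligation_for("unacceptable")` returns `"prohibited"`, while a minimal-risk system carries no mandatory requirements under the proposal.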
Will your organization be impacted?
The impact of the AI Act will be felt worldwide, as it will be applicable to organizations that provide or use AI systems in the EU, as well as providers or users of AI systems located in a third country, if the output produced by those AI systems is used in the EU.
This applies to the use of AI both in businesses and in the public sector.
The direct implications on organizations will depend on the risk category of the AI systems being used or provided.
Based on that classification, organizations will need to comply with different requirements related to transparency, accountability, and risk management. This may include measures such as conducting risk assessments, ensuring the traceability of AI systems, and providing clear and accurate information to users. See exhibit 2 below.
Specifically, companies deploying generative AI tools will be heavily affected, since they will also have to disclose any copyrighted material used to develop their systems, and provide documentation justifying any non-mitigable risks in their AI models and the reasons why those risks were not addressed.
Non-compliance with the AIA may result in significant fines for organizations, of up to 6% of the company’s global annual turnover or €30 million, whichever is higher.
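The “whichever is higher” rule is easy to misread, so here is the arithmetic as a short sketch, assuming the 6%/€30 million ceiling from the draft text:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of an AIA fine under the draft: 6% of global annual
    turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * global_turnover_eur, 30_000_000.0)

# A company with EUR 1 billion in turnover faces a ceiling of EUR 60 million,
# while one with EUR 100 million in turnover still faces the EUR 30 million floor.
```

In other words, the €30 million figure acts as a floor on the maximum penalty, so smaller companies are not shielded by a low turnover.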
For further information on impact and requirements, a follow-up article will be posted next week covering key actions for companies that want to stay ahead of the game and ensure their organization is well prepared for the future of AI.
Countdown to Compliance: When is the AI Act expected to take effect?
The long-awaited EU AI Act has been a topic of debate since April 2021, but it is finally advancing: the European Parliament is set to vote on the compromise text on May 11, 2023, after which the proposal moves to the trilogue stage, where representatives of the Parliament, the Council, and the Commission negotiate the final text. With no possibility of alternative amendments, all groups will have to vote on the compromise.
Afterwards, lawmakers will continue to discuss the details of the Act later this year, and a standardization process will be implemented to bridge the gap between the law and its execution. Companies may not need to comply with the Act until 2025 or later.
Despite progress, challenges and uncertainties still exist for businesses in implementing the AI Act.
These challenges include the lack of clarity on provisions such as criteria for categorizing AI systems as “high-risk” and compliance requirements for organizations. Moreover, differences in implementation between EU member states could lead to inconsistencies and additional costs for businesses operating across borders.
Additionally, organizations already need to comply with existing laws and regulations applicable to AI usage, even if some of these laws are specific to certain industries and don’t explicitly mention AI. For instance, the EU’s GDPR grants individuals the right not to be subject to decisions based solely on automated processing unless, among other exceptions, they have given explicit consent.
Despite these challenges, it is essential for businesses to start preparing for the implementation of the AIA. By doing so, they can ensure that they are compliant with the new regulations and benefit from the potential advantages of AI regulation, such as increased consumer trust and a level playing field.
Preparing for the future
In conclusion, the EU AI Act is a crucial step in regulating the development, deployment, and use of AI systems, with the goal of protecting citizens’ fundamental rights and freedoms while guiding organizations toward trustworthy, safe, and ethical AI. The Act will have significant implications for businesses, as it applies to providers and users of AI systems both within and outside of the EU. Compliance requirements will vary based on the risk category of the AI system being used, but all businesses need to be aware of existing laws and regulations applicable to AI usage, such as the GDPR.
Although the Act may not take effect until 2025 or later, it is important for organizations to start preparing early to reduce reputational and financial risks associated with AI use, and ensure their AI systems are being used ethically. Despite some challenges and uncertainties, businesses can benefit from the potential advantages of AI regulation, such as increased consumer trust and a level playing field.
The EU AI Act sets a global standard for responsible AI and is poised to be a game-changer in the pursuit of fair and transparent AI.
Stay tuned for the upcoming article The AI Act unfolded (II), which will lay down key actions for companies to undertake to comply with the AI Act. Don’t miss out on the essential information you need to ensure your business is prepared for the future of AI regulation!
Compliance with the EU AI Act may seem like a daunting task, but it doesn’t have to be.
Don’t let the complexity of the regulation hold your business back from leveraging the power of AI. With my expertise, you can create a successful AI strategy that aligns with your business objectives and meets the new regulations.
Please feel free to reach out here for an initial, no-commitment consultation. We can discuss your specific requirements and explore how my expertise can assist you in staying ahead in today’s competitive landscape.