AI Governance
Does your AI-powered application violate the EU AI Act?
AI now shapes many fields, from healthcare to automotive. However, its rapid development has raised ethical and legal concerns, prompting the European Union (EU) to take steps to regulate it. Opinions within the tech industry vary: some support the regulation, while others worry it may slow innovation. This article explores the EU Artificial Intelligence Act (AI Act) and the critical factors to consider when developing and launching AI applications in the market.
The motivation
Digital transformation is crucial for future economic development. The EU has set digital transformation targets for 2030 in four key areas: skills, infrastructure, businesses, and public services [1]. The initiative is aligned with core EU values and fundamental rights, ensuring that digital transformation is conducted with a focus on security, safety, sustainability, and people's rights.
AI will significantly influence this digital transformation, potentially reshaping industries, redefining business processes, and transforming economic and societal landscapes. However, several challenges must be addressed, such as ethical considerations, data privacy, and workforce reskilling.
The EU AI Act in a nutshell
The EU has proposed the AI Act, which aims to harmonize rules for the development, use, and adoption of AI while addressing the risks the technology poses. It introduces a risk classification system and is a significant step towards addressing AI's societal and ethical implications. The finalized text is expected to be released in early 2024, followed by a two- to three-year implementation period.
The EU AI Act provides clear criteria to distinguish AI systems from classical software systems. It also defines the AI lifecycle and general-purpose AI systems.
The EU AI Act also clearly defines the different personas involved along the AI value chain, such as providers, deployers, importers, and distributors.
The EU AI Act takes a risk-based approach to evaluating and mitigating the impact of AI systems on fundamental rights and user safety, with distinct requirements for each tier. Four categories of risk are identified for AI practices: unacceptable risk, high risk, limited risk, and minimal risk, as illustrated in the sketch below.
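These tiers can be modeled explicitly in governance tooling. The following is a minimal, illustrative Python sketch, not something prescribed by the Act: the tier names follow the regulation, but the example catalogue entries and the `classify` helper are assumptions made for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers proposed by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # e.g. AI used in hiring or critical infrastructure
    LIMITED = "limited"             # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"             # e.g. spam filters

# Hypothetical internal catalogue mapping use cases to risk tiers.
USE_CASE_TIERS = {
    "social-scoring-of-citizens": RiskTier.UNACCEPTABLE,
    "cv-screening-for-hiring": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; unknown cases default to HIGH pending review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier is one conservative choice; an organization might equally route them to a manual review queue.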
Notably, the Act also defines general-purpose AI (GPAI) and foundation models:
- A 'foundation model' is an AI model trained on broad and diverse data, designed to yield general outputs, and adaptable to a wide range of specific tasks.
- 'General-purpose AI system' refers to an AI system that can be used in various applications beyond its original design.
The EU AI Act has proposed specific regulations for GPAI and foundation models to ensure transparency throughout the value chain. For models that could pose systemic risks, additional binding obligations will be imposed to manage risks, monitor serious incidents, and conduct model evaluation and adversarial testing.
Companies that fail to comply with the regulation will be penalized. Fines for violations involving prohibited AI practices can reach €35 million or 7% of global annual turnover (whichever is higher), penalties for breaches of other obligations can reach €15 million or 3% of global annual turnover, and supplying incorrect information can be fined €7.5 million or 1.5% [5].
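To make the "whichever is higher" rule concrete, here is a small illustrative calculation; the tier amounts come from the figures above, while the function itself and the sample turnover are hypothetical.

```python
def max_penalty(global_turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the higher of the fixed cap and the percentage of global annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * global_turnover_eur)

# Example: a company with €2 billion global annual turnover.
turnover = 2_000_000_000

prohibited_practices = max_penalty(turnover, 35_000_000, 0.07)      # €140,000,000
other_obligations = max_penalty(turnover, 15_000_000, 0.03)         # €60,000,000
incorrect_information = max_penalty(turnover, 7_500_000, 0.015)     # €30,000,000
```

For a company of this size, the turnover-based percentage dominates in every tier, which is exactly the point of the dual cap: the exposure scales with the size of the business.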
Future-proofing to ensure compliance
New standards are being developed to ensure that AI systems are secure, safe, private, fair, transparent, and built on high-quality data throughout their lifecycle. ISO/IEC 42001 is one such standard. It is designed to integrate with existing ISO standards such as ISO/IEC 27001 for information security, ISO/IEC 27701 for privacy information management, and ISO 9001 for quality management. Organizations need a well-planned AI governance strategy to navigate this complex compliance landscape. Here are some key areas that organizations should consider to ensure effective AI governance:
Building AI systems in a large organization can be made more efficient by following a framework that adheres to ISO standards aligned with the EU AI Act. Such a framework should incorporate an AI center of excellence (CoE) that brings together data governance, legal, enterprise architecture, platform, and data science experts. The CoE should have a comprehensive understanding of the use cases and participate in the decision-making process, thereby simplifying it.
If an application is considered high-risk, it must pass the required conformity assessment and meet the regulatory requirements before being released to the market. Applications that fall into the "Unacceptable" category should not be pursued at all. As mentioned earlier, disregarding these requirements violates the EU AI Act and could result in severe penalties. This is where an AI governance framework can minimize the risk, for example through a release gate like the one sketched below.
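The sketch below is a simplified, hypothetical example of such a gate, not a prescribed mechanism: it blocks unacceptable-risk use cases outright and requires a completed conformity assessment before a high-risk system can be released. The function and field names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    risk_tier: str                    # "unacceptable", "high", "limited", or "minimal"
    conformity_assessment_done: bool  # relevant for high-risk systems

def release_gate(use_case: UseCase) -> bool:
    """Decide whether a use case may be released to the market."""
    if use_case.risk_tier == "unacceptable":
        # Prohibited practices must not be pursued at all.
        return False
    if use_case.risk_tier == "high":
        # High-risk systems need a completed conformity assessment before release.
        return use_case.conformity_assessment_done
    # Limited- and minimal-risk systems may proceed, subject to transparency duties.
    return True

# Example: a high-risk hiring tool without a completed assessment is blocked.
print(release_gate(UseCase("cv-screening", "high", False)))  # False
```

Embedding such a check in the CoE's intake or release process gives the organization an auditable record of why each AI use case was approved, deferred, or rejected.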
References
[1] "Path to the Digital Decade": the EU's plan to achieve a digital Europe by 2030. (2023, November 24). European Council. https://www.consilium.europa.eu/en/infographics/digital-decade/
[2] Europe's digital decade: 2030 targets | European Commission. (n.d.). European Commission. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/europes-digital-decade-digital-targets-2030_en
[3] Your life online: How is the EU making it easier and safer for you? (n.d.). https://www.consilium.europa.eu/en/your-online-life-and-the-eu/
[4] Sathe, M., & Ruloff, K. (2023, July 28). The EU AI Act: What it means for your business. EY — Switzerland. https://www.ey.com/en_ch/forensic-integrity-services/the-eu-ai-act-what-it-means-for-your-business
[5] White & Case LLP. (2023, December 14). Dawn of the EU's AI Act: political agreement reached on the world's first comprehensive horizontal AI regulation. https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai