How the EU’s Artificial Intelligence Act Could Impact Your Business

Nigel Douglas
3 min read · Apr 8, 2024

On March 13, 2024, the European Parliament marked a significant milestone by adopting the Artificial Intelligence Act (AI Act), setting a precedent with the world’s first extensive horizontal legal framework dedicated to AI.

The AI Act sets EU-wide rules on data quality, transparency, human oversight, and accountability, and its stringent requirements have significant extraterritorial reach, backed by fines of up to €35 million or 7% of global annual revenue, whichever is greater. This landmark legislation is poised to affect a vast array of companies engaged in the EU market. The official text adopted by the European Parliament is available on the Parliament's website.
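To make the "whichever is greater" mechanics concrete, here is a minimal sketch in Python; the revenue figure is an invented example, not data from any real company:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Ceiling for the most serious AI Act infringements: EUR 35 million
    or 7% of global annual revenue, whichever is greater."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with EUR 1 billion in global revenue (an invented figure) faces
# a cap of EUR 70 million, since 7% of revenue exceeds the EUR 35m floor.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```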

Originating from a proposal by the European Commission in April 2021, the AI Act underwent extensive negotiations that culminated in a political agreement in December 2023. With the European Parliament's approval, the AI Act is on the cusp of becoming enforceable, opening a crucial preparatory window for organizations to align with its provisions.

The AI Act targets a broad range of entities, including AI system providers, importers, distributors, and deployers, and takes a risk-based regulatory approach. It distinguishes AI applications by the level of risk they pose, from unacceptable and high-risk categories that demand stringent compliance to limited and minimal-risk applications with fewer restrictions. The EU's AI Act website offers an interactive tool, the EU AI Act Compliance Checker, to help determine whether your AI systems are subject to the newly introduced obligations.

Key to the AI Act's approach is the differentiation of AI systems by risk category, with outright prohibitions on AI practices deemed an unacceptable threat to fundamental rights. High-risk AI systems are subject to comprehensive requirements aimed at ensuring safety, accuracy, and cybersecurity. The Act also addresses the emergent field of generative AI, introducing categories for general-purpose AI models based on their risk and impact.
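To illustrate how a team might operationalize these tiers internally, here is a minimal Python sketch; the use-case labels and their tier assignments are illustrative assumptions, not legal classifications, and the Compliance Checker mentioned above remains the proper starting point:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers in the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g., social scoring)
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # few or no additional restrictions

# Hypothetical mapping from internal use-case labels to presumed tiers.
# These labels and assignments are illustrative, not legal determinations.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,        # employment is a high-risk area
    "customer_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default unknown use cases to HIGH so a human reviewer must
    explicitly downgrade them rather than silently under-classify."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("cv_screening", "customer_chatbot", "unmapped_tool"):
    print(f"{case}: {triage(case).value}")
```

Defaulting unmapped systems to the high-risk tier is a deliberate design choice: it forces an explicit human decision before any system escapes scrutiny.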

General-purpose AI systems are versatile platforms designed to perform a broad array of tasks across multiple domains, from coding to proofreading, often requiring minimal modification or fine-tuning for specific applications. Their commercial viability is on the rise, fueled by the expanding availability of computational resources and innovative approaches to leveraging them. However, regulatory frameworks are needed to ensure these systems do not inadvertently process sensitive business information in ways that violate existing data protection laws.
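As a sketch of the kind of safeguard that concern implies, the snippet below strips a few obvious identifier patterns from a prompt before it leaves the organization; the regexes are deliberately simplistic assumptions and no substitute for real data-loss-prevention tooling or a GDPR review:

```python
import re

# Illustrative patterns only; real deployments need far more robust detection
# (NER models, allow-lists, dedicated DLP tooling) and a legal review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with a typed placeholder before the prompt
    is sent to an external general-purpose model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Proofread: contact Anna at anna.schmidt@example.com or +49 30 1234567."
print(redact(prompt))
# Proofread: contact Anna at [EMAIL] or [PHONE].
```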

Thankfully, this pioneering legislation does not stand in isolation but operates in conjunction with existing EU laws on data protection and privacy, including the GDPR and the ePrivacy Directive. The AI Act's enactment represents a critical step toward a balanced framework that encourages innovation while protecting the fundamental rights of European citizens and fostering trust in AI technologies.

For organizations, particularly cybersecurity teams, adapting to the AI Act involves more than mere compliance; it’s about embracing a culture of transparency, responsibility, and continuous risk assessment. To navigate this new legal landscape effectively, organizations should consider conducting thorough audits of their AI systems, investing in AI literacy and ethical AI practices, and establishing robust governance frameworks to manage AI risks proactively. Engaging in dialogue with regulators, participating in industry consortiums, and adopting best practices for AI security and ethics will be instrumental in ensuring that organizations not only comply with the AI Act but also contribute to a trustworthy AI ecosystem.
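As a starting point for such audits, here is a hypothetical sketch of an AI system inventory record with a periodic review check; the field names and the 180-day cadence are assumptions to adapt to your own governance framework:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI system inventory; field names are
    assumptions to align with your own governance framework."""
    name: str
    purpose: str
    risk_tier: str                  # e.g., "high", "limited", "minimal"
    oversight_owner: str            # accountable human, per the oversight duty
    training_data_documented: bool  # data-quality/provenance evidence on file
    last_risk_review: date

    def review_overdue(self, today: date, max_days: int = 180) -> bool:
        """Flag systems whose periodic risk assessment has lapsed."""
        return (today - self.last_risk_review).days > max_days

inventory = [
    AISystemRecord("cv-screener", "shortlist job applicants", "high",
                   "hr-lead@example.com", True, date(2023, 9, 1)),
]
for record in inventory:
    if record.review_overdue(date(2024, 4, 8)):
        print(f"Risk review overdue: {record.name}")
```

Keeping this inventory under version control gives auditors and regulators a traceable history of how each system's risk posture evolved.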
