How the EU’s Artificial Intelligence Act Advances Equity in AI (or Doesn’t)


Dr. Michael Veale, associate professor at University College London, shared his concerns and hopes for the proposed Artificial Intelligence Act (AIA), which appears to be headed toward enactment in the European Union. On April 6, 2022, Veale was welcomed as a virtual speaker for the AI, Equity, and Law Series hosted by Santa Clara Law and organized by Professor Colleen Chien. Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics, moderated the talk. A recording can be found here.

In his presentation, Veale delved into the massive regulatory framework that would be established by the AIA, which was unveiled to the world on April 21, 2021, when the European Commission published its proposal to regulate “high risk” AI.

The Act is likely to become law, Veale said, at which point there will be a two-year transition period as regulators set up standards to enforce it.

But upon closer inspection, as Veale explained, the Act falls short of addressing equity issues and guaranteeing that AI systems do not trample on basic rights and liberties.

“The AIA has a flavor of fundamental rights, but it is packed into a product standards lens,” Veale said, touching on the crux of the law’s limited scope.

“[It] is ultimately about the free movement of goods across borders.”

Veale’s work has previously examined how the law applies to machine learning techniques in practice, how civil servants grapple with issues of algorithmic discrimination, and the limits of data rights. Dr. Veale’s thoughts on the Act are also available in a Computer Law Review International article titled Demystifying the Draft EU Artificial Intelligence Act.

How the EU Artificial Intelligence Act (AIA) Works

The EU Artificial Intelligence Act categorizes the use of AI as falling into one of four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. It is a simple approach: the higher the risk category, the more regulated the AI. While this sounds powerful on paper, there are serious limitations to the effectiveness of each category in practice, according to Veale.

For example, the AIA fails to expand the scope of AI systems defined as posing “unacceptable risk” beyond what existing regulations already cover. Three types of AI are deemed “unacceptable,” and thus banned, under the Act: manipulation, social scoring, and facial recognition used by law enforcement. However, the statutory language limits these prohibitions to very narrow circumstances. Manipulation, for example, is limited to AI systems that deploy subliminal techniques beyond a person’s consciousness or exploit the vulnerabilities of a specific group of persons in order to materially distort their behavior in a manner that causes physical or psychological harm (Title II, Art. 5). Veale explained that this highly specific language prohibits little more than what is already banned under the Unfair Commercial Practices Directive.

The next category, “high risk,” faces similar limitations, albeit for different reasons. It covers AI used in biometric identification, employment recruitment, credit scoring, law enforcement, migration, and education.

Equity through Certification?

The framework for “high risk” AI replicates the New Legislative Framework, first adopted 40 years ago as the New Approach: a system for certifying goods and services as being in conformity with EU law. Under this framework, private standardization bodies are mandated to set harmonized standards and to grant certification to conforming products, which are commonly recognized by a “CE” mark. The goal is to promote the free movement of goods.

Veale identified several concerns with the commercial-certification approach to AI regulation. First, the certification bodies do not perform actual product testing before granting certification. Second, because the standards are set by the private sector, Veale is concerned about the legitimacy and accountability of these bodies — they are not government agencies, and the EU does not have much control over them.

“They regulate users by stretching product certification legislation and that’s not going to cut it, I think. We can’t bring all those issues of equity into that old lens, and that’s where the Act is fatally flawed,” Veale said.

Third, these bodies have historically been confined to regulating specific industries, whereas AI is a tool that can be used across a wide range of industries. As a result, harmonizing AI standards will be an enormous challenge, and to do their job effectively these bodies will require immense resources, which Veale is skeptical they will be able to obtain. Fourth, Veale worries that this certification framework will remove protections established by national governments. Fifth, general-purpose AI models face no regulatory obligations under the AIA; the regulation instead targets the smaller companies that use those general-purpose models for specific ends. Veale believes this will entrench large, wealthy companies’ power over AI markets. All of these concerns amount to one big question: how effective will the AIA be at regulating high risk AI in the European Union?

The limited risk category mainly consists of transparency obligations for AI used in bots, deepfake technologies, and emotion recognition. While these obligations seem straightforward at first glance, on closer inspection they do not make much sense. For example, the bot-disclosure obligation falls on providers of the AI rather than on users, who are often the ones purveying the service to consumers. Once a bot provider hands off its bot to a user, the bot is typically out of the provider’s control. Logically, users should be required to make the disclosure once they put the bot in front of the consumers interfacing with it. Veale argued that the obligations set out in the AIA for limited risk AI are simply not realistic and will not be effective in implementation.

Finally, the minimal risk category carries the lowest level of regulation, imposing only voluntary codes of conduct. A voluntary approach is the most cost-effective way to regulate this AI, given that the category could apply to almost any use of AI. The drawback is the lack of accountability. But the AIA’s primary concern is higher-risk AI, and it would not be feasible to allocate more resources to regulating minimal risk AI.

Big Data Inequity: A Bigger Problem

Lastly, Veale explained that the AIA does not account for the power imbalance between Big Tech companies and small entrepreneurs. A small startup simply cannot afford the computing resources to compete with the general-purpose machine learning power of “huge platformized AI,” the professor said.

“These companies know they have a natural monopoly when they build these huge, huge systems.”

Veale said this was a major oversight because “we can’t talk very well about regulating systems in relation to individuals” unless the dominant market power of Big Tech is addressed. “One thing the platforms are really good at is constructing an ecosystem where no one is responsible.”

Conclusion

In conclusion, Veale found that by trying to regulate all four risk areas at once, the AIA fails to regulate any one of them comprehensively or strongly. The narrow statutory language prevents the Act from regulating AI more broadly, and the reliance on private bodies to create standards raises questions about the legitimacy of those standards and the effectiveness of enforcement. Had the regulation focused on “high risk” AI alone, it might have succeeded in forming a model for the world on how to contain AI-related risks. However, as Veale explained, the real purpose of the Act was to harmonize standards in order to promote the free movement of goods across Europe. As a result, the AIA framework fails to break substantive new ground.

Where the AIA does try to take equity into consideration is in its concern for users, its provisions exempting smaller players, and its expansion of the scope of product safety. In Veale’s opinion, however, this is insufficient, as many equity issues fall outside the scope of product safety.

A recording of Veale’s talk can be found here.

The AI, Equity, and Law Speaker and Blog Series covers developments in AI regulation at the local, state, national, and international levels and is curated by Professor Colleen Chien. Blog summaries in the series are written by Santa Clara Law students in Professor Chien’s AI class and include links to recordings of the public talks. For updates, follow @colleen_chien or @iethics.
