Centering fundamental rights in global AI governance and standardisation processes
Last Friday the Council of the EU gave the final approval required for the EU AI Act to be published, and it is expected to enter into force in April 2024. The Act's prohibitions will apply six months after entry into force, and the rules on general-purpose AI will apply twelve months after entry into force.
I am at the ETSI Artificial Intelligence Conference this week, which focuses on standards that apply to AI. Yesterday's speakers from several different jurisdictions discussed how their legislation and regulations (mostly soft law) align, or not, with the EU AI Act, and how these instruments might serve as the foundation for harmonised national and global standards.
Antoine-Alexandre Andre, from DG CNECT, explained that despite the latest changes to the EU AI Act, the main logic of the Act has been maintained. The approach is based on product safety legislation, with AI systems framed as products, plus a fundamental rights layer. That layer is a genuine novelty: product safety legislation had never previously addressed fundamental rights specifically. The Act is risk-based and concentrates on AI systems that may pose specific risks to health, safety, and fundamental rights.
Fundamental rights in the EU legal system flow from international human rights law. The EU approach of embedding fundamental rights in recent digital legislation is still uncommon around the world, but it could serve as a model for global harmonisation of AI regulation and standardisation. In this standardisation process, it is important that the terms and definitions used actually reflect international human rights definitions and norms, which have been agreed in treaties widely ratified worldwide.
Standards are one way to meet the requirements of the EU AI Act, and they will likely be a crucial tool for companies to evidence compliance. The first EU Commission standardisation request includes obligations related to fundamental rights and data protection, as well as energy consumption. In the meantime, companies can already demonstrate their commitment to the Act through ‘company pledges’ under the AI Pact.
The Commission has been actively participating in AI fora at the European and international levels. At the ETSI Conference we heard from speakers from China, India, the UK, the US, and Japan. I am interested in how these different jurisdictions will approach harmonisation of AI regulation, particularly when it comes to fundamental rights.
Betty Xu from SESEC reported on China's perspective on AI and AI regulation, which, as expected, differs significantly from the European approach under the EU AI Act. Several existing laws and regulations in China already apply to AI, including the New Generation of AI Ethics Norms introduced in 2021 and China's Global AI Governance Initiative, which also prioritises ‘ethics first’. Details of these ethical norms were not presented, but it would be worth examining how they map against fundamental rights.
Dinesh Chand Sharma from the EU Project SESEI explained that India is expected to adopt a unified Digital India Act heavily influenced by the EU AI Act. Although Sharma did not mention fundamental rights in his presentation, they are enshrined in India's Constitution, so following the EU AI Act's approach would be coherent with India's existing legal system.
The speaker from the UK's DSIT focused on the cybersecurity angle, i.e. risks to the technology itself, rather than risks to the human users of AI. In the US, AI policy requires consideration of impacts on individuals, groups, communities, organisations, and society, beyond the expected end users. However, the US Bill of Rights does not map coherently onto EU fundamental rights: it covers only civil rights, not the full spectrum of human rights protected under international law.
Finally, Francois Ortolan from NEC Labs Europe explained the Japanese approach to AI governance, which centres on social principles, dignity, and the creation of ‘human centric’ AI. It is not clear what ‘human centric’ or ‘dignity’ mean in the context of the Japanese legal system; as Human Rights Watch has observed, Japan has no laws prohibiting discrimination and no national human rights institution.