AI Law & Regulation: A Synopsis of the Global Perspective

Rosemary J Thomas, PhD
Published in Version 1 · Sep 25, 2023

The worldwide legal and regulatory implications of AI are the effects that different legal frameworks and policies have on the development, deployment, and use of artificial intelligence systems across the globe. AI regulation can pursue various goals, such as ensuring ethical, safe, and trustworthy AI, protecting human rights and privacy, promoting innovation and competitiveness, and fostering international cooperation and coordination. It can also pose various challenges, such as balancing the benefits and risks of AI, addressing the complexity and diversity of AI applications, adapting to the rapid and dynamic changes of AI technology, and harmonizing different national and regional approaches.

This article briefly outlines some of the law and regulation implications for AI in the United Kingdom, the European Union, the United States, and Asia.

The United Kingdom

The United Kingdom has no comprehensive AI legislation, but it has guidelines and initiatives from various entities. AI in the UK operates within a framework of legal regulations aimed at ensuring responsible and ethical use. The Equality Act 2010 prohibits discrimination based on protected characteristics, while the UK General Data Protection Regulation ensures the fair processing of personal data. Product safety laws maintain industry standards, and product-specific legislation covers electronic equipment, medical devices, and toys to guarantee safety and compliance. Consumer rights laws protect consumers in the marketplace. Beyond these, other relevant laws, such as the Human Rights Act 1998, the Public Sector Equality Duty, the Data Protection Act 2018, and sector-specific fairness requirements like those outlined in the Financial Conduct Authority handbook, contribute to a comprehensive legal foundation for governing AI and related technologies, addressing discrimination, data protection, and fairness concerns.

The UK Government has set out five cross-cutting principles that will underpin the UK’s AI regulatory approach:

1. Safety, Security, and Robustness: AI applications must be safe, secure, and robust, with managed risks throughout their lifecycle. Regulators should introduce measures to ensure AI systems’ security, assess and manage potential risks, and regularly test their functioning and security.

2. Appropriate Transparency and Explainability: AI innovators and enterprises must provide sufficient transparency and explainability about their AI systems’ decision-making processes and risks. Regulators may use product labelling and technical standards to gather necessary information and define the level of explainability required for specific AI technologies.

3. Fairness: AI should not discriminate unfairly against individuals or create unfair commercial outcomes, and it must uphold legal rights. Regulators may establish fairness standards that apply to AI systems within their jurisdiction, drawing from relevant laws and regulations such as the Equality Act 2010 and the Human Rights Act 1998.

4. Accountability and Governance: Regulatory measures should hold relevant actors in the AI lifecycle accountable for AI outcomes. Clear expectations for regulatory compliance should be set, and governance procedures may be used to encourage compliance. Decisions regarding responsibility allocation should involve experts, technicians, and lawyers.

5. Contestability and Redress: Users and stakeholders should have accessible routes to dispute any harm caused by AI. Regulators should clarify existing dispute resolution processes and guide regulated entities in ensuring affected parties can contest harmful AI outcomes through formal or informal channels.

These principles aim to provide a framework for responsible AI development, deployment, and regulation, emphasizing safety, fairness, transparency, accountability, and dispute resolution.

The UK will host the world’s first summit on artificial intelligence safety in November 2023, with the event set to take place at Bletchley Park, a historic location in the field of computer science. This global gathering will bring together international governments, leading AI companies, and research experts to discuss and reach a consensus on ensuring the safety and security of cutting-edge AI technology. The UK Prime Minister, Rishi Sunak, emphasised the importance of securing international cooperation to ensure the safe and responsible development of AI.

A follow-up blog post will address the outcomes of the summit.

European Union

The European Union is actively working on the EU AI Act, a significant piece of legislation aimed at defining AI, categorizing AI products by their risk level, and introducing corresponding regulations. The act also sets up the European Artificial Intelligence Board to oversee and enforce these regulations. European laws and regulations of this kind profoundly shape the development, deployment, and use of artificial intelligence systems throughout the EU. The AI Act’s overarching goals are to ensure the ethical, safe, and trustworthy use of AI, safeguard human rights and privacy, boost innovation and competitiveness, and promote international collaboration and coordination in the AI sector.

The AI Act carries several main implications. AI systems presenting ‘unacceptable’ risks will be banned, including systems that manipulate human behaviour, exploit vulnerabilities, or enable social scoring by governments. The Act also seeks to ban real-time facial recognition in public spaces. A wide range of ‘high-risk’ AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market, covering data quality, transparency, human oversight, accuracy, robustness, and security. AI systems presenting only ‘limited risk’, such as systems that generate or manipulate image, audio, or video content (e.g. generative AI like ChatGPT), would be subject to very light transparency obligations. The Act would impose fines of up to 6% of global turnover on companies that breach the rules.
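To make the tiering concrete, here is a minimal illustrative sketch of how the risk categories and the turnover-based fine cap described above might be modelled. This is not part of the Act itself; the class names, the example obligations, and the flat 6% rate applied below are simplifying assumptions for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative simplification of the AI Act's risk tiers (hypothetical naming)."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # allowed, but subject to strict requirements
    LIMITED = "limited"            # light transparency obligations

# Hypothetical mapping from tier to the obligations summarised above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["data quality", "transparency", "human oversight",
                    "accuracy", "robustness", "security"],
    RiskTier.LIMITED: ["disclose that content is AI-generated"],
}

def max_fine(global_turnover_eur: float, rate: float = 0.06) -> float:
    """Upper bound on a fine at the Act's cited 6% of global turnover."""
    return global_turnover_eur * rate

# Example: a company with EUR 2bn global turnover faces fines of up to EUR 120m.
print(f"Maximum fine: EUR {max_fine(2_000_000_000):,.0f}")
```

In practice the penalty regime is more nuanced, with caps that vary by the type of infringement, so a real compliance model would need tiered rates rather than a single constant.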

The AI Act is the first comprehensive set of regulations for the artificial intelligence industry in the world. It is expected to have a significant impact on the development and use of AI in Europe and beyond. However, it also faces some challenges and criticisms from various stakeholders, such as industry representatives, civil society groups, and academic experts. Some of the issues raised include the scope and definition of AI, the balance between innovation and regulation, the harmonisation and enforcement of the rules, and the global coordination and cooperation on AI governance.

The Act will become law once a mutually agreed version of the text is reached by the Council (the 27 EU Member States) and the European Parliament. Agreement is anticipated at some point in 2023, and the Act may be in effect as early as the first quarter of 2024.

United States

In the United States, Congress has passed, and continues to introduce, legislative measures aimed at overseeing specific facets of AI. In January 2021, the National AI Initiative Act (U.S. AI Act) was enacted into law. This act established the National AI Initiative, which serves to boost and coordinate AI-related research and development across the country. Some state and local governments have enacted laws and regulations covering specific aspects of AI, such as facial recognition or algorithmic accountability. The US also relies on existing regulatory bodies, along with guidelines and initiatives from various agencies, to address AI issues.

The Federal Trade Commission’s (FTC) efforts in law enforcement, research, and advisory documents underscore the importance of AI tools being characterized by transparency, explainability, fairness, empirical validity, and a commitment to accountability. The FTC argues that its expertise and the framework of existing laws can provide valuable insights into how businesses can effectively address consumer protection concerns associated with AI and algorithms. This is in line with the NIST AI Risk Management Framework’s (RMF) seven key characteristics that define trustworthiness in AI systems:

Transparency: Making information about the AI system available to individuals interacting with it at all stages of its life cycle and establishing organizational practices and governance to minimize potential harms.

Explainability: Enabling a deep understanding of how AI systems function and how they generate their outputs, placing them in the appropriate context.

Fairness: Promoting fairness, equity, and equality in AI systems by addressing and mitigating systemic, computational, statistical, and human-cognitive biases.

Empirical Validity: Continuously testing and monitoring AI systems to confirm that they perform as intended and deliver reliable results.

Safety: Ensuring that AI systems have real-time monitoring and safeguards in place to prevent physical or psychological harm, as well as threats to human life, health, or property.

Security: Implementing protocols to prevent, defend against, or respond to attacks on AI systems and ensuring they can withstand adverse events.

Privacy: Safeguarding human autonomy by protecting anonymity, confidentiality, and control over personal data.

In summary, these characteristics enhance the trustworthiness of AI systems and reduce the potential harm they may cause across various domains.
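As a rough illustration only, these seven characteristics could be encoded as a pre-deployment self-assessment checklist. The field names, the boolean scoring, and the gaps helper below are my own simplifications for illustration, not part of the RMF:

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessChecklist:
    """Illustrative checklist based on the seven characteristics above."""
    transparency: bool = False        # system information available across the life cycle
    explainability: bool = False      # outputs can be understood in context
    fairness: bool = False            # known biases addressed and mitigated
    empirical_validity: bool = False  # tested and monitored for reliable results
    safety: bool = False              # real-time monitoring and harm safeguards
    security: bool = False            # can withstand and respond to attacks
    privacy: bool = False             # anonymity, confidentiality, data control

    def gaps(self) -> list[str]:
        """Return the characteristics not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system that has addressed everything except fairness and privacy.
check = TrustworthinessChecklist(transparency=True, explainability=True,
                                 empirical_validity=True, safety=True, security=True)
print("Outstanding gaps:", check.gaps())  # ['fairness', 'privacy']
```

A boolean checklist is of course far coarser than real assurance work, but it shows how the seven characteristics could become an actionable gating step in a deployment pipeline.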

In June 2023, U.S. senators introduced two bipartisan AI bills in response to growing concerns about AI technology. One bill focuses on government transparency in AI interactions with people, requiring agencies to inform individuals when AI is used and establishing a process for appealing AI decisions. The second seeks to create an Office of Global Competition Analysis to maintain the U.S.’s competitive position in AI and other advanced technologies, aiming to prevent losing ground to global competitors such as China. These bills reflect lawmakers’ recognition of the need for new regulations and strategies to address AI’s impact and potential challenges.

Asia

There is no overarching AI legislation across Asia, but a few Asian countries have introduced measures to regulate AI development and applications. China has comprehensive AI legislation aligned with its Next Generation Artificial Intelligence Development Plan, released in 2017. China also uses new AI-specific regulatory bodies, alongside existing ones, to oversee the implementation of its AI laws, regulations, and guidelines.

Japan’s Ministry of Economy, Trade and Industry (METI), working alongside the Expert Group on the Implementation of AI Principles, has formulated a document titled “Guidelines for Governing the Implementation of AI Principles Version 1.1”. These guidelines summarise the practical steps for aligning with the Social Principles of Human-Centric AI, as determined by the Council for Integrated Innovation Strategy in March 2019. The seven social principles for AI are: human-centricity; education and literacy; privacy protection; ensuring security; fair competition; fairness, accountability, and transparency; and innovation.

AI jurisdiction and the laws that apply to companies operating globally are influenced by several factors, including data collection, storage, and processing; the company’s legal registration; AI usage and explainability; and collaborations. Some regulations may prohibit transfers of personal information across borders and/or require certain types of data to be stored within the country. A company that operates globally may therefore need to comply with the laws of multiple jurisdictions. The AI Act, the proposed European law on AI discussed above, assigns AI usages to risk categories grouping systems from banned to largely unregulated, and some legal regulations govern how explainable AI should be. Governments also frequently collaborate with significant corporations when discussing AI regulation and implementation.

These are some examples of how different countries and regions are approaching AI regulation. There is no one-size-fits-all solution, as different contexts may require different strategies and priorities, and AI laws and regulations are still a new concept for most countries. Even so, there is a need for global dialogue and collaboration to ensure that AI is developed and used in a responsible, ethical, and beneficial manner for all.

This article was written with the help of ChatGPT and Bing Chat.

About the Author:
Rosemary J Thomas, PhD, is a Senior Technical Researcher at the Version 1 AI Labs.
