AI Governance Primer

Key Players in the Global AI Regulatory Landscape

StartingUpGood
StartingUpGood Magazine
May 21, 2024


The ITU’s AI for Good Global Summit is kicking off with its first-ever AI Governance Day on Wednesday, May 29th. To prepare, we’ve been doing our due diligence in understanding the global AI regulatory landscape.

Stating the obvious: it’s complicated, and the stakes are high. Numerous UN agencies, governments, multilateral bodies, nonprofits, academic institutions, and industry groups, many with competing interests and contrary opinions, all want to weigh in on how to ensure that AI is safe and equitably accessible without stifling innovation or advantaging any one group over another.

To help us better understand the landscape, we developed this primer outlining key players across four groups: the United Nations, other multi-stakeholder organizations, governments, and private industry and academia.

While admittedly not an exhaustive list, we thought this information might also help get our audience up to speed on this important topic. Follow along as we share more insights leading up to our real-time AI for Good conference coverage.


United Nations (UN)

Several UN agencies and related organizations are actively involved in efforts towards global AI governance:

UN Secretary-General’s High-level Advisory Body on AI (HLAB on AI)

This multi-stakeholder body was convened in 2023 to analyze and provide recommendations on the governance of AI. Its interim report, Governing AI for Humanity, released in December 2023, called for a global governance framework and identified seven layers of governance functions. The Advisory Body will publish its final report ahead of the Summit of the Future in September 2024.

The HLAB on AI is housed in the Office of the Secretary-General’s Envoy on Technology, whose portfolio also includes the Roadmap for Digital Cooperation, the Global Digital Compact, Digital Public Infrastructure, and Open Source Programme Offices (OSPOs) for Good.

UN Educational, Scientific and Cultural Organization (UNESCO)

UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence in 2021, providing a framework to guide the development of AI technologies. It also has initiatives like the Global AI Ethics and Governance Observatory.

International Telecommunication Union (ITU)

The ITU, the UN specialized agency for information and communication technology, is developing technical standards related to AI. Since 2017, it has organized the AI for Good Global Summit in partnership with 40 UN sister agencies. The Summit is the leading action-oriented UN platform for promoting AI to support the SDGs.

UN Development Programme (UNDP)

The UNDP is working to ensure equitable access to and benefits of AI for developing countries through initiatives like the Digital Ecosystem for Ethical AI (DEEP).

UN Conference on Trade and Development (UNCTAD)

UNCTAD examines the impact of AI on trade and development, especially for developing countries.

Other Multi-Stakeholder Organizations

Organisation for Economic Co-operation and Development (OECD)

The OECD, an intergovernmental organization of 38 member countries, has developed principles and recommendations for trustworthy AI, focusing on areas such as transparency, accountability, and human oversight.

Global Partnership on AI (GPAI)

GPAI is a multi-stakeholder initiative with 29 member countries and a network of experts from industry, civil society, and academia. It aims to bridge the gap between AI theory and practice by guiding the responsible development and use of AI through projects and real-world applications. GPAI is working on developing governance frameworks, tools, and guidance for trustworthy AI.

World Economic Forum (WEF)

The WEF has established the Global AI Action Alliance, a multi-stakeholder initiative that brings together industry leaders, governments, international organizations, academia, and civil society to accelerate the adoption of inclusive, trusted, and transformative AI. It focuses on developing governance frameworks, best practices, and policy recommendations.

Institute of Electrical and Electronics Engineers (IEEE)

The IEEE is a leading developer of technical standards for AI systems, including the IEEE 7000 series of ethics standards that grew out of its Ethically Aligned Design initiative. It takes a multi-stakeholder approach, involving experts from industry, academia, and government in developing these standards.

International Organization for Standardization (ISO)/ International Electrotechnical Commission (IEC)

ISO and IEC are jointly developing international standards for AI governance, risk management, and trustworthiness through subcommittee 42 of their joint technical committee (ISO/IEC JTC 1/SC 42). The committee has a multi-stakeholder structure, with participation from national standards bodies, industry, and other stakeholders.

Council of Europe (CoE)

The CoE has developed a Framework Convention on AI grounded in human rights, democracy, and the rule of law, adopted by its Committee of Ministers in May 2024. While an intergovernmental organization, the CoE allowed observer states and multi-stakeholder participation in the drafting process.

G7

The G7 has initiated the “Hiroshima AI Process” to develop an International Code of Conduct for organizations developing advanced AI systems.

Governments

While numerous national and regional governments are developing AI regulations, most observers look to the EU, US, and China to lead on standard-setting. This recent Devex article (free sign-in required) provides a good overview of how competing government interests are affecting progress toward AI governance standards.

European Union (EU)

The EU’s AI Act, a comprehensive regulatory framework for AI systems, has cleared the final stages of the legislative process and is expected to be formally adopted and enter into force in May or June 2024. Most provisions will become applicable two years after entry into force, allowing a transition period for compliance; the bans on prohibited AI practices, however, will apply after just six months.

The AI Act classifies systems into four risk tiers (unacceptable, high, limited, and minimal risk) and sets corresponding requirements and obligations. High-risk AI systems face stringent requirements before market entry, including risk management, use of high-quality data, human oversight, and documentation.

United States (US)

The US has not yet passed comprehensive federal AI legislation, taking instead a sectoral approach to governance, with individual agencies regulating AI in specific domains like transportation (NHTSA) and healthcare (FDA). This approach lets agencies leverage their existing authorities, but there are also calls for overarching federal AI legislation to provide a cohesive national strategy and framework.

The Biden Administration’s Executive Order on AI, signed on October 30, 2023, aims to promote safe, secure, and trustworthy AI, protect rights and freedoms, and foster a global AI ecosystem. The lengthy EO contains over 100 directives and initiatives covering various aspects of AI development, deployment, and governance.

China

China does not have a single overarching AI law but rather a series of regulations targeting specific AI applications and risks. The Cyberspace Administration of China (CAC) is the central regulator, enforcing data privacy, cybersecurity, and ethical AI principles. Key regulations cover AI-driven recommendation algorithms (user controls, anti-discrimination) and deep synthesis/generative AI tools (registration, labeling).

China has also introduced regulations on the use of facial recognition technology by private entities. Its New Generation AI Development Plan outlines strategic goals for making China a global AI leader by 2030, backed by industry promotion and growth targets.

Private Industry and Academia

Many companies are publicly calling for clear AI guidance, but most commitments to date are voluntary. For example, in July 2023 the White House secured voluntary commitments from major tech companies, including Microsoft, Google, Amazon, and OpenAI, to implement testing procedures, documentation standards, and human oversight for AI systems before release.

Companies also participate in AI conferences and multi-stakeholder groups such as the Partnership on AI, established in 2016 by ‘big tech’ companies, civil society organizations, and academic stakeholders to develop guidance and inform public policy.

Like private industry, many global universities (Stanford and Oxford, to name two) regularly participate in multi-stakeholder groups and convenings. They are good sources of unbiased knowledge and research, but to date their role has been limited to making observations and recommendations for global AI regulation.

StartingUpGood supports fresh entrepreneurial approaches to social impact. Follow us on social media.

Check out SDGCounting for the latest news on tracking the progress of the Sustainable Development Goals. #SDGs #GlobalGoals

Disclaimer: Generative AI tools such as OpenAI’s GPT and Perplexity were used in the creation of this article to assist with research, summarization, and proofreading.
