The Evolving AI Policy Landscape: Key Developments for Business Leaders
The 2024 AI Index Report from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides a comprehensive overview of the AI landscape. In a series of articles, we highlight key findings of the report, focusing on trends and insights that are particularly relevant for business leaders.
In this article, we’ll explore the rapidly evolving AI policy landscape, with a special focus on the significant policy events of 2023 and the state of AI regulation in the United States and European Union. As AI technologies continue to advance and permeate nearly every sector of the economy, it is crucial for business leaders to stay informed about the policy developments shaping the future of AI governance.
Key AI Policy Developments in 2023
The year 2023 witnessed a flurry of AI policy activity across the globe, reflecting policymakers’ growing recognition of the need to regulate AI while harnessing its transformative potential. Some of the most notable policy events included:
- U.S. Executive Order on AI: In October 2023, President Biden issued an executive order establishing new benchmarks for AI safety, security, privacy protection, and the advancement of equity and civil rights. The order mandated the creation of guidelines and best practices to support the development and deployment of secure, reliable, and ethical AI.
- EU AI Act: In December 2023, European lawmakers reached a provisional agreement on the AI Act, a landmark piece of legislation that establishes a risk-based regulatory framework for AI. The act prohibits AI systems that pose unacceptable risks, imposes obligations on high-risk systems, and subjects generative AI to transparency requirements.
- China’s AI Regulations: China introduced regulations targeting “deep synthesis” technology to address security risks arising from the creation of realistic virtual entities and multimodal media. The Cyberspace Administration of China also finalized interim measures governing generative AI services, adopting a more targeted regulatory approach.
- U.K. AI Safety Initiatives: The U.K. hosted the AI Safety Summit and announced the establishment of the world’s first government-supported AI Safety Institute. These initiatives aim to address AI risks, promote global cooperation, and position the U.K. as a leader in AI safety research.
The State of AI Regulation in the U.S. and EU
Both the United States and European Union have seen a significant increase in AI-related regulations in recent years. In the U.S., the number of AI regulations rose from just one in 2016 to 25 in 2023, with a 56.3% increase in the last year alone. Similarly, the EU passed 32 AI-related regulations in 2023, up from 22 in 2022.
In the U.S., AI regulations are increasingly being issued by a broader array of regulatory agencies. In 2023, 21 agencies issued AI regulations, compared to 17 in 2022. The agencies leading the charge include the Executive Office of the President, the Department of Commerce, the Department of Health and Human Services, and the Bureau of Industry and Security. Notably, there has been a shift toward more restrictive AI regulations in the U.S., with 10 restrictive regulations in 2023 compared to just three expansive ones.
In the EU, the Council of the European Union and the European Parliament have been the most active in issuing AI regulations. Unlike the U.S., the EU has seen a trend toward more expansive AI regulations, with 12 expansive regulations in 2023 compared to eight restrictive ones. The most common subject matters for EU AI regulations in 2023 were science, technology, and communications, followed by government operations and politics.
For business leaders, the increasing volume and complexity of AI regulations highlight the need for proactive engagement with policymakers and regulatory bodies. Businesses must closely monitor the regulatory landscape, provide input on proposed regulations, and ensure that their AI systems and practices align with emerging standards and guidelines. By staying ahead of the regulatory curve, businesses can not only mitigate compliance risks but also position themselves as leaders in responsible AI adoption.