AI global regulation: What to watch now

Jim Meszaros
Published in Issues Decoded · Aug 6, 2024

Artificial intelligence (AI) raises profound legal and regulatory questions. While AI promises huge benefits for society, it also poses major risks, and the challenge for legislatures and regulators is striking the right balance between innovation and risk. Writing rules that can adapt to a technology likely to change rapidly is difficult. Regulators are currently focused on issues such as AI model safety, bias, transparency, data privacy, security, trust, copyright protections, content regulation, discrimination and economic impacts (job loss, workforce adjustments, competition). For companies, a key concern is the impact of regulation on innovation and productivity.

The AI regulatory debate is playing out globally. Here is a summary of current regulatory initiatives in key global markets.

UNITED STATES

Biden administration

There is no comprehensive federal legislation or regulation in the U.S. that governs the development of AI or specifically prohibits or restricts its use, though some existing federal laws touch on AI in limited ways. Federal agencies are currently implementing President Biden’s October 2023 Executive Order (EO), which guides the government’s use of AI and mandates some private-sector requirements. The most significant agencies to watch include the Federal Communications Commission (FCC) on political advertising; the Federal Trade Commission (FTC) on AI competition and data privacy; the National Institute of Standards and Technology (NIST) on AI safety and security; the Food and Drug Administration (FDA) on drug development; the Department of Homeland Security (DHS) and Department of Justice (DOJ) on the use of AI in law enforcement and homeland security; and the U.S. Copyright Office on intellectual property. Following the Supreme Court’s 2024 decision overturning the Chevron doctrine, federal regulations mandated without clear congressional authority could face legal challenges. The U.S. government will likely boost spending on AI research and development, including in defense and intelligence, using its buying power to help shape the market.

Congress

Senate legislation under consideration would provide $32 billion in funding to strengthen national security, address labor displacement, fund research and innovation, promote election transparency and ensure consumer protections. House legislation is also being considered to protect consumers from deceptive AI and manage federal AI governance.

States

Legislative initiatives have passed or are being considered in California, New York, Colorado and several other states.

2024 election

Vice President Harris led the U.S. delegation to the 2023 Global Summit on AI Safety in London. Absent action in Congress, a Harris administration would likely be limited to existing executive authorities and maintain many of the policies of the current administration. Former President Trump says if elected he will rescind and replace Biden’s EO.

Timelines

Biden’s EO sets a series of agency deadlines running through December 1, 2024. It is uncertain whether any bipartisan AI legislation can clear both chambers by the end of 2024.

EUROPEAN UNION

EU AI Act

The recently approved Artificial Intelligence Act (AI Act or the Act) aims to create a secure and trustworthy environment for the development and use of AI in the European Union. The Act, which the European Council approved on May 21, 2024, is the first of its kind globally and may set global standards for AI regulation, much as the General Data Protection Regulation (GDPR) did for data privacy.

The goals of the Act include: promoting public trust in AI; providing strong protections for public health, safety and fundamental rights (including democracy, the rule of law and the environment); improving the functioning of the internal market; and supporting innovation and AI-related investment.

The Act classifies AI applications into four risk tiers: “unacceptable” risk uses are prohibited; “high” risk uses carry obligations and requirements; “limited” risk uses face transparency measures; and “low” risk uses are largely exempt.

The EU AI Act applies to foreign companies doing business in the EU, with Member States responsible for governance and enforcement. Non-compliance could result in substantial fines that vary based on the nature of the violation and the size of the organization. Infractions involving prohibited AI systems may incur fines of up to €35 million ($38.1 million) or 7% of global turnover, whichever is higher.

Timelines

  • August 1, 2024: Entry into force.
  • February 2, 2025: Prohibitions on unacceptable-risk practices take effect.
  • August 2, 2025: Obligations for general-purpose AI models begin to apply, along with governance provisions (EU and Member State oversight bodies), rules for models posing systemic risks, and penalty provisions.
  • August 2, 2026: Most obligations for high-risk AI systems take effect.

REST OF THE WORLD

China: Enacted regulations in July 2023 and May 2024 to govern AI content, reflecting state-centric values meant to promote social harmony and stability. Plans to develop more than 50 new national and industrial standards by 2026. AI companies must undergo government reviews to confirm their large language models reflect “core socialist values.” China’s regulatory landscape raises questions about the future of innovation there, as the rules may force companies to prioritize government compliance over creativity and technological progress.

Japan: Created an AI Strategy Council in May 2024 to develop a legal framework for AI.

South Korea: Created a Presidential AI Committee to develop the government’s AI approach in July 2024. Plans to establish an AI Safety Institute in late 2024.

Singapore: Published the Model AI Governance Framework for Generative AI in May 2024.

India: The Ministry of Electronics and IT has begun drafting a standalone law on AI, with a focus on content moderation.

United Kingdom: The new Labour government plans to introduce AI regulation in targeted areas, including binding regulations on the “handful of companies developing the most powerful AI models” and prohibiting the creation of sexually explicit deepfakes.

France: Will host the second global AI Summit in February 2025 and propose a set of global governance standards.

Canada: The proposed Artificial Intelligence & Data Act (AIDA), introduced in 2022, aligns with the EU’s AI Act by taking a risk-based approach. Implementation may depend on the outcome of Canada’s 2025 election.

Brazil: Several AI bills are being considered in Congress, and the government plans to launch a national plan for AI development emphasizing transparency, accountability and inclusivity. President Lula also plans to present a global governance initiative at the UN General Assembly this fall.

OECD: An Organisation for Economic Co-operation and Development (OECD) initiative, launched in May 2024 and supported by 49 countries and regions (primarily OECD members), aims to advance cooperation on global access to safe, secure and trustworthy generative AI.

G7: At their May summit in Italy, G7 leaders affirmed the importance of creating international partnerships to ensure all people can access the benefits of AI, recognizing the need to make sure it enables increased productivity, empowers workers and creates inclusiveness and equal opportunities.

UNITED NATIONS: In May, the UN introduced a draft resolution on AI encouraging Member States to implement national regulatory and governance approaches as steps toward a global consensus on safe, secure and trustworthy AI systems. The UN cannot pass laws or regulations, but the UN Charter gives the General Assembly the power to initiate studies and make recommendations to promote international law. The General Assembly may in the future vote on AI resolutions, which express the views of Member States but are not legally binding.

###

Want to work with us? Reach out to Ellen DeMunter at EDeMunter@powelltate.com

About Weber Shandwick Public Affairs

Weber Shandwick is a global in-culture communications agency built to make brave ideas connect with people. The agency is led by world-class strategic and creative thinkers and activators and has won some of the most prestigious awards in the industry. Weber Shandwick was named to Ad Age’s A-List in 2020 and Best Places to Work in 2019. Weber Shandwick was also awarded PR Agency of the Year by Campaign US in 2021, honored as PRovoke’s Global Agency of the Decade in 2020 and PRWeek’s Global Agency of the Year in 2015, 2016, 2017 and 2018. The firm has earned more than 135 Lions at the Cannes Lions International Festival of Creativity, including 36 Lions in 2021 to become the most-awarded PR agency. Weber Shandwick also received Honorable Mention (and the only PR agency) on the Gartner Magic Quadrant for Global Marketing Agencies in 2021.

Weber Shandwick is part of the Interpublic Group (NYSE: IPG) and is the anchor agency within The Weber Shandwick Collective — a communications and consulting network built for the convergence of society, media, policy and technology.

For more information, visit: https://www.webershandwick.com/expertise/public-affairs/

Powell Tate is the Public Affairs Unit of the Weber Shandwick Collective. For more information, visit: www.powelltate.com


Jim Meszaros

Washington DC | International consultant to governments, multinational corporations and foundations on global economic, trade, development and climate issues