Global AI Regulation: Protecting Rights, Leveraging Collaboration

Elisabeth Sylvan, PhD
Berkman Klein Center Collection
Jun 13, 2024


Policy experts from Africa, Europe, Latin America, and North America outlined next steps for global AI regimes and networked capacity building

Photo by NASA on Unsplash

By Lis Sylvan & Niharika Vattikonda

Nearly a year and a half after the introduction of ChatGPT, artificial intelligence remains in the regulatory hot seat. While the EU AI Act put the so-called Brussels Effect into play, more regions across the globe are now weighing risks, rights, economic opportunities, and regional needs. On May 28th, the Global Network of Internet & Society Research Centers (NoC) and the Berkman Klein Center for Internet & Society at Harvard University (BKC) hosted a group of policy experts from Africa, Latin America, the US, and the EU to discuss the state of global AI regulation and outline next steps for collaboration across continents.

Berkman Klein Center discussion video: The State of AI Regulation Across the Globe

Lis Sylvan, Senior Director of Strategy and Programming at BKC, moderated the discussion with Carlos Affonso de Souza (Director of the Institute of Technology and Society of Rio de Janeiro), Mason Kortz (Clinical Instructor at the Cyberlaw Clinic at BKC), Gabriele Mazzini (European Commission, chief architect of the EU AI Act), and Ridwan Oloyede (Certa Foundation, coauthor of their recent “State of AI Regulation in Africa” report), with NoC Executive Director Armando Guio providing behind-the-scenes support. The group delved into how governments are weighing sectoral versus horizontal regulatory approaches; the role of the administrative state and existing data protection and competition regulators; the new models of AI regulation in Rwanda and Brazil; the impact of the EU AI Act across all jurisdictions; and the potential for truly global governance.

Origins and Approaches

De Souza contextualized the current moment as part of a decade-long journey of AI regulation that started with charters and declarations of governing principles from various governments and entities. Over time, those charters and principles were reflected in national AI strategies, which have been in the works for five years and can be seen as the precursor to AI regulation; Brazil’s AI regulatory evolution, for example, closely followed this time frame. De Souza highlighted the impact of the European Union’s General Data Protection Regulation (GDPR) on this evolution; after GDPR took effect, many countries established data protection authorities, which have largely been the main point of contact for early AI governance. As a result of GDPR, he said, “data protection may be an accelerator, may be an entry point for countries in the majority world, because that’s the conversation that we have been having in the last decade, and that’s where resources [and] attention had been moving forward in those countries.” However, he cautioned against using data protection law as the sole basis of AI regulation, because the data protection framework does not necessarily address the full scope of challenges raised by the development of AI.

Mazzini explained that the technical discussions about the EU’s proposed AI legislation date back to 2019. One of the key concerns with a sectoral approach, he said, was the risk of privileging certain sectors over others. The horizontal approach, though, adds complexity, as regulators need to craft rules that work across sectors while avoiding repetition; moreover, the scope of EU legislation is limited by the exclusion of the national security, military, and defense sectors. While the EU AI Act takes an omnibus approach, Mazzini said it did not make sense to regulate AI as a technology in its own right, but rather as a general-purpose tool with a variety of applications.

“What was clear to me since the get-go is that it didn’t make sense to regulate AI as a technology as such, because indeed what we are dealing with is a general purpose technology that has a variety of applications that we don’t even foresee today…” said Mazzini, “…and therefore, from my perspective, the idea to establish rules for the technology as such, regardless of its use, didn’t make any sense… We came up with this approach of establishing rules depending on the specific use to which [the technology] is put, with the greatest burden, from a regulatory point of view, being on the high risk,” a category Mazzini said includes applications linked to health and safety, such as medical devices, automated cars, and drones.

Sectoral and Regional Approaches

In the U.S. and in the African Union, regulatory agencies have found it more effective to apply existing laws — across data protection, competition, consumer protection, employment, and other sectors — to govern AI, often taking a sectoral approach. Oloyede said that data protection authorities and competition authorities have largely driven the initial AI regulatory agenda, as these authorities are best equipped to enforce the consumer protection, data protection, intellectual property, and competition laws that serve as the basis for national AI governance strategies. “We might see some sort of a clearinghouse model [emerge] where not every country in Africa, for example, will try to come up with a specific AI regulation,” Oloyede said.

Oloyede indicated that the sector-based approach has been dominant on the African continent, with countries including Nigeria, Kenya, South Africa, Rwanda, and Egypt beginning to develop roadmaps for AI governance and establish regulatory task forces. This sectoral approach, he said, has allowed regulators to develop specific policies for the deployment of AI in healthcare, for example.

According to Mason Kortz, this sectoral approach is typically favored in the U.S. because its regulatory tradition values subject-matter expertise over technical expertise. The U.S. will likely have subject-matter experts regulate AI in their own domains, Kortz said — for example, the Department of Housing and Urban Development would regulate AI for housing. This approach relies on the country’s strong administrative state and directs specific federal agencies to take on different pieces of AI regulation. Meanwhile, certain state laws have sought to regulate specific use cases of AI in housing and employment contexts.

Kortz also noted that the current U.S. approach confirms that existing rights-based regimes will be applied or extended to harms resulting from the use of AI systems. With a notoriously slow legislature, he said, making only small changes as needed is an advantageous approach, particularly when existing enforcement agencies may already have the power to make those changes. The U.S. common law system is well-suited to this approach, he said, because it gives judges relatively strong power to reinterpret the law in ways that are binding on lower courts, without necessarily having to rewrite the civil code.

“When it comes to some of the more rights-based statutes we have,” Kortz said, “I think, actually, we have a pretty good governance model right there, and we just need some small adjustments around the edges to modernize those statutes and bring them in line, not just with AI, but hopefully, if not future-proof them, at least provide a little more stability for whatever comes next after AI.” However, Kortz allowed that AI is so fundamentally transformative that certain existing laws, such as intellectual property law and copyright doctrine, may not be enough, and that global harmonization of AI laws should be a priority.

Global Collaboration and Capacity

Oloyede indicated that African countries have introduced solutions at the level of the Global Privacy Congress, although these solutions will need to reflect differing national and regional interests. Mazzini noted that generative AI and general-purpose AI raise additional issues, such as fighting misinformation, that will require international collaboration. However, de Souza cautioned that regulators must keep in mind how the laws written today will be applied in the future. In some cases, he noted, new liability regimes for AI are stricter than the surrounding body of law; Costa Rica, for example, has adopted a strict liability approach for high-risk uses of AI.

“If we turn out to have the chapters of liability on our AI laws more severe than what we have in our general law for other situations, if we are all in agreement that, in the future, AI is going to be in everything, the legislators that are designing those laws today, they are designing general laws on liability, because we will have AI in almost all sectors,” de Souza remarked. “So the decisions that we’re making today on liability, they might end up scrapping the provisions that you have on your civil code, consumer protection code, because the AI law will be the law that is more recent, more specific, and that may be the one that will be applying in most cases.”

This international collaboration will require capacity building across the globe, and Mazzini emphasized that the EU AI Act has prompted additional work to support the authorities in the EU that will implement and enforce the regulation. Although the AI Act will impact multiple private sectors, he said, its public enforcement will require both financial and knowledge-based resources. De Souza noted that the Brussels Effect will prompt a need for a global bureaucracy to support compliance with the EU AI Act, and that well-resourced national authorities will be needed to support that implementation. Oloyede, however, said that lessons learned from the GDPR rollout may inform a better approach to implementing the EU AI Act, one grounded in a more nuanced understanding of local contexts. While the EU AI Act will require capacity building to support new governance bodies with funding and resources, he said, it is essential to preserve existing collaborations with data protection and competition authorities and to empower those authorities to address AI in their own domains.

Despite countries’ differing sectoral and horizontal approaches, the global community is working to establish flexible models of AI governance in their respective regions. As Oloyede said, “AI is here today. Tomorrow is going to be a different technology. And we can’t keep legislating for every new technology that we have.” Mazzini, for his part, described a need for international coordination: “when it comes to this new type of AI that is sometimes called ‘generative AI’ or ‘general purpose AI’ that we have specifically regulated in the EU — notably in the last few weeks, in the final stages of the negotiations — I think I would like to see there certainly more international coordination, because there we are dealing with a number of questions that I think are pretty common across jurisdictions.”

Though approaches across the globe differ, a common cross-cutting theme of the work is balance: protecting rights versus supporting innovation, legislating a critical technology while its capacity and impact are still developing, and providing necessary limitations while allowing nimble innovation.

Watch the video of the full discussion with speakers Carlos Affonso de Souza, Mason Kortz, Gabriele Mazzini, and Ridwan Oloyede.

The Network of Internet & Society Research Centers (NoC) is a collaborative initiative among academic institutions with a focus on interdisciplinary research on the development, social impact, policy implications, and legal issues concerning the Internet. The Berkman Klein Center at Harvard University served as NoC Secretariat from 2020–2023 and continues to participate in cross-national, cross-disciplinary conversation, debate, teaching, learning, and engagement.
