Regulating AI — On the Importance of Transatlantic Cooperation

By Léa C. Glasmeyer, Core Writers’ Group

Artificial intelligence (AI) has become not only one of the most consequential technological developments of our time, likely to fundamentally alter our societies, but also a strategic asset in foreign affairs and geopolitics. The regulation of the digital sphere, of technologies, and of personal data has been a major concern over the past decades, notably between the EU and the US. While there is competition for market share, particularly with China, cooperation on regulation has become increasingly necessary: regulation that provides a framework for confronting AI's potential to cause harm without inhibiting innovation. While both China and the US focus on new opportunities and seek to build an AI ecosystem, the EU has rather been concerned with the negative repercussions and threats generated by AI. In response, the European Commission has proposed the AI Act, which comes with requirements and prohibitions rather than a recognition of AI's potential. This may increase the EU's technological dependency by further shifting the value creation of AI development to the US or China, thereby risking turning the Old Continent into an industrial museum.

The EU AI Act

The EU Commission presented an initial draft of the AI Act in April 2021. With it, the EU set out to create a uniform legal framework for trustworthy AI systems, as well as uniform rules for the development, marketing, and use of AI within the EU, in accordance with its values and fundamental rights.

Despite its original aim, the AI Act remains highly controversial among MEPs. At the centre of the discussion are topics such as the scope of application and the use of biometric recognition systems and their potential misuse. In light of these disagreements, the Council of the EU adopted a compromise proposal for promoting safe AI that respects human rights. The proposal exempts authorities in third countries from the AI Act if they use AI in the context of international or judicial cooperation agreements and an adequacy decision by the EU Commission under the General Data Protection Regulation (GDPR) is available. There will certainly be exemptions for military use, and possibly also for research and development. The Council has also called for limiting the Act's scope to machine learning.

The Commission's proposal, while necessary, has been widely criticized by policymakers, notably the Council, which disapproves of a vague definition of AI that hampers a clear delineation of high-risk uses. On 3 March 2023, European Parliament representatives therefore decided to adopt the definition used by the Organisation for Economic Co-operation and Development (OECD), alongside compromises on the scope, prohibited practices, and high-risk categories of AI applications. Capitalizing on the so-called “Brussels effect” — the de facto adoption of EU legal norms, regulatory measures, and standards outside the European single market — the EU hopes that other countries will implement standards similar to those in its AI regulatory framework.

At the same time, Europe's hesitancy to invest in AI over the last decade has been highly dangerous. Private investors have put about 150 billion euros into AI development in the US and around 62 billion euros in China, while Germany and France together have invested only about 8.3 billion euros, and the situation is worse still in the rest of the EU. The European Parliament hopes to adopt a position on the draft AI law by May 2023, while negotiations with the Commission are expected to be concluded by the end of the year. However, the entire process may take another 18 months before the law actually enters into force, with the regulations taking effect in mid-2025 at the earliest.

The AI Act will follow a risk-based regulatory approach, categorizing AI systems into four levels of risk for the public:

  • Minimal-risk AI systems: encompass the majority of AI systems currently used in the EU; include applications such as spam filters or AI-enabled video games.
  • Limited-risk AI systems: subject to transparency requirements and sectoral regulations; include AI chatbots and AI-powered inventory management.
  • High-risk AI systems: subject to strict requirements for risk management, data quality, and technical documentation; include systems evaluating consumer creditworthiness, assisting with recruitment, or using biometric identification.
  • Unacceptable-risk AI systems: manipulative or exploitative systems representing a threat to people's safety, livelihoods, and rights; include biometric systems for remote identification, social scoring by authorities, and manipulative systems using subliminal techniques on those in need of protection.

Administrative Arrangement on Artificial Intelligence for the Public Good

On January 27, 2023, the US Department of State and the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) of the European Commission signed an “Administrative Arrangement on Artificial Intelligence for the Public Good” to address global challenges in the fields of climate change, natural disasters, healthcare, energy, and agriculture by strengthening transatlantic cooperation on AI research. Building upon the Declaration for the Future of the Internet and in line with the Declaration on Digital Rights and Principles, the Administrative Arrangement reiterated the necessity of guaranteeing a digital sphere that reinforces democratic principles and fundamental rights. An Artificial Intelligence Risk Management Framework was also published, providing guidelines on how to govern the responsible development and use of AI systems. According to the Agreement, emerging digital technologies built on cooperation bear enormous potential for solving global challenges and promoting “connectivity, democracy, peace, the rule of law and sustainable development”.

The AI working group of the Transatlantic Trade and Technology Council (TTC), a permanent platform established in 2021 for transatlantic cooperation in various priority areas, is fundamental for shaping future developments, especially in the digital market. During its last high-level meeting in December 2022, the TTC issued a joint statement and a roadmap for achieving a common approach to critical aspects of AI, including risk management methods and metrics for measuring trustworthiness; this roadmap represents a first step in that direction. The EU and the US agreed to collaborate on the development of societal applications of AI, notably in the field of climate change, through common research into new technologies. At the same time, Washington is also working on setting its own AI standards, as there is for now no legal framework for the transatlantic sharing of personal data. Under the new administrative arrangement, the EU and the US will share results and resources with further international partners.

The European Data Protection Board (EDPB) published its opinion on this new transatlantic agreement on 28 February, welcoming the new safeguards intended to ensure that US law offers a level of privacy protection equivalent to the European one, while acknowledging remaining concerns. This assessment is part of a process that began many years ago with the so-called Schrems rulings. A first authorization for sharing personal data across the Atlantic came in 2000 with the adoption of the Safe Harbor Agreement, but the Schrems I case was soon brought before the CJEU on the claim that, in the context of the NSA's PRISM program revealed by Edward Snowden, US companies could not ensure adequate protection of personal data. The Court consequently ruled that, under the EU Data Protection Directive, transferring personal data between the EU and the US was not legal. This decision invalidated the Safe Harbor Agreement and had serious repercussions for the operations of several companies.

In 2020, the Court of Justice of the EU (CJEU) then invalidated the Privacy Shield — a transatlantic data protection compliance mechanism concluded in 2016 — in the Schrems II ruling, because it considered this legal framework unable to protect against “interference with the fundamental rights of the persons whose data was transferred”. Personal data is protected as a fundamental right, yet it also has an economic dimension and needs to move between countries; the invalidation of the Privacy Shield has therefore also had important impacts on the development of technologies such as AI. Policy cooperation is all the more needed.

Steps for transatlantic policy cooperation

Altogether, regulatory plans already exist in the form of the EU's AI Act and the US government's Blueprint for an AI Bill of Rights. It is now a matter of aligning them so that AI solutions become transnationally interoperable. The upcoming G7 summit in Japan offers a concrete opportunity to coordinate the regulatory initiatives of the most important industrialized countries. Transatlantic cooperation should focus on:

  • Building powerful standards on data protection and AI by collaborating on policy development and sharing best practices.
  • Harmonizing European and US data protection regulations to ensure their compatibility.
  • Aligning privacy laws, such as the EU’s GDPR and the US’s California Consumer Privacy Act (CCPA), to create a more coherent and uniform framework, to facilitate cross-border data flows and to create a level playing field for businesses operating in both regions.
  • Cooperating on the development of common AI regulations that ensure transparency, accountability, and respect for human rights, notably through the establishment of joint committees or working groups to identify areas where new standards are needed.
  • Fostering international cooperation to encourage other countries to adopt similar regulations and standards.
  • Encouraging the industry to self-regulate by working with industry associations for the development of voluntary codes of conduct and best practices, as well as through the promotion of certification schemes to incentivize compliance with agreed-upon standards.

Overall, the EU and the US have an opportunity to lead the development of global standards on data protection and AI. By working together, they can create a framework that promotes innovation, protects privacy, and ensures that AI is used in a responsible and ethical manner. However, building powerful standards on data protection and AI will require a sustained and coordinated effort between the EU and the US, as well as a continuous commitment to promoting a global culture of privacy and data protection.

Addressing corporations’ resistance

Because resistance from large corporations is likely to emerge, it is essential to engage with stakeholders from the private sector, civil society, and academia. The EU and the US can create platforms for dialogue and collaboration that allow stakeholders to provide input and feedback on data protection and AI policies. They should also work with other countries and international organizations to build global consensus and to prevent companies from evading regulation by relocating to jurisdictions with weaker protections. Altogether, strong enforcement mechanisms are needed as well as an ethical and responsible promotion of innovation in AI development.

At the same time, web giants are investing heavily in AI, notably in lucrative short-term innovation, leaving research and development open to start-ups, which are now building their own AI systems. The international competition that has emerged makes AI a genuinely geopolitical issue. Large corporations may resist regulations that limit their ability to use AI in profitable ways. For example, regulations that restrict the collection or use of personal data may limit a corporation's ability to target consumers with advertising, potentially reducing its revenue.

AI technology is also still relatively new and rapidly evolving, generating resistance as regulations may limit corporations’ ability to experiment with and develop new AI applications, thus stifling innovation. This is even more true as regulations often come with either compliance costs or penalties for non-compliance. Finally, large corporations may resist regulations that restrict their ability to use AI in ways that give them a competitive advantage over smaller competitors. Regulations that require transparency in AI decision-making could indeed limit a corporation’s ability to keep its algorithms secret, thus potentially reducing its competitive edge.

Moreover, regulations are often complex and open to interpretation, making compliance difficult while leaving little room for innovation or customization. Companies may also resist regulations out of fear of legal liability, or see them as regulatory overreach and thus hesitate to cooperate with government agencies or share sensitive data. Regulations should therefore prioritize consumer protection, a goal shared by both government and the public. Emphasizing their benefits to consumers makes it easier to gain public support, thereby putting pressure on corporations. With clear and concise guidelines, corporations can better understand their obligations and meet them accordingly. Governments can also promote the regulations by presenting data protection and privacy as a selling point, and corporations can use compliance as a competitive advantage to gain consumer trust. Altogether, it is important to emphasize the long-term benefits of these regulations: increased innovation, improved safety, and a better overall AI landscape.

Léa C. Glasmeyer is part of the European Horizons Core Writers' team. She holds a Master of Public Policy from the Hertie School in Berlin and the Munk School of Global Affairs and Public Policy at the University of Toronto, as well as a Franco-German BA from Sciences Po Aix and the University of Freiburg. She is part of Netzwerk F, an intersectional network for the promotion of a feminist foreign policy, and a member of the Diverse Young Leaders initiative, where she aims at bringing young people with migration biographies closer to politics. Passionate about theatre and literature, Léa is also a fervent European citizen and particularly interested in democracy and the rule of law.

--


The European Horizons Editorial Board
Transatlantic Perspectives

European Horizons empowers youth to foster a stronger transatlantic bond and a more united Europe.