Strong Neighbourhoods Start with Good Neighbours: Comparing Transatlantic Regulation on AI

By Elena Sofia Massacesi

Source: Ars Electronica, Flickr

On 31 March 2023, Italy banned ChatGPT. The country joined China on what many expected to become a growing list of states prohibiting access to the generative AI platform. By the end of April, however, the Italian government had lifted the ban. Today, businesses increasingly integrate ChatGPT and other forms of artificial intelligence (AI) into their operations, enjoying the increased efficiency and returns. At the same time, the new technology brings transatlantic regulators as much worry as excitement. The European Union, United States, and United Kingdom must regulate the AI industry while keeping a close eye on one another, learning to balance citizen safety with keeping their markets attractive to business.

The EU AI Act

The European Commission started discussing AI regulation in 2021, and the policy is now in the “trilogue” stage: having passed through the EU Council and Parliament, the three bodies are ironing out the details so the policy can be set into law. The policy takes a risk-based approach, splitting AI technologies into four categories based on their potential for human harm. As a result of the Act, AI developers will also become liable for how their technology is used in other systems, which puts further pressure on companies to be extremely transparent in both their data gathering and processing.

Similar to its regulation of the pharmaceutical industry, the EU will ask developers to conduct risk assessments prior to releasing their products so that they can be approved by EU regulators. But Věra Jourová, the European Commission’s Vice President for Values and Transparency, emphasised that the policy should not be perceived as overly inclined to classify technologies as high risk, for fear of the threat to innovation. Instead, Jourová proposed a “dynamic process” whereby technologies that turn out to be used in unexpectedly risky ways after release can be moved into the more appropriate ‘high risk’ category — a reactive style, rather than a catch-all, better-safe-than-sorry approach. Jourová pitched this European model to the G7 at a conference on internet governance in Kyoto this week; however, it has not yet been adopted, as member alignment on AI rules seems far from ready.

Nonetheless, this flexible approach does not seem to be enough to win businesses over; a reality demonstrated by a letter signed by over 150 leading business executives from a range of industries warning that the AI Act could “jeopardise European competitiveness”. The signatories argue that the EU rules raise the barrier to market access, excluding firms that cannot afford the increased compliance costs. An American trade body representing AI developers raised further concerns that the speed of technological development would quickly render the EU policy outdated, demonstrating that the EU’s regulatory efforts continue to face substantial business opposition. The EU must therefore also factor in businesses’ economic and innovation concerns to keep them in its market, all without sacrificing its responsibilities towards citizens.

Businesses’ Delight

With business speculation surrounding the EU’s approach, AI companies may find the United States and the United Kingdom a friendlier playing field. In July 2023, some of the largest American technology firms, including OpenAI (developer of ChatGPT) and Microsoft, signed a set of voluntary commitments focused on responsible AI development and deployment. The companies promised to conduct extensive tests prior to releasing AI technology to the public, to increase transparency, and to help consumers identify AI use. The US is prioritising educating lawmakers on the technology before starting the policymaking process, arguing that the voluntary approach provides a better, faster solution created by experts in the field. Unsurprisingly, businesses favour the US approach — it gives them the freedom to innovate and release technologies without passing regulatory checks, drastically shortening their time to market at no compliance cost.

In contrast to the EU, the UK aims to regulate by sector rather than by targeting the software technology itself, using existing regulators to implement measures in their respective areas. This approach is expected to lead to faster implementation of AI regulation, but the decentralised policy-making process also requires sector-specific implementation guidance, which may lead to an asynchronous or delayed roll-out. Britain also sees AI as an opportunity to regain some of its lost soft power. In early November, the UK will host a global AI Summit at Bletchley Park, where Alan Turing and other intelligence agents decoded messages during the Second World War. By recalling its time as a leader in the global order and convening world leaders at the summit, the UK is attempting to place itself at the heart of the global response. Whether these efforts will be enough to also present it as a leader in regulation and innovation, however, remains to be seen.

Transatlantic Unity

The United States is traditionally more business-friendly than the technocratic European Union. In early October, Bloomberg News obtained documents on the US State Department’s reaction to the EU’s AI Act, in which the government largely echoed business concerns over compliance costs for smaller firms, threats to R&D efforts, and potential ‘brain drain’. Nonetheless, American criticism is ultimately aimed at aligning the two blocs around common values. Though they diverge in approach, the Biden administration and the EU recognise that they must collaborate closely on AI regulation to counter Chinese use of the technology, citing the increased hacking potential enabled by AI — that is, an increased ‘weaponisation’ of data. The EU and the UK only have power over companies that wish to participate in their markets, so alignment with the US is the only leverage they have against Chinese market power.

Don’t Repeat the Social Media Mistake

Whichever route American and European policymakers choose, they should learn from past mistakes in digital regulation. US social media regulation mostly falls at the state level, the EU Digital Services Act is only now beginning to be implemented, and the UK Online Safety Bill has been a work in progress for the past six years. Social media developers are slow to address problems on their apps, and often only do so after legal probing. Tech companies initially saw social media as a powerful force for good, hoping it would spread democratic values in the wake of the Arab Spring. However, its direct impact on the erosion of democratic values — think of Facebook and the debate on free speech — shows that we should be cautious in how we approach new technology, never losing sight of the potential for harm.

Europeans and citizens living in countries often influenced by the Brussels effect may be reassured by the European AI Act. But it is important to remember that it is not the only piece of regulation in the works. China’s own AI legislation does not only extend its notorious censorship laws to the new technology but also aims to beat the EU and US in the race towards greater military and data ‘weaponisation’ capabilities. Business decisions may divide the West but the transatlantic powers must at least align on their cybersecurity agenda to defend their technological edge. Meeting in Bletchley Park may be a good start.

Elena Sofia Massacesi is a student of Politics and International Relations at UCL. Her primary research interests include climate policy, the intersection of politics and business and international political economy.


The European Horizons Editorial Board
Transatlantic Perspectives

European Horizons empowers youth to foster a stronger transatlantic bond and a more united Europe.