AI Politics Gets Real — But Who’ll Have a Vote?

The Machine Race by Suzy Madigan
4 min read · May 25, 2023


If a week is a long time in politics, right now it’s a lifetime in AI. In early May, I switched off my phone to recharge its batteries, and my own, while puffing up some hills in the Moroccan Atlas Mountains.

In the last TMR article, Fighting Over AI: Lessons From Ukraine, I considered the AI risk and safety landscape through the eyes of an aid worker, viewing the humanitarian protection of civilians the way aid workers approach it in Ukraine and elsewhere.

A pause from AI. Atlas Mountains, Morocco. Photo credit: Suzy Madigan

At that time, prominent voices had recently urged a six-month pause on the development of powerful AI systems. Unsurprisingly, most of the signatories worked outside the big tech companies developing artificial intelligence. Undeterred by that letter, the UK Government announced its “AI superpower” ambitions in late March: firmly “pro-innovation”, with no new regulation planned. In April, while the EU beavered away on its “risk-based” EU AI Act, the US was leaning towards companies’ self-regulation.

After I’d hobbled down the mountain, mentally refreshed and physically exhausted, I switched my phone back on. It fairly blew up with the AI news it was struggling to contain.

On May 11, the EU had moved a step closer to the first international rules on AI when MEPs endorsed a draft negotiating mandate with amendments to the Commission’s proposed Artificial Intelligence Act. The EU has historically spearheaded tech regulation: its General Data Protection Regulation (GDPR), which came into force in 2018, sparked what has been called the “Brussels effect”, whereby EU rules quickly become the de facto global standard (even where they are not formally adopted elsewhere). Interestingly, Italy’s data protection regulator banned, then reinstated, ChatGPT within the space of a month in April.

In the US, shortly after the EU’s move on artificial intelligence, the “Godfather of AI”, Geoffrey Hinton, quit Google to speak openly about AI risk. On May 16, Professor Gary Marcus, Christina Montgomery of IBM, and Sam Altman of OpenAI, the organisation that created and released ChatGPT to the public, appeared before a US Senate Subcommittee. The Senators had presumably expected a combative session like the 2018 hearing with Facebook’s Mark Zuckerberg, who infamously coined the phrase “move fast and break things”. To their evident surprise, the two big tech companies acknowledged AI’s potential risks and urged the Subcommittee to regulate them.

On May 19, G7 leaders met in Hiroshima, Japan, and, after laying flowers at the atomic bomb memorial site, released a statement on the need for “guardrails” around AI development “reflecting our democratic values”. Ahead of the summit, G7 technology ministers had called for “risk-based” regulation. From his plane en route to the G7, a newly cautious Rishi Sunak even championed Britain’s leadership in “technological regulation”. Indeed, as I wrote this on 24 May, the British Prime Minister was meeting the CEOs of leading AI labs at Number 10 Downing Street, including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei, to discuss “the right approach to governance”.

An ever changing landscape. Erg Chebbi, Morocco. Photo credit: Suzy Madigan

While there may be broad-based support among G7 nations for coalescing around regulation, the question remains how, and what, to regulate within an emerging technology whose future applications are innumerable and, without a crystal ball, unknown.

How, too, to achieve internationally adopted and monitored regulation of a technology that crosses borders and for which, unlike the building of nuclear weapons (the analogy commonly made), it is impossible to know who is creating what behind closed doors? It’s worth noting that China, which is of course outside the G7, published its own draft measures on generative AI provision back in April.

Harsh terrain. Merzouga, Morocco. Photo credit: Suzy Madigan

Regular readers of The Machine Race will know I’m both optimistic about the huge potential societal benefits of artificial intelligence and concerned that consideration of its nuanced, human, social impact will be inadequate. Why? Because doing that in a genuinely inclusive way takes time, effort and actively seeking out diverse voices, particularly marginalised ones. That’s not happening effectively, and with the technological race only accelerating, people get left behind. From the US to the UK to the EU, those taking centre stage in debates on AI come from rich nations, the ones producing technologies that increasingly affect everyone on the planet.

If we’re to avoid deepening global inequalities further, and to ensure the benefits are shared equally, it’s time to widen public engagement and education on AI. Specifically, it’s time to elevate voices from the global south to speak, and be heard, on the implications for communities in those countries: global south universities, tech-focused citizen groups, national NGOs, and ordinary citizens. If not, we risk repeating the dynamics of climate change, where rich nations benefit from industry at the expense of poorer ones. That is one reason why international NGOs should be amplifying their national partners’ views on the impacts of AI right now.

If you enjoyed this, hit ‘Follow’ for new article alerts, plus the email button to get The Machine Race into your inbox on release. Share your comments, corrections and suggestions here, on LinkedIn, Instagram, or on Twitter @TheMachineRace. See ‘About’ page for author’s biography.


Human rights specialist | Aid worker | Founder of @TheMachineRace | Accelerating human conversations about AI & society