Fighting over AI: Lessons From Ukraine

The Machine Race by Suzy Madigan
7 min read · Apr 10, 2023


I was sitting in an air raid shelter in Kyiv during a missile alert when I saw the explosive media reaction to an open letter warning of AI’s potential risk to humanity.

Among the Ukrainians I was sheltering with, I’m pretty sure neither the letter by prominent figures, nor the frenzied response, was top of their news feed. Ukrainians’ familiarity with existential risk has been rather more visceral since February 2022 (or 2014 in the East).

Street art in Kyiv, Ukraine, by Banksy. A tank trap sits against concrete blocks; a Banksy stencil gives the impression of a little girl and a little boy using it as a see-saw. All photos in The Machine Race series are in black and white, representing binary computer code and the binary views that often fill the AI debate.
Banksy art by tank traps in Kyiv, Ukraine. Photo credit: Suzy Madigan

I was reminded of driving through the damaged streets of Beirut with a maskless Lebanese colleague in August 2020, a few days after the catastrophic port explosion. I’d just deployed to support the humanitarian response and I was asking him about Covid. “Covid?” he replied. “Yeah, I’d say that’s about fifteenth on our list of worries right now.”

The Future of Life Institute (FLI) letter, whose signatories include Max Tegmark, Yoshua Bengio, Elon Musk, Professor Stuart Russell, author Yuval Noah Harari, and Apple co-founder Steve Wozniak, sounds an alarm about the “out-of-control race” between AI labs. Competition is driving companies “to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

The letter’s authors propose a pause of at least six months, not on the development of all AI systems, but on those “unpredictable black box models” more powerful than GPT-4 (the successor to ChatGPT). They argue that the consequences of unconstrained deployment of increasingly powerful systems are unknown but likely to be profound. So, they suggest, let’s take a breath to analyse the implications and navigate workable governance.

Fighting talk: Either ‘utopia’ or ‘apocalypse’

If risks raised by artificial intelligence aren’t the priority for those living under missiles, elsewhere AI itself is becoming an intellectual battleground. Interested parties often categorise others as one of three types: myopic idealists naively pursuing an AI utopia, blinkered corporations recklessly chasing profit, or visionless killjoys exaggerating risk.

Coming at this with a protection-of-civilians lens, I find the infighting within the AI sphere frustrating. AI progress raises serious protection issues across current, near-term and long-term horizons, and mitigating risk requires collaboration.

The open letter provoked a battle on Twitter, with 280-character salvos flying in all directions attacking others’ positions. Critics included Timnit Gebru, former co-lead of Google’s ethical AI team, whose controversial departure from Google in 2020 was reportedly related to her concerns about the risks posed by large language models, the very type of system, like GPT-4, that FLI’s letter highlights.

A screen showing a 1980s video war game with the word ‘Warning’ flashing
Photo credit: Suzy Madigan

Nevertheless, Gebru labelled it a “horrible ‘letter’ on the AI apocalypse” and published a response statement with fellow authors of a widely cited 2021 paper on large language models. While agreeing with parts of the FLI letter, the researchers criticised it for “fear-mongering” and for elevating hypothetical long-term, existential risks over harms already occurring from AI systems deployed today, such as biased decision-making, entrenched inequalities, data and intellectual property theft, and worker exploitation.

For the majority of ‘average’ citizens who aren’t AI specialists and are struggling to get their heads round AI at all, it’s difficult to know what to think, particularly when experts from different sectors are bickering about which harms to worry about most, and when.

What’s striking is that even nuanced debate takes place mostly within an AI bubble, among specialists living in wealthy nations and focusing on wealthy-nation issues. There’s limited discussion of potential impacts on ‘Global South’ countries (a political economy term, not always a geographical one), including fragile states, and only fractional inclusion of their citizens’ opinions.

Ways to approach risk

My ‘day job’ involves working on human rights and protection for an international aid organisation — hence writing this from Ukraine. I’m here meeting highly skilled partners, mostly women, who are delivering emergency help and services across the country. They’re sharing their analysis of the evolving, specific needs of different people, and the individual risks faced by unique groups so that we can tailor protection activities accordingly.

In Ukraine, as in other conflicts, there are indiscriminate threats like missiles and explosive drones that can kill or maim anyone, irrespective of their age, gender, physical health or ethnic group. People’s ability to protect themselves against indiscriminate threats, however, isn’t necessarily equal: those with disabilities might not easily reach shelters; older people might not have smartphones or know how to access digitised safety information. Other risks, like sexual exploitation and trafficking, are higher for particular groups, such as women and adolescents.

The point is that humanitarian actors analyse and try to mitigate all risks — both generalised and individual. Different organisations specialise in different types of protection, but each should understand the wider risk landscape and coordinate with others on who is tackling what. (I’m not saying humanitarian coordination works perfectly, but it’s the right aspiration).

Cats in sacks can’t solve problems

A photo of a dead-end sign on a British street
Photo credit: Suzy Madigan

So it is with that perspective that I find the infighting within the AI sphere frustrating. Far from thinking that the FLI signatories are indulging in hyperbole, I think they’re right to elevate the current AI race to a serious protection issue. The letter may not be perfect, but its timing means it has gained widespread attention where previous Cassandras weren’t listened to.

The letter raises long-term hypothetical risks, but it also includes imminent threats such as the flooding of media channels with disinformation on an unprecedented scale (facilitated by large language models and deepfake technology). In 2024, elections will take place in over 70 countries. What are the implications for democracy, and the avoidance of post-election violence, if citizens cannot trust anything they see or read, including the validity of election results?

There’s space to tackle both current harms and future risks. One doesn’t preclude the other, and each requires attention now. And that work requires supportive collaboration.

Don’t want to hear it

Sometimes warnings need to be loud to get noticed — just as in some Ukrainian cities, air raid sirens can suddenly sound on streets where life seems to be carrying on as normal. Raising this alert about artificial intelligence has amplified discussion about risk (in volume at least if not yet always in sophistication). That’s not easy over the noise of public hype around AI, big tech companies’ PR, excitable start-ups fuelled by venture capitalists, and international competition between states like the US and China. It’s in many actors’ interests not to pause AI development, fearing they’ll be left behind.

Gary Marcus, signatory to the Future of Life Institute open letter, speaking at AIUK hosted by The Alan Turing Institute, March 2023. Photo credit: Suzy Madigan

The day after the FLI open letter, for example, the UK Government published a white paper, A Pro-Innovation Approach to AI Regulation. The UK is prioritising AI development, not legislation that would “risk placing undue burdens on businesses.” Instead, the Government argues that a light-touch “regulatory framework” will address risks and public concerns, focusing not on specific technologies but on the context in which they’re used (e.g. not on chatbots per se, but on their use in sensitive environments like health).

Secretary of State Michelle Donelan says in the introduction: “Having exited the European Union we are free to establish a regulatory approach that enables us to establish the UK as an AI superpower.”

Balancing act

As I explore in The Machine Race: How Humans Can Keep Up With AI, the fields of artificial intelligence and humanitarian aid have much in common. Artificial intelligence can offer huge opportunities for improving healthcare and education, tackling climate change and even, some proponents argue, reducing poverty, all areas in which humanitarians work. It’s possible that successful applications of AI could, in certain circumstances, produce greater social impact in one week than a hundred NGOs could achieve in one year.

Yet, artificial intelligence also has the potential to increase inequalities, strengthen authoritarian regimes, threaten democracy and therefore peace. That would multiply humanitarian needs, not reduce them.

Safely exploring opportunities and robustly analysing possible harms do not have to be mutually exclusive. Pausing to think through the almost unlimited areas of life that AI will affect, to plan accordingly and to mitigate the risks, may be idealistic given the interests involved. But it’s the right thing to do.

Hit ‘Follow’ for new article alerts, plus the email button to get them into your inbox on release. Share your comments, corrections and suggestions here, on LinkedIn, or on Twitter @TheMachineRace. See ‘About’ page for author’s biography.


The Machine Race by Suzy Madigan

Human rights specialist | Aid worker | Founder of @TheMachineRace | Accelerating human conversations about AI & society