How Not to Worry About AI: The Rebellion Against “Extinction”

--

Just as we were all getting our heads around artificial intelligence, and governments were making reassuring noises about safety rules, a new row has erupted in the AI sphere.

Serious people, academics and tech professionals among them, have elbowed their way into the headlines with warnings about AI’s ‘existential risk’ to humans and the prospect of AI developing ‘superintelligence.’

In reaction, other equally serious people, also academics and tech professionals, have urged, ‘Don’t get distracted by killer robots! Focus on AI harms happening right now!’

What’s going on, and how do regular citizens navigate all this? (I talk more about demystifying AI so everyone can understand it on this week’s Standard Issue podcast. Grab a cuppa and have a listen.)

Listen now to the Standard Issue podcast on The Machine Race. Photo credit: Suzy Madigan

Who’s arguing about what, and why does it matter?

A note on terms: There’s no single agreed definition of artificial intelligence, but The Alan Turing Institute describes AI as “when a machine or system performs tasks that would ordinarily require human (or other biological) brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems.”

Hypothetical ‘superintelligence’ might be described as AI that surpasses human intelligence, which could lead to it becoming “unstoppably powerful.” (Maybe grab that soothing cuppa now.)

If you’re a regular follower, you’ll know that The Machine Race aims to take a balanced view of artificial intelligence, peeking under the hood to understand what it’s all about. TMR is about calmly examining the social dilemmas, power dynamics and implications of a society-changing technology so we can understand it, and use our power as citizens and consumers to influence how AI affects our world. There are enough googly-eyed Terminator images circulating without adding to the AI hype.

That said, it was arresting to read a statement published by the US Center for AI Safety (CAIS) last week on the potential existential risk of AI, signed by some increasingly household names:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

As statements go, it isn’t long, nor is it new. In How Humans Can Keep Up With AI, I shared Stephen Hawking’s warnings about AI potentially going pear-shaped for mortals if risks aren’t carefully weighed. Over the last few months, other heavyweight experts on AI have been saying this increasingly loudly — such as Geoffrey Hinton, ‘godfather of AI’ and only very recently ex-Google.

What raised an eyebrow about this particular statement is who signed it. Heading the signatories are leaders from several of the very tech companies currently developing and releasing AI systems to the public. That includes figures from Microsoft and Google DeepMind, and Sam Altman, the CEO of OpenAI, the organisation behind ChatGPT. (Note that many scientists also signed.)

All about balance. Photo credit: Suzy Madigan

Well, that was unexpected

So why are the front-runner developers of AI, who are currently investing vast sums in AI research and whose mission statements refer to benefitting humankind, effectively saying, “Just FYI, folks, we’re busy creating a technology that could end civilisation as we know it”?

Prior to the statement, Sam Altman had already surprised a US Senate Subcommittee in May by asking for regulation of AI systems, including large language models like ChatGPT and GPT-4, its available successor. Sceptics suggest big tech is pushing regulation because it could help them crowd out smaller players lacking deep pockets. (Read TMR’s ‘AI Politics Gets Real — But Who’ll Have A Vote’ for a summary of which governments are doing what to regulate AI, and why it’s harder than it sounds.)

Now, OpenAI have gone further: they’re calling for “governance of superintelligence.” Many ask — where on earth do you begin with that?

Extinct cats

Critics have suggested that all this extinction talk may be a ‘dead cat strategy,’ a political decoy in which those under scrutiny announce something shocking to divert media attention from a troublesome story elsewhere.

Experts like Timnit Gebru, former co-lead of Google’s ethical AI team, and Professor Emily M. Bender have for years raised concerns not about future AI overlords, but about current “real-world harms”: for example, those caused by in-built gender, racial and other types of bias in AI training data, and problems like “discrimination, surveillance…data theft.”

Dan Hendrycks, director of CAIS, which published the ‘extinction statement’, told The Atlantic that its tech leader signatories may be demonstrating authentic concern. Emily Bender told the same publication, “Even under that charitable interpretation, you have to wonder: If you think this is so dangerous, why are you still building it?”

What should and shouldn’t we worry about?

Highly intelligent scientists, professors and tech industry professionals are locking horns over which AI risks are the most important to talk about, including, crucially, where regulation needs to focus. Motivations are important to consider. Equally, it’s important that less anger and more illumination is brought to this subject. Why? Because ordinary citizens need to understand the arguments around AI and have a say in how these society-shaping technologies are designed, rolled out, and used. They deserve to know what the real immediate and future risks are, explained rationally and in good faith.

Fundamentally, marginalised groups and people in the global south need to be involved. AI training data doesn’t represent them, and currently, neither do the vast majority of conversations about its design and use.

Too important to fight over

Conflict is a natural part of the human condition and, when managed effectively, can lead to creative solutions. When conflict isn’t managed well and people don’t engage with alternative perspectives, positions become entrenched and can even lead to violence. (Sorry for the downer, but you are reading about AI from the perspective of a humanitarian aid worker, after all. Not that I’m suggesting there will be a punch-up in the computer science lab, but elections influenced by AI-generated deepfakes and disinformation are likely to cause post-electoral violence, so this isn’t to be taken lightly.)

Since Facebook launched in 2004, social media has helped reduce nuanced debate to polarised positions. People who might actually agree with each other on some points instead zero in on one aspect of another’s argument. We’re so busy dismissing the views of ‘opponents’ that we miss opportunities for common ground.

Artificial intelligence is evolving rapidly, however we feel about it. It’s important that people express strong views on either side to stretch the limits of the argument. But before it gets too polarised, let’s turn down the heat and shed some light on a way forward that benefits everyone.

Listen to Suzy Madigan talking about The Machine Race on the Standard Issue podcast. Hit ‘Follow’ above to be alerted to new articles from The Machine Race blog. Share your comments, corrections and suggestions here, on LinkedIn, or on Twitter @TheMachineRace. See ‘About’ page for author’s biography. Thanks for reading.

--

The Machine Race by Suzy Madigan

Human rights specialist | Aid worker | Founder of @TheMachineRace | Accelerating human conversations about AI & society