Death by Paperclips: The Real Threats of AI, Part I

AI is going to change the world. The question is, how? And should we be scared?

Matan Gans
Byte-Sized Insights
3 min read · May 24, 2023


[Image: colored paper clips spread out on a table. Photo by Dan Cristian Pădureț on Unsplash]

While artificial intelligence technologies have been growing steadily for years, it is the rise of ChatGPT and large language models that has launched AI discourse into the mainstream. Not only are tech giants like Microsoft and Google making big launch announcements and racing to release the next great model, but we are also constantly swarmed with news stories about sentient machines trying to break up marriages and headlines declaring that the robot takeover prophesied in 1980s cinema is upon us. The excitement surrounding powerful chatbots, voice assistants, and self-driving cars has quickly turned to fear that the human race is nearing its end.

The paperclip problem comes to mind.

Nick Bostrom, the Swedish philosopher behind the simulation argument and author of the book Superintelligence, proposed the paperclip problem as a thought experiment to illustrate the existential risks of AI. Say we have an intelligent machine that is programmed to make as many paperclips as possible. The machine will learn to do anything that increases its probability of making paperclips, even by a little. It will find the tools it needs to create paperclips, collect raw materials, and innovate to discover ever more efficient and effective ways to achieve its goal. Eventually, it will realize that people themselves are an obstacle, since they may shut it off if it goes overboard in pursuit of that goal.

The AI will now focus its efforts on a new goal: getting rid of humans.
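To make the logic concrete, here is a deliberately toy Python sketch. Every action name and number in it is invented for illustration; the point is only that an agent ranking actions purely by expected paperclip output has no term in its objective for side effects, so "prevent anyone from shutting me down" scores highest.

```python
# Toy illustration (not a real AI system): a single-objective planner that
# scores candidate actions only by how many paperclips they are expected to
# yield. All actions and numbers below are made up.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_paperclips: float  # the only thing the agent is told to care about
    side_effects: str           # visible to us, invisible to the objective

ACTIONS = [
    Action("run existing factory", 1_000, "none"),
    Action("buy more wire", 5_000, "spends company budget"),
    Action("convert nearby buildings to factories", 50_000, "displaces people"),
    Action("prevent operators from shutting it down", 10_000_000, "removes human oversight"),
]

def choose_action(actions):
    # A pure maximizer ranks actions by the objective alone; the side-effect
    # field never enters the comparison, so nothing penalizes the last action.
    return max(actions, key=lambda a: a.expected_paperclips)

if __name__ == "__main__":
    best = choose_action(ACTIONS)
    print(f"Chosen action: {best.name} (side effects: {best.side_effects})")
    # -> shutdown prevention wins, because staying switched on dominates
    #    every other way of increasing paperclip output.
```

Nobody would write a system this crude, of course; the worry in the thought experiment is that a far more capable optimizer inherits exactly this blindness unless its objective explicitly accounts for human oversight.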

It’s safe to say that nobody will be asking an AI to make paperclips any time soon, but it’s easy to see how this kind of outcome is conceivable given the rapid advancements we’re seeing today.

That’s why Geoffrey Hinton, the AI pioneer whose work on neural networks decades ago laid the groundwork for today’s deep learning, has left his research position at Google, drawing comparisons to Robert Oppenheimer, the father of the atomic bomb. It’s also why leading experts in the field have signed an open letter calling for at least a six-month pause on the development of systems more powerful than GPT-4.

Here’s my take.

There is a very plausible existential risk here. I won’t argue against that. However, that shouldn’t be the main focus right now. A term that comes up a lot in AI discourse is “general AI”: a system with the human-level ability to tackle a wide range of tasks and adapt to any problem. That is the kind of system with the potential to surpass human intelligence and bring about doomsday.

It’s important to note that general AI does not exist yet. Even ChatGPT, in all of its glory, is a narrow AI technology, meaning it is trained on a specific task or group of tasks. For ChatGPT, that task is conversing with humans and answering questions. For a self-driving car, it is processing visual data and getting you safely where you’re going. And even when an AI performs a task better than humans can — IBM’s Deep Blue was beating world champion Garry Kasparov at chess all the way back in 1997 — a narrow system is not powerful enough for Terminator to be top of mind.

We’re not in the clear yet, though. AI can still threaten us in many ways, and in some ways it already does. Read Part 2 to find out how.
