The AI Box Experiment

Why a Superintelligence can’t be contained

Hein de Haan
The Singularity

--

It’s 2040. After a decade of research and dedicated programming, you believe your team has created the world’s first Artificial General Intelligence (AGI): an Artificial Intelligence (AI) that’s roughly as intelligent as humans are across all their intellectual domains. Since one of those domains is, of course, programming AIs, and since your AGI has access to its own source code, it soon starts to improve itself. After multiple cycles of self-improvement, it becomes an ASI: an Artificial Superintelligence, an intelligence far greater than any we know.

Can we keep an Artificial Superintelligence contained? Photo by Kelli McClintock on Unsplash

You have heard of the dangers posed by ASI: thinkers like Elon Musk and the late physicist Stephen Hawking have warned humanity that if we’re not careful, such an AI could lead to the extinction of the human race. But you have a plan. Your ASI exists only inside a supercomputer. It can’t walk away: it has no robotic body. The supercomputer has no connection to the Internet (or indeed to any other device). Its only way of influencing the outside world is a screen on which it can post messages. You’re being smart: your ASI could never cause any harm. Right?

The AI Box

Unfortunately, it’s not that easy. In fact, entire organizations (like MIRI) exist because it’s not…
