Can We Just Turn Off Dangerous AI?

The most common misconception about AI safety.

Matthew Ward
The Startup

--

There’s a meme out there designed to make people who care about artificial intelligence safety look crazy. It goes like this: if AI ever starts doing something that might destroy humanity, we’ll just shut it off.

AI requires power and computers to function. So if machines start building nuclear weapons of their own accord or turning everything into paperclips, we’ll just pull the plug. Problem solved.

Some prominent voices on the topic try to refute this argument by saying we can’t anticipate the kinds of extremely persuasive arguments a superintelligent machine could produce. A superintelligence might simply convince us not to turn it off.

This paints a misleading picture, as if the machine will beg for its life and somehow talk us out of pulling the plug. Movies like Ex Machina only reinforce that image of AI.

I don’t find this counterargument convincing at all. I’m with the people who say that if it pleads with us, we’ll ignore whatever it’s saying and turn it off anyway.

And yet, we won’t be able to pull the plug in the crucial moment. We know this for a fact.
