Dear Elon Musk, Bill Gates, and Stephen Hawking: The Problem isn’t Artificial Intelligence. It’s Giant Killer Robots

In the last couple of years, some of humanity’s brightest people have begun warning the rest of us dummies that the development of artificial intelligence poses a real danger to the human race.

No doy! Terminator came out in 1984, guys. Since then there’s been literally no movie where the giant AI is all, “Gosh, I sure love humanity! How can I help you guys out?” You geniuses are a bit late to this party. (“Oh, I was busy making billions of dollars and/or discovering the fundamental laws of the universe” is no excuse for not keeping up with pop culture.)

Now, I don’t normally like to disagree with my betters¹, and I hate to be that guy², but let’s consider where the real problem lies.

The problem isn’t AIs themselves. A rogue AI sitting in your phone could basically, well, make prank calls to your mom. Maybe it’d shuffle your address book around, or mess up your high score on Candy Crush Saga³. Nothing extinction-level here.

The problem is putting AIs into giant robot bodies! These are two separate issues, but everyone conflates them. Consider that article I linked to above, where the subhead is “Google-owned Boston Dynamics released a video showing a 6' tall 320-lb humanoid robot named Atlas running freely in the woods.”

Also, we could just move 20 feet away from the robot and it’d get caught on its giant schlong tether.

The real problem here isn’t the AI, it’s the giant, horrifying killer robot named after a torrid novel about how smart people deserve everything good in life and should just take it and totally would if they weren’t held down by all the stupids who make them coffee and clean their houses, ugh.

The truth is zero people in movies have been believably killed by AIs who did not also have killer robot bodies.⁴ Going after AIs is like us trying to prevent shooting deaths by lobotomizing everyone but letting them keep their guns. (In the United States we already know this won’t work because we have Florida.)

Further, we’ve seen again and again that making the killer robot bodies nigh-indestructible is especially bad. But consider how far we are from that: Today we’re proud if we have a robot that can walk a mile without needing a recharge or falling over. So if we got AIs tomorrow, we could all just walk a mile away from them and that’d be the end of the great cyber war.

Also, robots don’t heal. Imagine if you still had every cut, scrape, virus, and broken bone you’d earned since you were born. You wouldn’t be very high functioning, although to be fair you’d still be better off than Donald Trump.

So, even if we do make autonomous killer robots and put rogue AIs inside them (and I have to stress we really shouldn’t do that), all we need to do is shoot them. Once. Then it’s pretty much done. Humanity wins!

That is unless we build rogue AIs and killer robots (or spaceships) and make them invulnerable or able to heal. Let’s not do that — on this even the mighty geniuses can agree with us lesser people who’ve actually seen Metropolis, The Day the Earth Stood Still, 2001, Dark Star, The Black Hole, WarGames, Westworld, Tron, Alien, Blade Runner, Robocop, Terminator, Bill & Ted’s Bogus Journey, The Matrix, I, Robot, Transformers, Wall-E, or Ex Machina.

¹not true
²actually love it
³not a saga
⁴disallowing the 1980s “the AI sent an electrical pulse through the telephone wires and zapped my ear” or the 1990s “if you die in cyberspace your brain also decides to die” bull