Is Artificial Intelligence an Existential Threat?
Algorithms will become exponentially more powerful. Humans, not so much. And algorithms can effectively control human-based machines, including governments, corporations, and money.
How many science-fiction stories feature some huge institution inventing a new supercomputer that, the moment it is switched on, takes over the world? It won’t happen that way. Instead it will happen as we are seeing today: humans will gradually lose control of our future. Then what happens to us won’t be our decision.
Someone should open a prediction market on the bet that there will be no humans alive in 300 years — or none except a few in zoo-like or lab-like settings. We won’t be around to see how that bet turns out. But if anyone can buy or sell shares in the bet at any time, the changing prices will track public consciousness of existential threats to human life, from artificial intelligence as well as other sources.
We probably could survive nuclear war, engineered plagues, climate disaster, or an asteroid strike. I cannot imagine how we will survive AI. The machines might choose to keep us around, but that decision is revocable at any time. The decision to get rid of us is not.
(Comment on The Real Bias Built In at Facebook, New York Times May 19; comments closed while I wrote it.)