“Apocalypse Now?,” Part 1: Robots Rise
Or: what happens when our machines finally outsmart us?

This is part one of the Open Source series, “Apocalypse Now?,” about the many ways our world might end.
The search begins on St Ebbe's Street in Oxford, England, in the curious offices of the Future of Humanity Institute. Inside, founder Nick Bostrom, researcher Anders Sandberg, and a number of other highly intelligent young philosophers, engineers, and scientists have set about imagining a way to keep what Bostrom calls “the human story” going safely along.

From Bostrom’s perspective, wicked problems like climate change or income inequality seem like a planetary heart condition, or back pain: serious, but not fatal. He and the staff of the F.H.I. want us to develop a vigilance against existential threats — the truly disastrous, world-ending outcomes that might arise, probably from our own fumbling.
Bostrom has been able to persuade very smart, tech-savvy people like Bill Gates, Elon Musk, and Stephen Hawking that one such risk might come from the world of machine intelligence, advancing every day in labs around the world.
Before you protest that Siri can’t even understand what you’re saying yet, you have to remember that the apocalyptically minded, like the Astronomer Royal, Martin Rees, think on the longest of timelines.
Here’s how they see the story so far: Earth has been turning for around 4.5 billion years. Homo sapiens has only witnessed a couple of hundred thousand of those. And only since 1945 have we human beings had the ability to wipe ourselves out.
On the astronomical timeline, 70 years of nuclear peace seems a lot less impressive. And the fact that advanced computers — equipped with new methods for autonomous learning — are mastering the devilishly complicated game of Go and analyzing radiology readouts well ahead of schedule is cause for concern as well as celebration.

And our apocalypse watchers want us to be perfectly clear: they’re not talking about Terminator. Bostrom more often describes AI “super-intelligence” as a sort of species unto itself, one that won’t necessarily recognize the importance we humans have typically ascribed to our own survival:
The principal concern would be that the machines would be indifferent to human values, would run roughshod over human values… Much as when we want to build a parking lot outside a supermarket and there happens to be an ant colony living there, but we just pave it over. And it’s not because we hate the ants — it’s just because they don’t factor into our utility function. So it’s similar. If you have an AI whose utility function just doesn’t value human goals, you might have violence as a kind of side effect.
The Columbia roboticist Hod Lipson tells us how his “creative machines” pick things up — the process we now know as “deep” machine learning. It isn’t by being given new rules, but by being set free to observe new behaviors and draw their own conclusions. It’s a bit like raising a child.
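Lipson’s phrase points at the core mechanism. As a toy sketch only (not his systems, just an illustration assuming Python and numpy), a single artificial neuron can learn a dividing line purely from labeled examples; nowhere in the code is the rule itself written down:

```python
# A minimal sketch of "learning from examples rather than rules":
# the model is never told the rule (label 1 when y > x); it infers it
# by adjusting its weights against labeled observations.
import numpy as np

rng = np.random.default_rng(0)

# Observations: 2-D points labeled 1 if they lie above the line y = x.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

# A single artificial neuron: weights and bias start out random.
w = rng.normal(size=2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: nudge the weights to better fit the examples.
for _ in range(2000):
    pred = sigmoid(X @ w + b)
    error = pred - y
    w -= 0.1 * (X.T @ error) / len(y)
    b -= 0.1 * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"Learned from examples alone: {accuracy:.0%} accurate")
```

Scale that idea up to millions of weights arranged in many layers, and you have, very loosely, what Lipson means by machines that observe and draw their own conclusions.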
It’s easy to think of these machines as stuck in a permanent infancy when you watch the strangely poignant robot videos posted by our local robot lab, Boston Dynamics. They can’t open doors; they stumble through the woods. But the point is that we have plunged into the deep water of man-machine interdependency, almost without noticing it, and the current is already carrying us away in unknown directions.


