Will A.I. Destroy Us?

We should be afraid of machines.

So say luminaries such as Stephen Hawking and Elon Musk, who have recently argued that artificial intelligence poses the greatest threat to human survival.

As researchers and entrepreneurs in A.I., we think this argument is wrong.

On the contrary, we believe superhuman machine intelligence is our best chance of long-term survival as a species.

It’s not that artificial intelligence won’t someday become superhuman. It almost certainly will.

But we think the doomsday predictions about A.I. wiping out our species, thought-provoking as they are, fall into the same trap that renders most futurist predictions wrong: they assume everything else will remain constant.


Humans are notoriously bad at predicting the future, because we tend to extrapolate on just one dimension at a time, while assuming everything else about our reality will stay more or less the same. But of course, reality never progresses that way. Everything changes together.

The smartest and most knowledgeable among us are reasonably good at predicting the course that a particular technology or field may take, in isolation. The prediction that we will one day have superhuman machine intelligence falls into this category. It’s almost certainly true that computers will eventually be better at navigating the physical world, formulating scientific hypotheses, and developing novel technologies than our existing human brains are.

These are the very characteristics that have allowed humans to dominate the planet up to this point. So the idea that superhuman machine intelligence is the greatest threat to our long-term survival seems logical.

But that’s only if you focus on the trajectory of machine intelligence alone.

What about the human trajectory?


The most common image of superintelligence run amok is a metallic robot with advanced weaponry deciding that puny humans have no reason to exist. A war ensues, and the machines’ superior technology, information, and lightning-fast decision-making render us helpless, doomed to total annihilation. That’s the sci-fi version.

Most “A.I. is dangerous” proponents offer a more nuanced portrayal of the future, in which machines don’t necessarily take up arms against us, but simply prioritize their own existence above all else, and somewhere along the way accidentally wipe us off the face of the planet.

If you pit a superhuman intelligence against a mere human one, this all makes sense. You can’t really argue with the assertion that the species designated as “super” will win. It’s a tautology.

But is it reasonable to assume that normal human intelligence will remain…normal?


We believe it’s far more likely that humans will use advancements in machine intelligence, synthetic biology and materials science to augment our own natural abilities along the way.

It’s already happening. Today, we use external neuroenhancements in the form of software. Think of how dramatically our ability to navigate new environments has been enhanced by Google Maps. Or how Wikipedia has increased our ability to recall facts. Or how our ability to maintain vast social connections has been transformed through Facebook, Twitter and WhatsApp.

These are the beginnings of our bionic lives. Our enhanced abilities in these areas are already savant-like.

But soon enough, we’ll be able to embed this sort of software directly into our body and brain.

In the not-so-distant future, it’s possible that, instead of having to look at our phones to get directions, we will have a neural interface embedded inside our brains that will communicate directly with our synapses.

We’ll no longer need to “get directions” and consciously decide to go left then right. Rather, we’ll simply feel where we are and have an intuitive sense of the direction we need to go. Perhaps we’ll use echolocation to gauge distances, or have infrared sensors in our eyes that allow us to see in the dark.

Advancements in neuroscience and genetics may eventually also allow us to develop larger brains, enhance or destroy memories, or accelerate learning.

Human intelligence as we know it will change.


But at a certain point, we may run up against the limits of our biology.

And the advantages of superhuman machine intelligence over our own will become increasingly apparent. Advantages like brain size, processing speed, interconnectedness and immortality.

As this happens, the inevitable question will present itself: why constrain our existence to biology?

As we develop superhuman machine intelligence, it is likely that we will also gain a sufficiently detailed understanding of the human brain to simulate it digitally.

Once digital brain simulation gets good enough, we will start to create simulations of our brains, first as a way of testing interactions with neural implants and medications, then as a means of extending our natural biological lifespan, and bit by bit, as our preferred mode of existence.

Eventually, by uploading our brains, we will all become superintelligent machines ourselves.

(Don’t believe us? Check out neuroscientist Michael Graziano’s excellent thesis on brain uploading for the science behind these claims.)

This is how superhuman intelligence may lead us, not to destruction, but to immortality.

By the time A.I. becomes powerful enough to destroy us, there will be no mere humans left to destroy.

Man and machine will have become one.

Prerna Gupta and Parag Chordia are the founders of Telepathic, a company that uses A.I. to enhance human creativity.