If AI Were to Wipe Out Humans…

Praful Krishna
Published in The Startup
3 min read · Oct 23, 2020

If AI were to wipe out humans in the future, the following would have to be true.

1. AI will have to develop Free Will.

Free Will is a deeply philosophical concept that no Medium post can really do justice to. Discussing whether humans have Free Will takes tomes. For AI, though, we can pose a simple question: will the artificially intelligent machines of the future make their own conscious decisions, or will they be slaves to their algorithms and training data?

If AI were to wipe out humans, it would have to decide of its own accord to do so, and it would have to follow through with that decision despite objections from, among others, humans. That requires Free Will and a strong character.

2. AI’s value systems will have to be very different.

Let’s say we make some technological breakthroughs in the 21st century and actually develop AI that is self-aware, independent, and able to think for itself. The question then arises: what will motivate it? Nobody knows, but possible answers include a) a willingness to serve humans; b) a human-like ego; or c) an intricate value system similar to the one we humans have.

My bet would be on the third, because at the end of the day it is we humans who are programming the AI and creating the fundamental algorithms that will someday run these Free Willed programs. There is plenty of evidence of human biases, good and bad, creeping into AI. There is also plenty of evidence that, as a whole, human society is moving towards a tolerant value system that values the dignity of human life.

3. There would have to be one Bad AI.

Let’s say we lose the battle on values as well. There always are evil geniuses. Or a third world war could fundamentally change our own cherished ethos. Or maybe the Free Willed AI just ignores human values.

Still, to wipe out humanity, the ‘bad AI’ — the independent-thinking program with no values whatsoever — would have to be the one program that dominates in power over all others.

In reality, AI is a tool, and there would be trillions of AIs running around as they become more and more pervasive. Each will have a different function: we don’t expect human generals to perform surgery or human doctors to command brigades, so why would AI be different? Each will have a different operating system, a different level of control by humans, and a different set of values.

In short, for every ‘bad AI’ there is likely to be one, or more, ‘good AI’ that would fight for us in an inter-planetary war.

4. The Bad AI would have to decide to wipe out humans.

Some would say that even humans organize themselves and act as one. That social contract is the very basis of nations. When we say that the USA is the most powerful country in the world, what we mean is that its 330 million citizens act in unison via their government despite the huge differences among them.

Let’s say the Bad AI also emerges in that way, or in some other way, and comes to dominate the AI-sphere. Now the Bad AI will have to decide to kill humans. To me, there is no obvious reason why it would. Sure, Hollywood and Bollywood have some suggestions, but how dumb would the omnipotent Bad AI have to be to decide that humans are a threat to humanity or to its own existence?

5. The humans would have to be powerless.

Let’s grant even that: let’s say in the future there is one dominant Bad AI that decides it must wipe out humans. Still, for it to succeed we must assume that, in the face of extinction, all of humanity sits powerless; that after creating such an awesome AI, we are incapable of creating other tools to contain it, tools that will actually listen to us. If this is true, we are doomed for sure.

**

Each of the five necessary conditions above has a non-zero probability of being true. But all five would have to hold at once, so their probabilities multiply. What, then, are the odds that AI will drive humanity to extinction?
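For illustration, here is a minimal sketch of that math in Python. Every probability below is a made-up placeholder, not an estimate from this article; plug in your own numbers and watch the product shrink.

```python
# Back-of-the-envelope estimate: all five conditions must hold at once,
# so their probabilities multiply (assuming, roughly, independence).
# Every number below is purely illustrative, not a real estimate.

p_free_will       = 0.10   # AI develops genuine Free Will
p_alien_values    = 0.20   # its value system diverges sharply from ours
p_dominant_bad    = 0.10   # one "Bad AI" dominates all other AIs
p_decides_to_kill = 0.10   # that Bad AI decides to wipe out humans
p_humans_helpless = 0.05   # humanity cannot build tools to contain it

p_extinction = (p_free_will * p_alien_values * p_dominant_bad
                * p_decides_to_kill * p_humans_helpless)

print(f"Joint probability: {p_extinction:.6f}")  # 0.000010, i.e. about 1 in 100,000
```

With those made-up numbers, the joint probability comes out to roughly one in a hundred thousand.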

You do your own math; I am sleeping fine.
