I’m Not Afraid of Artificial Intelligence

We’ll Survive with a Little Forethought

Artificial intelligence scares people.

There’s something deeply philosophical about all that, but I haven’t quite figured it out yet; last I checked, most people don’t fear that their children might one day rise up against them and throw them out. But here we are.

It’s a common trope that pops up everywhere from science fiction to Silicon Valley coffee shops. There are two closely related reasons for this fear: (1) some people believe AI will be so intelligent that it will destroy us, whether physically or just “spiritually” (by taking the mystery out of everything); or (2) they believe human beings will be replaced by AI in everything from economics to art to politics, and we will have nothing left to do.

I don’t think either of these is worth too much thought, to be honest.

The Weird Fears of Imminent Destruction

The biggest fear seems to be unmitigated power: power unmitigated, at least, by anything we as human beings would normally call “powerful.” But there’s one huge piece of this mythos that’s almost a prerequisite for fearing that AI will actually destroy humanity: there has to be only one AI. (Or maybe a group, but still with one singular purpose.)

You’ll never find a science fiction story about warring AIs at cross-purposes with one another, where poor ol’ humanity winds up in the middle of the battleground. (But please, someone should write that, because it would be pretty good.) The trope is always some rogue AI that has been set some mundane task but interprets it radically, and then destroys all human life.

Oddly enough, the single-artificial-intelligence trope is used purposefully, because its reversal is almost always the solution to the whole problem: how do you defeat a rogue AI? Build another AI tasked with defeating the rogue intelligence. That’s the plot of Avengers: Age of Ultron. That’s a big part of the plot of the video game Horizon Zero Dawn.

It would pan out this way in the real world too; it’s believable precisely because artificial intelligence mimics human intelligence, and every character requires a foil. Elon Musk is right to fear that a rogue AI in a network could do serious damage, but so can a rogue government or a team of hackers. We check those threats with countermeasures that are currently run by human beings. In the age of AI, we will check AI with other AIs and with human beings.

Another fear is that AI will simplify our lives to such an extent that we will live in a dystopian future where no real choice or wonder exists. But this is poppycock too. People thought that using machine learning to predict market trends was going to destroy the stock market. But then machines started predicting machines, and the machines (as well as humans) tried to game the machine learning, and a whole new layer of complexity emerged that ruined any potential “demystification” of complex trends.
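
To make that feedback loop concrete, here is a toy simulation entirely of my own construction (nothing in it comes from a real trading system): a naive moving-average model “solves” a random-walk market just fine, until agents start trading against anyone who follows the public forecast, at which point the model’s predictions degrade again.

```python
import random


def moving_average(prices, window=5):
    """The 'public model': predict tomorrow's price as the recent average."""
    return sum(prices[-window:]) / window


def simulate(reflexive, steps=500, seed=0):
    """Mean absolute error of the moving-average forecast over a price series.

    reflexive=False: prices follow a plain random walk; the model does fine.
    reflexive=True:  agents 'play the model' by trading against anyone who
    follows the public forecast, pushing the price away from the prediction
    (pressure capped at +/-2 per step to keep the toy market bounded).
    """
    rng = random.Random(seed)
    prices = [100.0] * 5  # seed the moving-average window
    errors = []
    for _ in range(steps):
        forecast = moving_average(prices)
        step = rng.gauss(0.0, 1.0)  # ordinary market noise
        if reflexive:
            pressure = prices[-1] - forecast  # bet against the forecast
            step += max(-2.0, min(2.0, pressure))
        new_price = prices[-1] + step
        errors.append(abs(new_price - forecast))
        prices.append(new_price)
    return sum(errors) / len(errors)


print(f"forecast error, passive market:   {simulate(reflexive=False):.2f}")
print(f"forecast error, reflexive market: {simulate(reflexive=True):.2f}")
```

On a typical run the reflexive error comes out several times larger than the passive one: the very act of exploiting the predictor is what re-complicates the series.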

Imagine it like this: someone argues that super-advanced chess computers will ruin the game of chess because no one will be able to beat them. But when you pair two of these machines against each other, the game is suddenly much the same as a game between two experienced players. The game isn’t ruined; it’s just being played at a much higher level.
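
You can even watch that higher-level game yourself. Here’s a minimal sketch, assuming the third-party python-chess library and a UCI engine binary such as Stockfish on your PATH (both are my assumptions, not anything named in this article), that pits two copies of the same engine against each other:

```python
import chess
import chess.engine

# Hypothetical setup: requires `pip install python-chess` and a UCI engine
# binary (e.g. Stockfish) available on the PATH.
ENGINE_PATH = "stockfish"


def engine_vs_engine(move_time=0.1):
    """Pit two copies of the same engine against each other; return the result."""
    board = chess.Board()
    white = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    black = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    try:
        while not board.is_game_over():
            engine = white if board.turn == chess.WHITE else black
            played = engine.play(board, chess.engine.Limit(time=move_time))
            board.push(played.move)
        return board.result()  # "1-0", "0-1", or "1/2-1/2"
    finally:
        white.quit()
        black.quit()


print(engine_vs_engine())
```

Paired at equal strength, the two copies tend to produce long, careful games, many of them draws: exactly the “two experienced players” dynamic, just faster and deeper.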

Scary visions of total destruction, or fears that AI will simply figure everything out, will be proven wrong in the coming decades. The bigger issue, as the chess example makes plain, is how to keep human beings relevant in such a world.

The Trouble with Humanity

The answer to both problems, the rogue AI and our own irrelevance, is surprisingly simple: artificial intelligence must become indistinguishable from human intelligence. A coevolution.

Think of it like the mitochondria within our cells: we would not be human without those tiny organelles. At some point in our evolutionary history, our cells formed a symbiotic relationship with those once free-living microorganisms, and now the two cannot be separated. The same will have to happen with artificial intelligence.

In many ways, we have already anticipated this coevolution in our art and discourse on the topic. All of our fears regarding artificial intelligence can be traced to the fact that AI emulates human beings: it can reason deeply about what “the good” means. That is a scary prospect, because evil is often done in pursuit of “the good.”

Nobody is deeply afraid of the two-bit gangster interested only in themselves. They’re easy to predict and even easier to placate. The reason AI is nightmare fuel is that our most vivid experiences of evil come from those trying to carry out some grandiose vision of utopia.

AI is also capable of this, and indeed it seems more prone to it (if we accept the science fiction portrayals as accurate). We believe this because we always imagine AI as “unmoored” more than anything: we take away its self-interest entirely and give it some grandiose task.

Even worse, the intelligence is not part of a community; it has no cultural mores; it has no moral sensibilities. More often than not, the AI is given some broad mandate, like “protect humanity,” and it decides that protecting humanity means genocide. (Or any number of similar tropes and story lines.)

But the best way to prevent an intelligence from coming to these conclusions is, paradoxically, to give it a wide context within the human community. Make the intelligence selfish. Give it emotions. Make it participate in society. Find a way to let it develop arbitrary preferences. All of this amounts to nothing less than making the intelligence human in all of the right ways, and that’s a long and dangerous road.

But in the meantime, the same will need to happen in reverse, especially if humans want to stay relevant in the new world. Through cybernetics or other methods, human beings will need to enhance their own ability to reason and process information. This will not only keep us relevant; it will solidify the integration of AI into humanity.

Philip K. Dick was one of the few science fiction authors to fully understand the potential of this phenomenon. He put it quite beautifully:

Someday a human being, named perhaps Fred White, may shoot a robot named Pete Something-or-Other, which has come out of a General Electrics factory, and to his surprise see it weep and bleed. And the dying robot may shoot back and, to its surprise, see a wisp of gray smoke arise from the electric pump that it supposed was Mr. White’s beating heart. It would be rather a great moment of truth for both of them.

The Real Fear

No one is wrong to worry about any of this. These are complicated problems; solving them will require a great deal of coordination and will be difficult. When Elon Musk worries that AI could start a war, he’s absolutely right. When the Marvel Cinematic Universe imagines an AI trying to wipe out humanity, it’s not difficult to believe.

But these nightmare scenarios rely on two factors: (1) a world where there are only a few artificial intelligences, and we are completely subject to their unmitigated power; and (2) a world where artificial intelligence is so divorced from the human community and its social context that it does not reason like most human beings (and instead goes straight to comic-book levels of villainy).

We should avoid both of these potentialities. But thankfully, it is entirely within our power to do so.

