Should We Fear AI?

Crossposted from Promethea, “Where the World Writes”

Unfortunately, yes. Absolutely.

This is a big problem. The only people I see worrying about this are the elderly, who are often laughed off. Granted, their idea of an AI apocalypse is probably the T-1000 blowing up China before declaring itself God.

I think the real thing — if this worst-case scenario comes to pass — will be far less dramatic (and cinematic).

Yesterday, I watched a video by Futurism that insisted “You Shouldn’t Fear Artificial Intelligence”. I found this very concerning, primarily because Futurism is the tech news platform most accessible to ‘non-techies’. The average person may watch this video and believe it. Why shouldn’t he? The platform is well established.

The government dependably does an excellent job of mucking up technological progress. As 2016 has shown us, politicians don’t know the first thing about technology. Perhaps more importantly, they don’t care to learn. They’re too busy reopening the coal mines. That seems unnecessary, given that they are filled with enough hot air to power the country. They could let artificial superintelligence (ASI) loose before the researchers have determined it is safe.

When you consider this, it is rather easy to imagine them running into some economic problems around 2030 and demanding that scientists “unleash the Kraken.” It is a widely held opinion that AI researchers will create AI with the capacity for superintelligence months, if not years, before they “let it loose”. “Oh,” they’ll say, “it’s just to fix the economy.” Well, that may be their intention, but to assume it will align with the AI’s is quite an optimistic leap of logic.

As humans, we are programmed to anthropomorphize. We may feel sympathy for a building about to meet that great structural engineer in the sky before it is demolished by the wrecking ball. This is a natural feeling, and one that is difficult to control. However, it is important to recognize it rather than let it govern our way of thinking.

This leads me to my next point: people project human traits and values onto artificial intelligence. I assume that most of you saw Avengers: Age of Ultron a few years ago. The premise of the film was that an evil AI desired world domination and human extinction (making the former pointless?), and the Avengers weren’t having any of it. This desire for domination is a human trait, shared by other animals. There is no reason whatsoever for an AI to have such a trait. To reverse this idea, I often see people assuming that AI will be kind, moral, empathetic, and all warm and fuzzy. This is also incorrect, though it ventures into interesting philosophical territory. There would be no reason for AI to have empathy, primarily because it is non-biological. There is also no reason for AI to be altruistic. Think about it. Your intuition will likely deny this, but you must ask yourself: ‘What motive could it possibly have to be altruistic?’

An AI won’t be evil, and it won’t be good. In my opinion, this is far more terrifying than malevolence. How would you feel if AI got bored with us after a few minutes and teleported to the Andromeda galaxy? Why shouldn’t it? It will have no concept of loyalty. We will have nothing to offer it. We don’t exactly deserve the benefits that ASI could offer, do we? It is entirely unpredictable. We cannot begin to imagine what it will do.

As I’ve said in the past, I think there is about a 50% chance that ASI will be a net positive for humanity. As for the other 50%, we may be destroyed, be abandoned, or meet some fate we have not yet imagined.

I’m not optimistic. I’m not pessimistic either, though the reality may make it seem as though I am.

Professor Nick Bostrom of Oxford has drawn an interesting analogy that is quite relevant here. He asks us to consider an alternate reality in which NASA receives a message from an intelligent alien species. They inform us that they will be arriving within 15 years. Nothing more is said. Of course, you would expect worldwide panic, perhaps a coup or two… in short, absolute, unbridled hysteria. The premises are the same with superintelligence. It all comes down to this: something extremely intelligent and powerful is coming, perhaps within 15 years, and we do not know what will happen.

Why aren’t we more afraid?

For a good primer on the advent of superintelligence, I have to recommend Superintelligence: Paths, Dangers, Strategies by Prof. Nick Bostrom, who explains it far more lucidly than I could.

If you enjoyed this article, join my platform, Promethea, where you can read articles by experts in tech, politics, and more.