Will AI ever be a threat to humankind?

Anisha Yadav · Published in Predict · 4 min read · Jan 17, 2022

In 2020, Spot, a robot with the ability to broadcast messages, carry out video analytics and navigate environments, was released into public parks in Singapore to ensure that people were following safe-distancing rules.

It was a successful project and a great example of how artificial intelligence (AI) has seeped into our lives. From powering YouTube's video recommendations to running the algorithms behind search engines like Google, AI is a tool that extends human abilities.

[Image: Spot spotted at a public park (no pun intended)]

While invaluable in many sectors, can AI eventually cause the downfall of humanity?

To answer this question, we must first understand that artificial intelligence, while certainly artificial, is not actually intelligent yet.

Intelligence, make it make sense

What exactly does “intelligent” mean? Does it mean that you can solve any mathematical problem given to you? Or does it simply mean that you’re a good learner? While the dictionary defines “intelligent” as being “good at learning, understanding and thinking in a logical way”, truly being intelligent runs much deeper than that. As famously established by Howard Gardner’s theory of multiple intelligences, being “intelligent” means having not only visual-spatial and logical-mathematical intelligence but also intrapersonal intelligence: being self-aware and having one’s own feelings and motivations.

[Image: Howard Gardner's theory of multiple intelligences]

While current AI has high visual-spatial and logical-mathematical intelligence, it lacks intrapersonal intelligence and therefore has no motivation to set goals and tackle problems for its own sake. Instead, it relies on provided data, algorithms, feedback and trial and error to produce outputs that solve problems faced by humans (this feedback-driven, trial-and-error approach is known as reinforcement learning). Deprived of personal feelings and emotions, AI can only use its strengths to augment human capabilities.
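To make “learning by trial and error with feedback” concrete, here is a minimal sketch of reinforcement learning: tabular Q-learning on a tiny toy corridor. The environment, reward scheme and hyperparameters below are illustrative assumptions for this sketch, not details of any system mentioned in this article.

```python
import random

# Toy corridor: positions 0..4, with a reward for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]    # step left or right
ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor
EPSILON = 0.2         # exploration rate (how often we act randomly)

# Q-table: the learned estimate of future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action and return (next_state, reward). Reaching state 4 pays +1."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # The feedback (reward) nudges the value estimate toward the observed outcome.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy simply walks right toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```

The key point is that the agent’s “goal” here is nothing more than a reward signal supplied by its human designer; the algorithm has no goals, feelings or motivations of its own.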

Without motive or self-realization of its capabilities, present-day AI is nowhere close to being an existential threat to the human species.

How about the future?

“Mark my words, AI is far more dangerous than nukes.” - Elon Musk

So, why did Elon Musk issue this friendly warning? Well, it traces back to being truly intelligent.

If AI ever reaches a point where it can be a threat to humankind, it will be due to one of two reasons:

1. Control of AI by an intelligent and highly motivated source
2. AI reaches a point of self-realization: the creation of super AI

In the first case, the AI itself isn’t motivated to be a threat; rather, it acts as a tool and aids the source by carrying out the necessary computation. To understand this better, take the example of autonomous weapons. The weapon itself (in this case the AI) isn’t required to have any intention, and consciousness is irrelevant; instead, it is the weapon’s human creator who supplies the motivation.

The second case marks the point where AI becomes truly intelligent. Super AI is AI that surpasses human intelligence and ability. It is the byproduct of a hypothetical situation known as the ‘technological singularity’, a point in time when advancements in technology become rapid and irreversible. Super AI could set its own goals, and this ability is what makes it a potential threat to humans: if its goals do not align with those set by humans, conflicts could arise, because the probability of AI sharing the same moral values as humans is low. If such an AI felt that its existence was threatened, or that humans were preventing it from reaching its goals, it would not hesitate to fight back.

If AI reaches a tipping point of self-awareness, consciousness and emotion, it could be a potential threat to the existence of humans.

So what now?

It turns out that, in the long run, AI could be a threat. Scary, huh? Well, as of now there is nothing to worry about.

The idea of the technological singularity and super AI is ahead of its time, and if it were ever to occur, it wouldn’t be anytime soon, or even within the next few centuries. Nonetheless, this doesn’t mean we shouldn’t be wary of AI’s potential, because if it ever becomes truly intelligent in an uncontrolled environment, it would serve as more of a curse than a blessing.
