Artificial Intelligence — A Threat to Humankind?

Elon Musk warns us about the dangers of AI, and we should listen to him.


Twenty years ago, mobile phones were just starting to become popular and accessible to the public. Today, everyone has a phone that easily fits in their pocket, and we use it to do things we would not have even imagined in 1997 — from communicating with people all over the world to preventing drunk driving. With Siri, we always have a personal assistant with us, and we can drive cars that practically park themselves.

Technological innovations are being developed at an accelerating pace and, while they are impressive accomplishments, we should be wary of what comes next.

Strong AI as a Threat to Humankind

Engineers all over the world are currently working on artificial intelligence (AI) to give computers the processing abilities of the human brain, and if you trust the word of entrepreneur Elon Musk, AI will become the “biggest existential threat” to humanity.

By that, he does not mean that Siri will use your iPhone against you, or that people will get run over by self-driving cars — these two technologies are considered weak AI, meaning that they are able to perform only certain tasks better than humans. The long-term goal, however, is to create strong AI. This technology will be able to outperform humans in every way, and that is what Elon Musk wants to warn us about.

In an open letter published in January 2015, Musk, along with other experts, urged that AI be used for the benefit of all people and called for some kind of regulatory oversight of those working on this technology.

AI Could Be Ruling the World Within 15 Years

So far, technological developments have been made to assist humans and to advance our society. But AI is different. If we achieve strong AI, we will have created something that outperforms humans in every possible way. What if it does not want to assist us and no longer needs us?

Or imagine this (not very far-fetched) scenario: in the future, AI is used for weapons of mass destruction. Historically, this is what humans do. We invent tools and technology, and then we use them to hurt each other. Of course, these scenarios are nothing new. There are at least a dozen movies about this exact subject. The problem is that science fiction is slowly becoming reality.

According to Musk, AI could surpass humans as soon as 2030, although he hopes that his estimate is wrong.

In order to combat this ever-nearing robot apocalypse, Elon Musk founded a new company in 2015: OpenAI, a non-profit research company that aims to create safe AI to be used for the equal benefit of all humans.

Musk also wants to find a way to implement AI technology in the human brain, thus merging machine and human. This way, AI would be unable to surpass humans, because the two would be the same.


It has become clear that AI technology needs to be regulated so that it does not fall into the wrong hands. The principle of ‘better safe than sorry’ definitely applies in the context of a potential robot apocalypse. So, while the developments described above seem dystopian, it might actually be time for us to engage with this topic and form an opinion on the matter — even if it is just by re-watching the Terminator franchise.