Thoughts on Elon Musk’s View toward Artificial Intelligence

Christopher
Jul 24, 2017 · 3 min read

In a recent talk given at the National Governors Association, Tesla founder Elon Musk warned against Artificial Intelligence (AI), stating that AI may become the biggest threat to human society. According to Musk, robots will be capable of doing everything better than humans do. To prevent such a takeover from happening, Musk suggests regulating research on AI.

Musk brings up valid points to support his position. On the other hand, one may consider the potential of AI within a bigger philosophical picture. If robots can outperform humans, what is so bad about that?

We humans have come up with dozens of political systems, all prospering and then falling apart again. Wars, diseases, and injustice have accompanied us since the beginning and may last until the end. Most of us apply double standards in daily life. Many of our decisions result in irrational conflicts that other people suffer from. Just think of all the garbage we produce nowadays. Exploitation exists in modern times just as it did in historical ones. The general imperfection of humans is often regarded as a lovable attribute, but it could also be a romantic, even narcissistic, view of ourselves.

So, how come a savvy tech founder rejects the potential of AI? Are we just afraid of something more perfect than us? Yes, and we should be, in order to survive. But as certain as it is that we were born, it is certain that we will die. This has applied to previous advanced civilizations in the same way it applies to individuals. As we have learned, even the lifetime of the sun is limited, and since we rely on it, so is ours.

However, I believe we emerged for a good reason, which might be to build something that outlasts us. For instance, our research on AI may enable robots to recreate themselves, making us redundant. The machines we create may live longer, need no oxygen, inhabit planets we cannot, and, out of their superiority, perhaps even kill mankind one day. This, by the way, is in line with Musk's statements. I would just like to change the perspective on it, although it does seem quite suicidal to know our deadly fate in advance and still work toward it.

Imagining ourselves exposed to a life-threatening situation is frightening and thus makes us instinctively think about ways to protect ourselves. But what if we set aside our survival fears for a moment? From an evolutionary viewpoint, it would still make sense that the knowledge and intelligence we have gained over thousands of years lead to something bigger. Just as an elderly person may be satisfied knowing their descendants were raised consciously, we could leave the stage contentedly once we have made our contribution.

My only doubt is that we have never seen a similar evolutionary step in history, one in which more complex creatures originated purely from the minds and abstract thoughts of a third party. This could be the first time, though.

I believe the only certainty about our future is our death. If we continue to follow our curiosity and develop intelligent machines, we may risk our lives, but we increase the chances that our existence leads to a more advanced form of life, ensuring our story will be told afterwards. If we stop research on AI, our civilization might last longer before the day comes that erases all our cultural achievements. That scenario may be just fine too. It just sucks to make the universe start over again to create life.
