When Artificial Intelligence will outsmart us

Lancelot Salavert
My Messaging Store Blog
5 min read · Oct 27, 2015


For many artificial intelligence researchers, the ultimate goal is to reach “human-level” intelligence. But first, how should we define “human-level” intelligence? For a while now, computers have beaten our very best chess players and our most creative minds at Jeopardy, but these games only challenge a very limited part of our brains. Alan Turing claimed that it was too difficult to define “thinking”. Instead he proposed what has come to be called the “Turing test”. In order to pass the test, a machine needs to convince a human that they are communicating with another human. Unfortunately, the scientific community has since criticized this test, arguing that it does not constitute a suitable criterion for “human-level” intelligence either. For this article, let’s consider the “employment test”. In order to pass it, an AI program must be able to perform the jobs ordinarily performed by humans. Systems with true human-level intelligence should be able to perform the tasks for which humans get paid. So, is it science fiction? Is it far out there? Is it crazy to think about those things? What happens when our computers get smarter than we are?

In order to answer this question I would like to rely on the work of Nick Bostrom, a Swedish philosopher at the University of Oxford. Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future.

How will we achieve AI with “human-level” intelligence?

Artificial intelligence research used to follow the old-fashioned way of programming, where you only got out what you had previously put in. That approach was totally unscalable for anyone aiming for human-like intelligence.

But in recent years, things have changed. Today, artificial intelligence research is all about machine learning. Rather than handcrafting knowledge representations, we create algorithms that learn, often from raw perceptual data, which is basically what a human infant does. No one teaches a child to see; they learn by encountering real-world examples. By age 3, they can understand complex images. This shift in the approach to artificial intelligence was the game changer that enabled the tremendous recent progress in the field.
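
To make the contrast concrete, here is a minimal sketch in Python. The dataset, model, and hand-written rule are my own toy choices (using scikit-learn’s bundled digits images), not anything the researchers above use; the point is simply that a learned model infers the mapping from raw pixels itself, while a handcrafted rule only gives back what you put in.

```python
# Toy contrast between "handcrafted knowledge" and "learning from raw data".
# Illustrative sketch only; assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Old-fashioned approach: a hand-written rule, e.g. "lots of ink means an 8".
# You only get out what you previously put in, and it barely works.
def handcrafted_rule(image):
    return 8 if image.sum() > 350 else 1

rule_accuracy = sum(
    handcrafted_rule(x) == y for x, y in zip(X_test, y_test)
) / len(y_test)

# Machine-learning approach: the model learns the pixel-to-label mapping itself.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)
learned_accuracy = model.score(X_test, y_test)

print(f"handcrafted rule: {rule_accuracy:.2%}, learned model: {learned_accuracy:.2%}")
```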

The outcome is that AI is no longer limited to one specific domain. Of course, AI is still nowhere near the powerful cross-domain ability to learn and plan that a human adult has. The human cortex still has a few algorithmic tricks that we do not yet know how to reproduce. But given the breakthroughs we have already achieved, experts from the Future of Humanity Institute no longer see many obstacles; it is now simply a matter of time before we get there.

How far are we from getting there?

During one of their conferences, Bostrom put the question of the “employment test” to his peers: by which year do you think there is a 50% probability that we will have achieved human-level machine intelligence? The median answer from the whole panel was… 2040. Of course nobody really knows for sure, not even a panel of great experts and scientists. It could go faster, just as it could take more time, but I like to think the range is somewhat relevant, and I am quite excited that this milestone for humanity may happen within my lifetime.

In his book Superintelligence, Bostrom echoes I. J. Good’s idea that “the first ultraintelligent machine is the last invention that man need ever make.” Beyond this point, the AI will outsmart pretty much anyone and therefore innovate and solve problems faster and better than any of us. It is most likely that all the incredible things we will not have invented by then (populating Mars, nuclear fusion, etc.) will be invented by an AI.

One additional aspect that is hard to conceptualize is that an Albert Einstein level of intelligence is not the end of the line for AI. It will keep going into unknown territory. As it does, what if this AI starts doing things that we might not approve of? Say you ask the AI to solve problem X, but in order to do so, it decides that Y is a prerequisite, and Y is an immoral action?

How can we ensure that it advances human knowledge instead of wiping out humanity?

This question might sound a little extreme, as one could think of many Hollywood scenarios that would appear cartoonish. The paradigm echoes the ancient Greek myth of King Midas, who asked Dionysus to bless him with the gift that everything he touched would turn into gold. It turned out not to be such a great idea: when he touched his food, it turned into gold; when he touched his daughter, she turned into gold. This myth is not just a metaphor for greed; it reminds us that if we give an advanced order to a super AI, we had better think through all the consequences in between.

We should not be overconfident in our ability to keep a superintelligent genie locked up in its bottle forever. There will probably not be any obvious shutdown button. So there is only one possible answer: we need to create superintelligence that is safe because it is fundamentally on our side, because it shares our values. We will have to shape the AI’s preferences. I am taking for granted here that humans all share similar preferences, but truth be told, it will more likely be Sergey Brin’s preferences than yours.

Just as we no longer need to list all the functionalities of an AI program, we will not have to list all of our values. The AI will learn them. This is all the more important because the AI will most likely face moral problems that we have not yet encountered ourselves.
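
As a loose illustration of what “learning values instead of listing them” could look like, here is a toy sketch of preference learning from pairwise comparisons. The features, data, and weights are entirely made up by me; this is not Bostrom’s proposal, just the same learning pattern applied to choices a human approved or rejected.

```python
# Toy sketch of learning a preference score from examples instead of rules.
# Hypothetical features and synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate action has two made-up features: [benefit, harm].
# The "true" human preference (hidden from the learner) rewards benefit
# and strongly penalizes harm.
true_weights = np.array([1.0, -3.0])

def sample_pairs(n):
    """Generate pairs of actions and a label: 1 if the first is preferred."""
    a = rng.uniform(0, 1, size=(n, 2))
    b = rng.uniform(0, 1, size=(n, 2))
    prefer_a = (a @ true_weights > b @ true_weights).astype(float)
    return a, b, prefer_a

a, b, prefer_a = sample_pairs(500)

# Bradley-Terry-style model: P(a preferred over b) = sigmoid(w . (a - b)).
# Fit by plain gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(a - b) @ w))
    grad = (a - b).T @ (prefer_a - p) / len(prefer_a)
    w += 0.5 * grad

# The learned weights recover the sign pattern of true_weights (up to scale).
print("learned preference weights:", np.round(w, 2))
```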

Making superintelligent AI is fairly complicated. Making superintelligent AI that is safe adds an extra layer of difficulty. However, this should not stop researchers on their quest for human-like AI. To paraphrase Andrew Ng: halting AI research to keep computers from turning evil is like stopping the space program to combat overpopulation on Mars.

Let’s be fairly optimistic: AI won’t exterminate us. Quite the opposite, it will empower us to tackle real problems and help humanity. That being said, cracking superintelligent AI without anticipating its preference set could be very damaging.

For these reasons, in January 2015 Bostrom joined Stephen Hawking, Max Tegmark, and Elon Musk, among others, in signing the Future of Life Institute’s open letter warning of the potential dangers associated with artificial intelligence. The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.”
