What next after AI? Artificial consciousness

Stephane Mallard
9 min read · Feb 7, 2017


Ex Machina — a programmer tests his artificial consciousness algorithms in a robot

“Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.”

This is how Ray Kurzweil describes the next step in the development of artificial intelligence. The man is controversial: he is known for his transhumanist positions in favor of immortality and of the convergence between man and machine, but he is also a director of engineering at Google. Marvin Minsky, another expert in artificial intelligence and cognitive science at MIT, also declared that there would be “nothing to stop our machines from reaching human levels and further unless we stop them.” Add to this the recent statements of Bill Gates, Stephen Hawking and Elon Musk warning us about the progress of artificial intelligence, and this is getting really serious.

Artificial intelligence is booming

Conscious machines seem unbelievable, and yet we are at a turning point. Artificial intelligence is taking off. Not a single day goes by without a press release on its latest achievements. In 2012, Google taught its artificial intelligence to recognize a cat just by watching YouTube videos; in 2016, AlphaGo beat the best Go player in the world even though experts thought it would not happen for at least several decades; Watson diagnosed a rare leukemia in a patient in Japan in a few minutes when doctors had been unable to reach a diagnosis; in 2017, Libratus out-bluffed professional poker players. And this is just the beginning. The race for artificial intelligence between the internet giants has started.

The problem of artificial intelligence is the problem of intelligence itself

This race for artificial intelligence is not new; it dates back to the beginnings of computer science. And from the start, several approaches have been proposed to create artificial intelligence. The first is to imitate the functioning of human intelligence. Problem: we did not know, and still do not know, how it works (even if the discoveries are dazzling and keep raising new questions). The second is to replicate the human brain itself with transistors or algorithms. There are attempts here too, but there are still too many unknowns about its organization and functioning, and even with perfect knowledge of it, this approach would be too complex and probably not very effective. The third is to allow the machine to perform the same functions as human intelligence (recognizing its environment, manipulating language, memorizing, contextualizing, reasoning, and many others…) without necessarily knowing how human intelligence does it. The analogy often used in artificial intelligence courses is artificial flight: artificial intelligence would be to human intelligence what the flight of an aircraft (artificial flight) is to the flight of a bird (natural flight). They operate differently and on different substrates (metallic vs. biological), but they reach the same result: they fly. Even though it is this last approach that produces most of today’s results, in reality all these approaches, combined with research in cognitive science, philosophy, psychology, sociology and linguistics, feed into and influence one another.

A complete approach reversal that triggered the revolution

This revolution in artificial intelligence is due to a reversal of approach. In traditional computing, software was programmed with rules to perform tasks under well-defined conditions and scenarios. The machine only executed what it had been asked to do. This approach was first used to try to create artificial intelligence. Problem: we did not know, and still do not know, the rules by which intelligence “operates”. So the approach was reversed: instead of trying to program machines with rules to imitate human intelligence, the machine was asked to find the rules itself from examples. Exactly like babies when they learn how to speak: they are bombarded all day by people talking to them, and by being massively exposed to language, they internalize the rules of grammar that allow them to speak without even knowing that those rules exist. It is this change of approach that allows us today to train artificial intelligences to carry out very different tasks.
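As a toy illustration of this reversal (a deliberately simplified sketch, assuming a made-up spam-filtering task), compare a hand-coded rule with a tiny scorer that infers its own “rule” from labeled examples:

```python
from collections import Counter

# Traditional programming: a human writes the rule explicitly,
# and the machine only executes what it was told.
def is_spam_rule_based(message: str) -> bool:
    return "win money" in message.lower() or "free prize" in message.lower()

# Learning from examples: the machine derives its own "rule"
# (here, simple word weights) from labeled samples.
def train(samples):
    """samples: list of (message, is_spam) pairs."""
    spam_words, ham_words = Counter(), Counter()
    for message, is_spam in samples:
        (spam_words if is_spam else ham_words).update(message.lower().split())
    return spam_words, ham_words

def is_spam_learned(message, spam_words, ham_words):
    # Score each word by how much more often it appeared in spam than in normal mail.
    score = sum(spam_words[w] - ham_words[w] for w in message.lower().split())
    return score > 0

examples = [
    ("win money now", True),
    ("claim your free prize", True),
    ("meeting moved to monday", False),
    ("lunch tomorrow at noon?", False),
]
spam_w, ham_w = train(examples)
print(is_spam_learned("free money", spam_w, ham_w))  # True -- the rule was inferred, not written
```

The point is the shift of responsibility: in the first function, a human encodes the rule; in the second, the rule emerges from exposure to examples, just as the baby in the analogy above never reads a grammar book.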

Deep Learning and Reinforcement Learning: Two Approaches with Unlimited Potential

From an algorithmic point of view, Deep Learning is used to train artificial intelligences to identify features and high-level concepts in data: it is thanks to Deep Learning that internet giants identify what is in your photos. The algorithm is shown many samples, with a human supervisor indicating what appears in each photo, so that it can then recognize the same features by itself in new samples. Another very promising family of algorithms right now is Reinforcement Learning. In this approach, the algorithm is given an environment with constraints, a set of possible actions and a goal to reach, and it practices by trial and error and exploration until it reaches this goal all by itself. It is with Reinforcement Learning, for instance, that an algorithm was taught to play Super Mario by itself, and it quickly discovered how to play in a totally optimal way. Moreover, when you combine Deep Learning and Reinforcement Learning, you get algorithms that are able both to recognize their environment and to achieve goals. It is with this combination that AlphaGo beat the best Go player in the world last year. The future potential is even greater, since it is now understood that an artificial intelligence can be trained to perform any type of task by giving it an environment and a goal to attain through training. It is like treating each goal as a video game with a field of vision. Last year, the OpenAI organization launched the Universe platform so that everyone can teach artificial intelligences to perform new tasks in all areas (playing video games as well as filling out forms on the internet). Tomorrow, nobody will program machines with code any more; we will all be able to train artificial intelligences to perform tasks.
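To make the reinforcement-learning idea concrete, here is a minimal, purely illustrative sketch (a toy line of five positions, nothing like the Super Mario or AlphaGo systems mentioned above): a tabular Q-learning agent that is only given an environment, two actions and a goal, and discovers by trial and error which actions reach it.

```python
import random

# Toy environment: positions 0..4 on a line; the goal is to reach position 4.
# The agent is never told the "rules" -- it learns them from reward alone.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left, move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best known action (ties broken at random).
        if random.random() < epsilon:
            action = random.randrange(len(ACTIONS))
        else:
            best = max(Q[state])
            action = random.choice([i for i, v in enumerate(Q[state]) if v == best])
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([q.index(max(q)) for q in Q[:GOAL]])  # learned policy: 1 ("go right") in every non-goal state
```

Deep Reinforcement Learning, roughly speaking, replaces this small Q table with a deep network that also learns to recognize the environment from raw inputs, which is the combination described above.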

Towards a single artificial intelligence, as a service

One of the many challenges of the coming years is the unification of these various artificial intelligences. Today, every advance in artificial intelligence takes place on a specific case (winning at poker, driving the Google Car, diagnosing cancer or replacing managers), but in the future we will have to create one single artificial intelligence able to link all these cases together so that it can do everything. This is also the goal announced by Demis Hassabis, the creator of DeepMind: “Solve intelligence and use it for everything else”. This unification will not be simple: artificial intelligences will have to talk to and understand one another, keep a global view of our goals, maintain a working memory, contextualize, and break their actions down into smaller tasks… Once this single artificial intelligence is flexible enough, we will all be connected to it. It will live in the cloud and be our digital butler; it will know us perfectly and be able to do everything for us, acting as our doctor, our banker, our lawyer, our adviser, our professor and even our friend. It will take care of everything for us and represent us in our interactions: this is what the internet giants are heading towards with Google Assistant, Facebook M, Amazon’s Alexa and Microsoft’s Cortana.
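Purely as a thought experiment (none of these agent names refer to real products, and the task decomposition is hand-written here rather than learned), such a unified assistant might look like a thin orchestration layer that keeps the overall goal and delegates sub-tasks to specialized intelligences:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Task:
    domain: str       # which specialist should handle it
    description: str  # the sub-goal itself

# Hypothetical specialist agents -- in reality these would be separate trained models.
SPECIALISTS: Dict[str, Callable[[Task], str]] = {
    "travel":  lambda t: f"[travel agent] {t.description}",
    "finance": lambda t: f"[finance agent] {t.description}",
    "health":  lambda t: f"[health agent] {t.description}",
}

def decompose(request: str) -> List[Task]:
    # Hand-written stand-in for what would have to be a learned planning step.
    return [
        Task("travel", "book a train to the conference"),
        Task("finance", "check that the monthly budget allows the trip"),
    ]

def digital_butler(request: str) -> List[str]:
    # The "single" intelligence keeps the global goal and cuts it into smaller tasks.
    return [SPECIALISTS[task.domain](task) for task in decompose(request)]

print(digital_butler("Organize my trip to the AI conference next month"))
```

The hard, unsolved parts are precisely the pieces hand-coded here: decomposing an open-ended request, keeping a working memory across tasks, and making the specialists understand each other.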

Machines will become conscious … artificially

But as Ray Kurzweil and Marvin Minsky have said, the development of artificial intelligence will not stop with algorithms capable of performing what we ask of them. One of the major questions in cognitive science is consciousness, and here too researchers are beginning to develop artificial consciousness algorithms. But here again, the same problem arises as with artificial intelligence: it is difficult to define what consciousness is, researchers do not agree, and new discoveries challenge previous assumptions (although some studies offer promising breakthroughs). So once again we do as we did with artificial intelligence: we agree on a working objective for consciousness and we try to model it in an algorithm. This is what Selmer Bringsjord, a researcher at Rensselaer Polytechnic Institute in New York, did. He defined consciousness as the ability of an organism to observe itself functioning as an entity distinct from others. He was inspired by babies, who at the age of one and a half are able to understand that they are responsible for the words that come out of their mouths when they speak, or to recognize themselves moving in front of a mirror. He programmed a robot and told it that it had been given a pill that could either mute it or do nothing (a placebo). He then asked the robot which pill it had received, and the robot began to answer that it did not know. And by answering the researcher, the robot realized that it was actually talking and had therefore received the placebo: it apologized and told the researcher that it was now sure it had received the placebo. Two algorithmic loops run in parallel: one that allows the robot to understand its environment, and one on top of it that allows it to observe itself operating and adjust. Obviously it is easy to object that this is not human consciousness, that it is only code, and that it vastly oversimplifies human consciousness… That is true, but it is just a beginning: these are the first artificial consciousness algorithms. We are obviously far from the tests of artificial consciousness performed in the movie Ex Machina. But the trend is under way and research is advancing.

One of the first simple models of artificial consciousness, at Rensselaer Polytechnic Institute in New York
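To give a rough feel for the two-loop structure described above, here is an illustrative toy (not Bringsjord’s actual code; it simulates only the placebo case): one loop produces behavior in the environment, and a second loop watches the robot’s own output and updates its beliefs about itself.

```python
# A toy sketch of two parallel loops: acting in the environment, and observing oneself act.
class Robot:
    def __init__(self):
        self.belief_about_pill = "unknown"   # "placebo", "muting pill" or "unknown"
        self.heard_own_voice = False

    def try_to_say(self, sentence: str) -> None:
        # Environment loop: attempt to act (speak). Had the muting pill worked,
        # nothing would come out; here we simulate the placebo case.
        spoken = sentence                     # speech is actually produced
        self.observe_self(spoken)

    def observe_self(self, spoken: str) -> None:
        # Self-observation loop: the robot notices its own behavior and adjusts its beliefs.
        if spoken:
            self.heard_own_voice = True
            self.belief_about_pill = "placebo"

robot = Robot()
robot.try_to_say("I don't know which pill I received.")
if robot.belief_about_pill == "placebo":
    print("Sorry, I can hear myself speak, so I must have received the placebo.")
```

This is, of course, a caricature of the experiment, but it shows the structural idea: the second loop takes the robot’s own behavior as input, which is the “observing itself functioning” in Bringsjord’s definition.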

Inspired by biology

The trend in developing these artificially intelligent and conscious algorithms is to draw inspiration from biology. In the same way that DNA can replicate, mutate, merge and be passed on in order to evolve, these algorithms will borrow the same principles so that they can replicate, mutate and be selected by their environment for their ability to reach their goals. Obviously, by giving them these properties, these algorithms will grow in complexity and thus potentially reach a point where their creators no longer understand them: this is the loss-of-control risk mentioned by researchers like Stephen Hawking or Marvin Minsky. No need to invoke science fiction or end-of-the-world scenarios: we simply have no idea what these self-learning, self-organizing algorithms will look like. We already have difficulty understanding the behavior of Deep Learning algorithms.
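A minimal sketch of what “replicate, mutate and be selected” can mean in code (a deliberately simple evolutionary loop on bit strings, assuming the environment’s only pressure is to match a fixed target):

```python
import random

# Candidate "genomes" replicate with small mutations and are selected by fitness,
# i.e. how well they reach the goal set by their environment (matching TARGET).
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    # Selection: keep the fittest half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...replication with mutation: each survivor produces a slightly different copy.
    population = survivors + [mutate(g) for g in survivors]
    if fitness(population[0]) == len(TARGET):
        break

print(generation, population[0])  # usually converges to TARGET within a few dozen generations
```

Even in this tiny example, the final genome is the product of selection rather than of anyone’s explicit design, which is a small-scale version of the opacity discussed above.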

It will never happen!

“Artificial intelligence capable of doing everything? Impossible, we will never reach such a complex and flexible intelligence in a machine!”, “We will always need humans for… this or that…”, “The machine will never be creative!”, “Artificial consciousness is science fiction”… This is what we hear at the moment, and will keep hearing in the coming years, about artificial intelligence. Even though the advances of artificial intelligence are huge, there are still journalists, philosophers, consultants and even researchers who say that we will never (or not for a very, very long time) reach general artificial intelligence, the point of singularity where machines could develop by themselves. But there are two arguments for thinking they are wrong. The first is the exponential rate of development of artificial intelligence. In the same way that genome sequencing became possible and widely available in just a few years, artificial intelligence and all other scientific and technological advances are currently following this pattern of exponential acceleration, which makes them closer to us than we think: what we thought possible in 20 years will probably happen in 5, or even 2. Our brains imagine the future in linear mode, but technology evolves at an exponential speed, which makes the phenomenon difficult to imagine, yet it is real (even Moore’s law, which says that computing power doubles every 18 months, slowed down recently but is now on the rise again…). The second argument is philosophical: any phenomenon that we can define can be modeled in an algorithm, which then becomes that phenomenon… in an artificial version. The difficulty lies more in defining these phenomena, and in the fact that the definitions evolve at the speed of new discoveries, which is itself accelerating. Human beings also tend to say, every time machines manage to do a task that was considered to require intelligence, that it is not intelligence, since the machine was able to do it!
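A quick back-of-the-envelope illustration of that exponential argument, taking the 18-month doubling period mentioned above at face value:

```python
# If a capability doubles every 18 months, a few years of "linear" intuition
# wildly underestimates where it ends up.
months_per_doubling = 18
for years in (3, 5, 10, 20):
    doublings = years * 12 / months_per_doubling
    print(f"{years:>2} years -> x{2 ** doublings:,.0f}")
# 3 years -> x4,  5 years -> x10,  10 years -> x102,  20 years -> x10,321
```

Whatever the exact doubling period turns out to be, this is why “possible in 20 years” and “possible in 5” can describe the same curve seen from different points.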

It’s racing fast, but it is neither simple nor magic

The recent results of artificial intelligence are impressive and the prospects are staggering. But let’s keep in mind that these advances are the result of considerable work: Deep Learning does not learn all by itself to recognize what is in pictures; it needs a human supervisor to teach it, the data need to be cleaned before they can be processed (which takes a lot of time), programmers need to choose the right algorithms and tune their parameters, and it all requires considerable computing capacity… The coming years will bring solutions and optimizations for all these hurdles. Discoveries about the brain will inspire the development of increasingly powerful and flexible algorithms, and in turn the behavior of these algorithms will shed light on the functioning of our own intelligence. There will be many obstacles along the way (beliefs, interests, lobbies, public acceptance, privacy and data issues, policies…), and we will have to build strong governance and control of the development of artificial intelligence at a global level, to ensure that legal and ethical issues are addressed.
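To illustrate how much of that work stays in human hands, here is a small, hedged sketch of a typical supervised pipeline (using scikit-learn and a fabricated dataset purely for illustration): the cleaning, the choice of algorithm and the parameter tuning are all explicit human decisions, not something the system does by itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 1. Cleaning and preparation: real datasets need deduplication, missing-value handling,
#    labeling, etc. Here we just fabricate a small, already-numeric dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Choosing an algorithm and 3. tuning its parameters: both are human decisions.
pipeline = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
search = GridSearchCV(pipeline, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

print("best C:", search.best_params_["clf__C"])
print("test accuracy:", search.score(X_test, y_test))
```

Every step here, from faking clean data to picking the grid of parameters to search, stands in for hours of human work on a real project.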

AI is the next disruption
