The obstacles to human-level AI

In a time of DQN, what is holding us back from building a human-like artificial intelligence?

At Baabedo we are working on a general-purpose, educable system that improves from raw experience to do work that previously only humans could do.

Artificial Intelligence might be the most exciting field humans have ever worked on, and it stimulates people across the board. Part of its fascination surely comes from the fact that artificial intelligence is somewhat mysterious and therefore leaves room for opinions and speculation. “Is a human-level AI really possible in the near future?”, “How could a computer even think?” and “Are we humans unique with our emotions and consciousness?” are some of the frequently asked questions concerning artificial intelligence.
Changes and breakthroughs are coming fast these days, and a lot of what is commonly considered impossible has already been done or turns out not to matter at all. So what really stands in the way of a human-level AI? Let’s start easy.

What’s an AI?

Artificial Intelligence refers to an engineered system, running on machines and software, that can behave intelligently. The term AI was first introduced by John McCarthy, who would later create LISP, in a 1955 proposal for the Dartmouth Conference. If that definition of AI sounds a bit unsatisfactory to you, you are not alone. But it is as clumsy as it is because we have trouble defining what intelligence is in the first place.

So what is this intelligence anyway? We are not really sure, but one of the best and most helpful definitions comes from Professor Linda Gottfredson, who describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” Still, intelligence is a complicated topic, one we can only categorize rather than quantify. That is why we distinguish between two general types of AI, namely weak and strong AIs.

Weak and strong AIs differ mainly in their scope of action. Weak AIs can perform one task and one task only, but often outperform us at that job by a large margin. Strong AIs can operate on many tasks and are often called general artificial intelligence. By that measure, we humans would be strong AIs. There are of course AIs between the two extremes, and it is the underlying technique of the AI that determines whether the AI is weak, strong or something in between. This underlying technique, let’s call it the learning algorithm, also determines how the AI learns and what capabilities it has. In the last few years, we have come very, very far with powerful learning algorithms, to a point where they have the potential to power a strong AI.

The algorithms that power weak and strong AIs

Whether an AI is weak or strong depends very much on the algorithm it runs on, which not only gives the learning system the potential to become a weak or strong AI but also decides how intelligent it can become. There is a vast range of techniques, technologies, and algorithms out there that can provide a machine with the capability to learn.

An image recognition algorithm that shows, in the upper left corner, what it sees through its camera

The most common and probably best-studied learning algorithms are the supervised ones, which generalize from data so that they can say something about new and unseen data. A few examples of systems where we can find these learning algorithms are IBM’s Watson, Google’s Prediction API, recommendation systems like those from Amazon, Netflix and Pandora, and image and speech recognition services like Siri, Cortana or Google Translate. One pain point is that they can only learn from labeled data, which is much harder to get than unlabeled data. Nevertheless, we have come very far with those generalization algorithms in the last years, even to a point where machines beat us humans at our own hobbyhorse, image recognition. But AIs with supervised learning algorithms have an application scope restricted by their training. And they are not in complete sync with the initial definition of intelligence, as those AIs cannot plan, solve problems or learn from experience. Still, the results are very impressive.
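To make the idea concrete, here is a minimal sketch of supervised learning in Python using scikit-learn. It is not the internals of any of the systems named above, just the basic recipe: fit a model on labeled examples, then judge it on data it has never seen.

```python
# A minimal supervised-learning sketch: the model generalizes from labeled
# examples to images it has never seen before.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # 8x8 images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)   # a simple classifier; deep nets follow the same recipe
model.fit(X_train, y_train)                 # learn from the labeled training data
print("accuracy on unseen images:", model.score(X_test, y_test))
```

The whole pipeline depends on someone having labeled the training images beforehand, which is exactly the pain point mentioned above.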

The Atari agent developed by DeepMind learns by itself to play different Atari games and beats the best human high scores. Here it plays Breakout.

But recently we made breakthroughs with a new type of learning — reinforcement learning. AIs based on reinforcement learning are often still weak AIs, as they work only in a limited area, but their potential is huge and comes pretty close to what would be needed to power strong AIs. AIs with reinforcement learning under the hood can plan, reason, solve problems and — like we humans do — learn entirely from experience. Most people would describe those AIs as intelligent: they have the potential to explore their environment freely on their own, learn how to act in that environment, achieve goals and reason, all without the need for human-labeled data. They can abstract knowledge from plain, real-world experience. So we have come quite close to a system that can potentially act in many areas and could outsmart humans. But what is holding us back from creating a human-level AI that can act in as many areas as we can?
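As a rough illustration of the principle (not DeepMind's DQN, just the underlying trial-and-error idea), here is a minimal tabular Q-learning sketch on a made-up toy environment: the agent acts, receives rewards, and improves its value estimates purely from that experience.

```python
import random

# A toy environment: states 0..4 on a line; action 0 moves left, 1 moves right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # action-value table
alpha, gamma, epsilon = 0.1, 0.9, 0.1       # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        action = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("learned policy:", ["left" if q[0] > q[1] else "right" for q in Q])
```

No labels are provided at any point; the only feedback is the reward the environment hands back after each action.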

Things we have to figure out

The hierarchical structure of human and animal behavior has long been of interest to neuroscientists. Over a human lifetime, we make roughly 20 trillion physical actions, which are the result of the decisions we make. Every morning, from getting up to walking out of the front door, we perform close to 30 million physical actions. But it would be computationally absurd if our brain had to plan decisions over 30 million steps into the future. So we use higher-level action representations such as ‘take shower’, ‘get dressed’, ‘make tea’, ‘leave house for work’, which reduce the number of decisions we have to compute and make complex problems tractable.

In fact, it was machine learning and the emerging framework of hierarchical reinforcement learning, a learning framework that builds upon the traditional reinforcement mechanisms, that helped neuroscience make progress on hierarchical behavior and human decision-making. But whereas the concepts of reinforcement learning are well researched by now, hierarchical reinforcement learning is quite new and will need some more time to show whether it provides an adequate blueprint for powering machines with higher-level decision-making. So far, both the concept per se and the neuroscience findings are encouraging. AIs with hierarchical decision-making capabilities would become very powerful and practical and would be a huge step toward human-level AIs that can operate in a broader area.
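To give a flavour of what such higher-level actions could look like in code, here is a hedged, purely illustrative sketch of the ‘options’ idea from hierarchical reinforcement learning. The option names and steps are invented for this example, and no learning is shown, only the way a handful of high-level choices stands in for many primitive ones.

```python
# Illustrative only: a high-level policy picks temporally extended options;
# each option then runs a sequence of primitive actions and reports when it terminates.

class Option:
    def __init__(self, name, primitive_actions):
        self.name = name
        self.primitive_actions = primitive_actions  # the low-level steps this option expands into

    def run(self):
        for action in self.primitive_actions:       # execute until the option terminates
            print(f"  primitive action: {action}")
        print(f"  option '{self.name}' terminated")

morning_routine = [
    Option("take shower", ["walk to bathroom", "turn on water", "wash", "turn off water"]),
    Option("get dressed", ["pick clothes", "put on clothes"]),
    Option("make tea", ["boil water", "add tea leaves", "pour water"]),
]

# The high-level decision problem now has 3 option choices instead of 9 primitive steps.
for option in morning_routine:
    print(f"high-level decision: {option.name}")
    option.run()
```

In a real hierarchical RL system both the high-level policy over options and the low-level policies inside them would be learned, but the structural idea is the same.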

The blocks world problem, one of the most popular problems in AI, illustrates the need for relational learning capabilities. In a blocks world, we have different blocks on a table, which we can move either onto the table or on top of other blocks. But we cannot move blocks that have other blocks on top of them, nor multiple blocks at the same time. With classic reinforcement learning techniques, the AI would build a new state representation for each possible block arrangement. Such an AI could not transfer knowledge from a state where it tried to move Block A, which had Block B on top, to another state where Block C has Block D on top. This results in poor, inefficient performance and requires recomputation even for the slightest changes, e.g. a 6-block world instead of a 5-block world. AIs with today’s reinforcement learning algorithms lack relational state representations, something we humans possess and which allows us to apply gained knowledge to similar states.
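The following sketch, with invented block names, illustrates the contrast: a relational rule such as ‘a block is movable if nothing sits on it’ carries over unchanged from a 5-block world to a 6-block world, whereas a flat state encoding would treat every arrangement as a brand-new state.

```python
# A hedged sketch of the point above: with a relational description, one rule
# ("a block is movable if nothing is on top of it") covers every blocks world,
# regardless of how many blocks there are or what they are called.

def clear(block, on):            # on is a dict: on[x] = y means "x sits on y"
    return block not in on.values()

def movable_blocks(blocks, on):
    return [b for b in blocks if clear(b, on)]

# 5-block world: A on B, C on D, E on the table
world_5 = ({"A", "B", "C", "D", "E"}, {"A": "B", "C": "D"})
# 6-block world: same relations plus an extra block F on E
world_6 = ({"A", "B", "C", "D", "E", "F"}, {"A": "B", "C": "D", "F": "E"})

# The same relational rule applies to both worlds without any recomputation.
print(sorted(movable_blocks(*world_5)))   # ['A', 'C', 'E']
print(sorted(movable_blocks(*world_6)))   # ['A', 'C', 'F']
```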

Relational learning, also called inductive logic programming, has received great interest since the early 1990s. In the early 2000s, people combined the two rising fields — relational learning and reinforcement learning — into relational reinforcement learning. The advantages are obvious: AIs could bring previously acquired knowledge to new environments that have a similar relational description. The current generalisations achieved with deep neural nets might not be enough, and relational reinforcement learning could broaden an AI’s scope of application in ways not possible today.

Linguistic ability might have been one of the greatest drivers of the rise of Homo sapiens. Language helps us communicate with one another through speech and writing, it is an important tool for learning, and with language we build our internal representation of the world around us.

A human-level AI might also need linguistic abilities, as the AI could otherwise hardly communicate with us or with other AIs, could not learn from written or spoken words, and maybe could not build a flexible and powerful internal representation for reasoning. Alan Turing predicted back in 1950: “The use of [a symbolic] language will diminish greatly the number of punishments and rewards required.”

Yet it is still under debate how much of our human language ability is built in and how much is learned, and natural language processing by computers is not quite ready either. But we can easily imagine how great the practical impact would be if AIs could communicate and learn from text — of which plenty is available on the internet.

Between 2020 and 2025 we will have reached human-brain computation power for $1,000

Computational power might actually be one of the smallest problems on the way to a human-level AI, as we might not even have to build much more computation power. China’s Tianhe-2 already exceeds a human brain three-fold in terms of calculations per second (cps) — and even if that factor is inaccurate, computation power will keep accelerating, get cheaper and outpace our brain. Still, it is not proven that we would need the same number of cps as a human brain in order to build a human-level AI, as our learning algorithms could be more, but also less, efficient than the learning algorithms of our brains.
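For a rough sanity check of that three-fold figure, with the caveat that the brain number is a commonly cited but highly uncertain estimate, the arithmetic looks like this:

```python
# Rough back-of-the-envelope check of the "three-fold" claim above.
# Both figures are loose estimates, not measurements; the brain number in
# particular varies by orders of magnitude depending on who you ask.
tianhe_2_flops = 33.86e15        # Tianhe-2 Linpack performance, ~33.86 petaflops
brain_cps_estimate = 1e16        # a commonly cited estimate of brain "calculations per second"

print(f"Tianhe-2 / brain estimate = {tianhe_2_flops / brain_cps_estimate:.1f}x")   # roughly 3.4x
```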

Things we don’t have to figure out

Let’s talk about emotions and what role they play in our lives. Every intelligent system — including us — is a prediction machine. We are constantly looking for the actions that will take us to a state we like better. And what are emotions but road signs that help classify states into those we like and those we don’t?

We make an awful lot of decisions because we want to get into states that feel good. That’s why we eat, reproduce, avoid pain and make the decisions we make. We might think “If I don’t go to class today, I might fail the test, and that feels bad”, so we go to class. Our emotions shape a lot of our behavior, but there is absolutely no problem with providing AIs with a similar reward system.
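As a toy illustration of such a reward system, with outcome names and values invented for this example, an agent that scores predicted outcomes the way our emotions score future states would pick ‘go to class’ for the same reason we do:

```python
# A hedged sketch of the analogy above: emotions as a reward signal over
# predicted outcomes. The numbers and outcome names are purely illustrative.
reward = {"passed the test": +1.0, "failed the test": -1.0, "slept in": +0.2}

# Predicted outcomes for each action (an assumption of this toy model).
predicted_outcomes = {
    "go to class": ["passed the test"],
    "skip class":  ["slept in", "failed the test"],
}

def predicted_value(action):
    return sum(reward[outcome] for outcome in predicted_outcomes[action])

best_action = max(predicted_outcomes, key=predicted_value)
print(best_action)   # "go to class": its predicted outcomes simply score higher
```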

Close to our emotions is our value system, which is often shared by the society we live in. An AI that shares the same physical space with us would have to understand that value system and the general culture, so that it does not make decisions that seem troubling to us. Imagine an older lady who slipped on the street and has difficulty getting up again. Not helping her up — especially when no one else is around — would be a great offense, and I can already see the news outburst such an incident would cause: “[Company X]’s AI let old lady lie on the street for hours”. AIs that want to engage with us would have to understand our value system, human emotions, and our culture.

Inverse reinforcement learning deals with figuring out the motives behind observed actions. AIs could use it to learn about human emotions, value systems, and cultures and then act in line with what they have observed.
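Here is a deliberately oversimplified sketch of that idea, built on an invented scenario: given observed choices, we ask which candidate reward hypothesis makes those choices look optimal.

```python
# A hedged, highly simplified sketch of the inverse-RL idea: infer which
# reward hypothesis best explains observed behavior. The scenario and
# hypotheses are invented for illustration only.

# Each observation: the options that were available and the one that was chosen.
observations = [
    ({"help person up", "walk past"}, "help person up"),
    ({"help person up", "walk past"}, "help person up"),
    ({"hold door", "let it close"}, "hold door"),
]

# Candidate reward functions (hypotheses about what the observed agent values).
hypotheses = {
    "values only own time": lambda a: 1.0 if a in {"walk past", "let it close"} else 0.0,
    "values helping others": lambda a: 1.0 if a in {"help person up", "hold door"} else 0.0,
}

def explains(reward_fn, options, chosen):
    # The hypothesis explains the choice if the chosen action maximizes its reward.
    return reward_fn(chosen) >= max(reward_fn(a) for a in options)

scores = {name: sum(explains(r, opts, chosen) for opts, chosen in observations)
          for name, r in hypotheses.items()}
print(max(scores, key=scores.get))   # "values helping others" fits the behavior best
```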

OK, so AIs can grasp our emotions, values, and culture, they can plan, abstract, reason, and learn by themselves — but can they be sentient? Can they feel like we do, can they be conscious and self-aware of their doings like we are?

With consciousness, self-awareness and sentience, we have moved — as far as we can tell today — to human-only phenomena that seem to be unrelated to intelligence. Consciousness is a very murky topic; at the moment we do not have a test that could measure consciousness, and quite likely we never will. Maybe a superhuman-level AI will one day figure it out, but so far consciousness and sentience seem to be unrelated to intelligence and would therefore be dispensable for an AI.

And then there might be no such thing as a human-level AI

That’s how we might have to imagine a ‘human-level’ AI

We have seen that we can already build AIs that behave intelligently in narrow action spaces, and we might think that if we fix the few big remaining problems, we can create human-level AIs. But in reality, the point where AIs are comparable to our intelligence will just be a vague and insignificant moment. AIs have the ability to scale, and they will scale exponentially. One moment we see AIs beating our high scores at video games, and before we realise it they will have beaten us at everything else, too.

Our learning algorithm is still more sophisticated, but we have many great disadvantages compared with computers. That is not only why AIs will become an obvious choice long before they reach our scope of application, but also why they will become powerful so rapidly.

Machines have no hard limitations when it comes to computation power. They can be extended easily with more CPUs, more RAM, more sensors. Their sensors could be countless and scattered all over the world, connected with the AI’s brain via the internet. CPUs and mechanical parts do not fatigue as our brains do and can in theory run 24/7 forever. The maintenance costs of AIs will be 100–1000 times lower than those of humans, which makes them very attractive for the workplace. We and AIs could easily edit, upgrade and improve AI software (the learning algorithm) to adapt to ever new areas of application or improve at existing ones. And lastly, computers can collaborate more efficiently. Although we humans have come a long way by means of collaboration and can now share information with anyone in practically real time, machines — no doubt — could do this more efficiently. AIs could sync their experiences and knowledge with each other or work on a goal as a unit, without goal-compromising behaviors like self-interest or personal motivations.

So, with reinforcement learning, we have a powerful framework for creating generic forms of intelligence that share astonishing similarities with how we learn and perceive the world. But we should not fool ourselves into thinking that machines have to copy us in all aspects, as we are obviously flawed. The pace of progress in AI will accelerate, and sooner rather than later we will have to find a way to make it impossible, even for the smartest AIs, to override their human-given goals. But with the latest learning algorithms and their natural hardware advantages, we have reached a point where AIs can already be applied to many practical fields — especially to work.

Let’s build learning, general artificial intelligence to help companies turn data into actions - send us an email: