Google’s artificial learning is not Artificial Intelligence: And it never will be.

Recently, Demis Hassabis, founder of the AI company DeepMind (now owned by Google), addressed the potential for his approach to AI to result in a “strong AI”, that is, a human-level AI. He closed his lecture with the statement that “In order to find a ‘Theory of Everything’ it may turn out that we have to solve intelligence first.”

This touches on a number of issues that currently plague AI. For example, Google and DeepMind think that AI can be achieved through statistical learning. Statistical learning feeds large datasets into a program that learns the relationships within the data. Artificial neural networks do this very well: given input data such as stock or home prices, they ingest as many examples as possible in order to predict or classify new cases.
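To make this concrete, here is a minimal sketch of statistical learning in that sense: a model is fit to past home prices and then asked to predict the price of a home it has never seen. The numbers are invented and the library choice (scikit-learn) is mine; this is an illustration of the general idea, not anything Google or DeepMind actually runs.

```python
# A minimal sketch of statistical learning: fit to past examples, predict new ones.
# The data below is made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: square footage -> sale price (in thousands of dollars)
square_feet = np.array([[800], [1200], [1500], [2000], [2400]])
prices = np.array([160, 230, 280, 370, 440])

model = LinearRegression()
model.fit(square_feet, prices)  # "learn" the relationship from past data

# Predict the price of a house the model has never seen before
print(model.predict(np.array([[1800]])))  # roughly 335, i.e. about $335k
```

A neural network does the same kind of thing, only with many more adjustable parameters between input and output.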

Google has become quite sure of itself regarding its use of these networks. So sure, in fact, that they have even stated they are building a kill switch in case their fancy learning programs become sentient killing machines like Skynet. Now, this isn’t to discount Google’s efforts with Oxford on how to deal with AI as it becomes more powerful. I fully support such work; however, let’s not blow these neural networks out of proportion.

(Image from allnewspipeline.com)

Why? Because it is almost certain that statistical learning alone isn’t enough to create a human-level artificial intelligence. But is there another way? Well…

Yes, we could start to look at the mechanisms of the human mind. Cognitive psychologists have noted that when we sense things in our world we don’t have a 1:1 input-to-output response; often we manipulate the information from our environment in our minds before our response comes out. This is why we can come up with novel phrases that we have never spoken before. If we only learned language through statistical learning (or “association”), then we would only be able to do or say things we had seen or heard before. We know this is not true, and this has led psychologists to note that there are “mechanisms” in the human mind that may be evolved.

First, a few points of clarification: 1) scientists really have little idea what intelligence is, so it is a bit dishonest to say that anything we are doing is an artificial variant of it; 2) the aims of artificial intelligence can be divided into two: strong and weak. The weak claim is that AI will be able to reproduce human-level performance in a specific task or set of tasks. The strong claim is that AI will be in some way isomorphic to the way(s) in which human minds perform the same task.

Weak AI currently relies heavily on statistical learning. This sort of learning works by association: if we repeatedly see one thing, and then another happens, we learn to associate the cause with the effect.

Computationally, this starts with feeding data into a network. As the data is fed through, if there are relationships to be discerned, the links between two points in the network grow stronger (this is repeated through the hidden layers). The result is then passed to an output layer where the “answer” is given. This sort of learning does happen in the brain, and it was studied extensively in the early 20th century by behaviorists such as B.F. Skinner.
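Below is a toy version of that process, assuming a small network with one hidden layer trained on the classic XOR pattern. The architecture, learning rate, and data are arbitrary choices of mine; the point is only to show links (weights) being strengthened or weakened as data is fed through.

```python
# A toy feed-forward network: data flows input -> hidden -> output, and the
# connection weights are adjusted so that the outputs match the examples shown.
import numpy as np

rng = np.random.default_rng(0)

# Example data: the XOR pattern (a stand-in for any input/output relationship)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Connection weights: input -> hidden (2x8) and hidden -> output (8x1), plus biases
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

for step in range(20_000):
    # Forward pass: feed the data through the network
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: strengthen or weaken each link to reduce the error
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # typically close to [[0], [1], [1], [0]] after training
```

This is the mechanical core of the “association” the behaviorists described: repeated pairings gradually strengthen the relevant connections.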

Neural Network (image from wisc.edu)

However, psychologists in the 1950s were able to show that this isn’t enough to explain basic patterns of human behavior, like language.(1)

By taking the findings of cognitive psychologists more seriously we may one day be able to create a strong AI, but doing so requires that we think more critically and move away from an over-reliance on naïve statistical learning.

One of the first principles underlying the cognitive approach to human psychology is that our minds are the result of evolutionary pressures. These pressures, like all evolutionary pressures, are not guided outcomes aiming for perfect accuracy. Evolutionary pressures select for fitness, and we should not confuse fitness with either accuracy or perfection.

For example, when something goes bump in the night, we jump. This reaction is attributed to the hyperactive agency detection device (HADD)(2), a mechanism that causes individuals to react to non-threats as if they were threats: we hear a bump in the night and respond as though there were a burglar. This is one of many mechanisms which, it seems, may have been selected for because of their helpful inaccuracies; evolution doesn’t favor accurate cognitive mechanisms but “fast-and-frugal”(3) ones.

There are a number of human capabilities without which we could not claim to have created “strong” AI. Two important domains to discuss here are culture and morality. These domains of human psychology can shed light on the importance of heuristics and biases rather than statistical learning. No doubt, these domains are complex, but they are rooted primarily in natural(4) cognitive mechanisms, not statistical learning or conditioning.

Image from psy.ox.ac.uk

It is well documented that all human social groups have culture. Although it is debated whether or not non-human animals have culture, suffice it to say for now that humans do. What is culture? Culture can be defined as the beliefs and behaviors that humans use to define group boundaries, negotiate cooperative behavior, and communicate with one another. Practices such as ritual, language, tool use, and even war fit into this category. What is interesting is that many of the underlying psychological mechanisms that produce culture are not learned through association or even observation. Besides language, discussed earlier, ritual intuitions have also been shown to be something one does not need to learn: we intuitively know whether or not a religious ritual “worked”, regardless of what religion we belong to or what religion the other group practices. Many researchers believe this is because we evolved to understand these things; such abilities are natural aspects of our human psychology.

Another part of our psychology that does not appear to be learned through experience alone is morality. Researchers often claim that religion and morality are linked, but that is beside the point for now. What is important is that humans across the world, regardless of upbringing or experience, appear to agree that harming another person is rarely, if ever, appropriate. This belief is part of a pattern that emerges early in child development and holds well into adulthood. It is not something we need to learn; it is something we intuitively believe.

So, if we know that there are some parts of our psychology that are not based on learning and experience, but are, in a sense, “hardwired” into our minds, how could we create strong AI by using neural networks alone? It doesn’t appear that we could. While neural networks may be useful for creating strong AI, indeed may be necessary, they are not sufficient. We would also need to incorporate complex algorithms for human actions, such as those currently being developed to study terrorism and extremism using computer modeling.
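As a very rough illustration of what “building the mechanism in” might look like, here is a hypothetical sketch of an agent whose reaction to ambiguous events is driven by a hardwired HADD-like bias rather than by anything learned from data. The class name, the bias value, and the events are all invented for this example; real computational models of human behavior are far more elaborate.

```python
# A toy agent with a hardwired heuristic rather than a learned association.
# Purely illustrative; not a real model of any human behavior.
import random

class Agent:
    def __init__(self, threat_bias: float = 0.8):
        # Built-in (not learned) tendency to treat ambiguous events as threats,
        # loosely analogous to the hyperactive agency detection device (HADD).
        self.threat_bias = threat_bias

    def react(self, event_is_ambiguous: bool) -> str:
        # A bump in the night is usually nothing, but the cheap response is to
        # act as if it were a threat; the rule is fast and frugal, not accurate.
        if event_is_ambiguous and random.random() < self.threat_bias:
            return "flee"
        return "ignore"

agent = Agent()
events = [True, True, False, True]  # bumps in the night vs. clearly harmless noise
print([agent.react(e) for e in events])
```

The point is not the code itself but where the rule lives: it is specified up front by the modeler, not extracted from a dataset.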

However, in order to develop these algorithms, we need a greater understanding of the human mind before we start to make any other kind of artificial mind. For this we need not only big supercomputers, quantum computers, or fancy neural networks; we need cognitive psychologists who can research and quantify the way our minds work in a form that is compatible with computer models.

In conclusion, while modern dialogues concerning AGI often revolve around the ethics and issues of what happens if we are right, they all too often fail to appreciate not only how far we are from realizing that dream, but that we are not even on the right track to achieve strong AI. This incongruence could be captured in the difference between what is (robust statistical learning machines) and what should never be (a Skynet-like sentience outside of our control).

And now, I leave you with a cool talk from one of the greatest thinkers in the history of AI, Marvin Minsky. No, it isn’t directly related… but it is cool.

https://www.youtube.com/watch?v=RYsTv-ap3XQ

References

1. Chomsky, N. A Review of B. F. Skinner’s Verbal Behavior. Language 35, 26–58 (1959).

2. Barrett, J. L. Why would anyone believe in God? (AltaMira Press, 2004).

3. Kahneman, D. Thinking, Fast and Slow. (Farrar, Straus and Giroux, 2011).

4. McCauley, R. N. Why Religion is Natural and Science is Not. (Oxford University Press, 2011).