How to Build a Brain

Raul Jordan
The Final Invention
8 min read · May 20, 2015


How close are we to building intelligent machines?

The story of Artificial Intelligence is filled with unprecedented hype about changing the course of technology forever, but it is also tainted by lackluster results and a research sector that often stagnated and lost its promise over the years. What was once a field marked by groundbreaking statistical models and the promise of revolutionizing every aspect of our lives, from automation to near-perfect predictions about the future, eventually became a discipline that had lost its gleam. That changed recently: with the advent of Deep Learning, the tech community has scrambled to apply this new tool to all sorts of machine learning problems, seeing its potential to bring us closer to “Strong AI”, the highest form of artificial intelligence we could ever attain, virtually indistinguishable from human intelligence.

DeepMind was acquired by Google for more than $500 million amid a frenzy of tech enthusiasts who saw Deep Learning as the future of AI

Today we see frequent news of Google’s progress on its self-driving cars, growing interest in machine learning for high-frequency trading on Wall Street, and the buzzword “Big Data” everywhere. Yet if the era of true deep learning is upon us, why are we still far from building Strong AI? Why can we not build truly intelligent machines that apply their abilities to more than a handful of specific tasks? What are we missing?

How can we ever create intelligent machines?

To explore the implications of the trajectory AI research has followed in recent years, we first need to understand what an intelligent machine really is. Traditionally, scientists have gauged intelligence by behavioral similarity to humans: if a robot is indistinguishable from a human in its actions and reasoning, we say it is intelligent. This is the basis for the famous “Turing Test” for artificial intelligence. However, the limitations of this approach become apparent in John Searle’s thought experiment, the Chinese Room Argument. It goes as follows:

“Suppose that I’m locked in a room and that I know no Chinese, either written or spoken, but I have a set of rules in English that enable me to correlate one set of Chinese symbols with another set of symbols, that is, these rules allow me to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions — who do understand Chinese — are convinced that I can actually understand the Chinese conversation too, even though I cannot.”

Essentially, imagine someone outside a room writing questions in Chinese, slipping them under the door, and getting answers in Chinese slipped right back. To the outsider, the person inside the room clearly knows Chinese, when in truth that person only knows a set of instructions for answering the questions and does not understand Chinese at all. This is the problem with judging computers as intelligent merely by their behavior: they know how to follow instructions but do not know what those instructions truly mean.
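
To see how hollow the room’s competence is, consider a caricature of it in code. The sketch below is purely illustrative (the phrases and rulebook entries are invented, not from Searle): it answers questions by blind symbol lookup, exactly the kind of rule-following the thought experiment describes, with no understanding anywhere in the system.

```python
# A caricature of Searle's Chinese Room: the "room" is only a rulebook
# mapping input symbols to output symbols. The entries are invented
# for this illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "The weather is fine."
}

def chinese_room(question: str) -> str:
    """Follow the rules; nothing in here understands Chinese."""
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```

To an outside observer checking only the inputs and outputs, the function “speaks Chinese”; behavior alone cannot reveal that there is no understanding inside.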

The Chinese Room Argument debunks the use of behavior as the sole determinant of intelligence. But can intelligence exist independently of behavior? Most definitely. This is the core of Jeff Hawkins’ book, On Intelligence, which contains the key to the future of AI and machine learning.

To understand this other framework of intelligence, we first have to grasp how the human cortex, the region responsible for our higher thinking, functions underneath our skull. But is our reasoning really confined to separate parts of the brain? Is thinking as compartmentalized as many believe? Neuroscience often conjures up images of brain diagrams with different regions of “thinking” highlighted in different colors, or notions of someone being “left-brained” vs. “right-brained”, leading us to believe that the cortex is not a continuum but a highly segmented area whose regions perform very different underlying functions.

Mountcastle rejects the idea of a segmented cortex and proposes that the entire cortex performs the same underlying computation: a single algorithm everywhere.

The Johns Hopkins neuroscientist Vernon Mountcastle argues that this is not the case. Instead, all parts of our cortex operate by the same underlying method; a single computational principle drives all of our intelligence. Jeff Hawkins calls this the “Rosetta Stone of Neuroscience”:

“There is a simple algorithm underlying all of our brain functions. Vision, hearing, taste, and every other sensation are all processed using the same basic building blocks of computation everywhere in our brain. The underlying architecture is the same throughout.”

Hawkins argues that the cortex works through a hierarchical model: lower regions map simple sensory patterns (the smell of spring, the melody of your favorite song, a glimpse of your car) that carry very specific information about a stimulus up to higher regions, which process abstract thoughts and recognize patterns across inputs. Sharing representations in a hierarchy also produces generalizations about expected behavior. When you see a new animal, if you spot a mouth and teeth you will predict that the animal eats with its mouth and that it might bite you. The hierarchy enables a new object in the world to inherit the known properties of its sub-components.
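
To make that inheritance concrete, here is a toy sketch in Python. The feature names and predictions are invented for illustration, and this is nothing like Hawkins’ actual cortical model; it only shows the shape of the idea: predictions attach to sub-components, and any new object composed of those components inherits them.

```python
# Toy illustration of hierarchy: predictions learned for low-level
# features are inherited by any new object that contains them.
# Feature names and predictions are invented for this example.
FEATURE_PREDICTIONS = {
    "mouth": ["eats with its mouth"],
    "teeth": ["might bite you"],
    "wings": ["can probably fly"],
}

def predictions_for(observed_features):
    """A never-before-seen object inherits its sub-components' predictions."""
    predictions = []
    for feature in observed_features:
        predictions.extend(FEATURE_PREDICTIONS.get(feature, []))
    return predictions

# A brand-new animal that happens to have a mouth and teeth:
print(predictions_for(["mouth", "teeth"]))
# ['eats with its mouth', 'might bite you']
```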

The higher regions give us an abstract sense of the thing we are sensing, even as the lower regions bring in constant streams of varying sensory information about it. Consider yourself reading this article. Your visual field changes constantly as you move your phone or laptop around, as your eyes dart across the page, and as the lighting in the room shifts. At the same time, noise from your environment keeps reaching both of your ears, with so much variation that your surroundings are almost never the same from one second to the next while you read. Yet the higher regions of your cortex remain certain that you are reading this article; there is no doubt. Your brain holds it as an abstract object it is conscious of, no matter how your environment changes. This is what Jeff Hawkins calls an Invariant Representation, and it is the key to understanding how our mind works. Most importantly, if we follow Mountcastle’s argument that the whole cortex works under the same underlying principle, then if one region creates invariant representations, all regions do the same.

A good example of an invariant representation is our ability to seamlessly recognize a friend in any circumstance. Consider a computer tasked with identifying a person’s face when the expression deviates from an original “resting” face. The computer must spend a great deal of computation on the small variations between the resting face and the same face smiling, frowning, or crying. Yet when we see someone we recognize as a friend, regardless of the lighting in the room or the position of their facial muscles and body, we are certain that it is the same person, even though the image of that person landing on our retina is different every second. Our brain simply holds the invariant representation, the notion that it is still the same face, and the variation never disturbs our higher-level thinking. In contrast, even our best computers perform poorly under such variation.
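
Faces make the point vividly, but melody makes it concrete in a few lines of code: play a tune in a different key and every note changes, yet we instantly recognize it as the same song, an example Hawkins himself discusses in On Intelligence. Representing the tune by the intervals between notes, rather than the notes themselves, yields a representation that is invariant to the change. A minimal, purely illustrative sketch:

```python
# A melody stored as absolute pitches (MIDI note numbers) changes
# completely when transposed, but its interval pattern does not.

def intervals(notes):
    """Invariant representation: differences between successive pitches."""
    return tuple(b - a for a, b in zip(notes, notes[1:]))

melody = [60, 60, 62, 60, 65, 64]             # opening phrase of a tune
melody_transposed = [67, 67, 69, 67, 72, 71]  # same tune, 7 semitones higher

# The raw inputs differ at every position...
assert melody != melody_transposed
# ...but the invariant representation is identical.
assert intervals(melody) == intervals(melody_transposed)
print(intervals(melody))  # (0, 2, -2, 5, -1)
```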

Invariant representations, however, are not the only thing that makes us intelligent. It is our ability to make predictions from these invariant representations and patterns that allows us to reason and thrive. To see how pervasive prediction is in everyday life, imagine the next time you pick up a midnight snack from the fridge. As you walk up to it, you anticipate grasping the metal handle and feeling its familiar texture, and you anticipate that cold air will brush your face when the door opens. You picture yourself reaching for the snack and closing the door, and you expect the familiar “thump” as it shuts. Regardless of changes in your environment, your cortex anticipates this pattern of events from the invariant representations of the fridge and the snack held in your memory. Your ability to associate memories with invariant representations is what truly drives the predictions that allow you to be intelligent.

Consistent Predictions from Invariant Representations Are What Make Us Intelligent

The power of having this as the central algorithm behind the mind is that it makes us extremely efficient at detecting change. It is also why humans are so prone to routines: a routine is merely a well-structured invariant representation. If you follow the same routine to get ready in the morning, countless small details recur so reliably that they are second nature and never earn a second thought. Your toothbrush is always to the left of the sink, the toothpaste is always in the medicine cabinet, and you always rinse with the same sequence of movements. But if even the smallest detail changes, say the toothbrush holder sits a little further left than usual, you immediately catch that something is off. This is what makes us so good at survival: identifying new patterns and associating memories with variation. Once again, we use predictions from invariant representations to build a model of the reality around us and react intelligently to change.
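
Prediction and change detection fit together in a loop that is easy to caricature in code. The sketch below (event names invented, nothing like a real cortical algorithm) stores the sequence of sensations the fridge routine predicts and flags the first incoming event that violates the prediction:

```python
# Toy memory-prediction loop: an invariant representation supplies the
# expected sequence of sensations, and "intelligence" here is simply
# noticing when the incoming stream departs from it.
EXPECTED_ROUTINE = [
    "grasp metal handle",
    "cold air on face",
    "reach for snack",
    "door thumps shut",
]

def surprises(expected, observed):
    """Yield an alarm for each sensation that violates the prediction."""
    for predicted, actual in zip(expected, observed):
        if predicted != actual:
            yield f"expected {predicted!r}, got {actual!r}"

tonight = [
    "grasp metal handle",
    "cold air on face",
    "reach for snack",
    "door stays ajar",          # the smallest deviation...
]

for alarm in surprises(EXPECTED_ROUTINE, tonight):
    print("Something is off:", alarm)   # ...is caught immediately
```

The code itself is trivial; the point is the shape of the computation. Memory supplies the prediction for free, so all the effort goes into handling the rare mismatch, which is exactly what makes the scheme so efficient.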

If we can build machines on these same principles, we can someday create programs that are as intelligent as we are, and even more powerful than we can anticipate. We are bounded by our anatomy: by the range of frequencies our ears can hear, by the limited ability of our eyes to resolve faraway objects, and by everything else our senses cannot do. Now imagine a machine that is not bound by these limits, a machine that implements the same neural algorithms but can receive any set of inputs and patterns. The possibilities are endless. This new framework lets us treat intelligence as the ability to form predictions from abstract patterns, not from behavior, the criterion Searle refuted with his Chinese Room Argument.

Artificial Intelligence is far from dead. If anything, the timing is perfect for a new revolution. Deep Learning shows promise in giving us the tools to create these invariant representations from a hierarchy and to let programs build increasingly complex patterns from there. DeepMind and Jeff Hawkins’ own Numenta have high hopes for the future and are pursuing novel methods to change the way the world uses AI. Once we can effectively implement invariant representations and use them as the basis of prediction, we will reach the turning point in the creation of truly intelligent machines.
