Unsupervised Machine Learning: What Will Replace BackPropagation?

The Great Awakening?

Rebel Science
4 min read · Sep 20, 2017

At long last, the AI research community is showing signs of waking up from its decades-old, self-induced stupor. Deep learning pioneer Geoffrey Hinton has finally acknowledged something that many of us with an interest in the field have known for years: AI cannot move forward unless we discard backpropagation and start over. What took him so long? The deep learning community can certainly continue on its merry way, but there is no question that AI research must retrace its steps to the beginning and choose a new path. In this article, I argue that the future of machine learning will be based on the precise timing of discrete sensory signals, aka spikes. Welcome to the new age of unsupervised spiking neural networks.

The Problem With Backpropagation

The problem with backpropagation, the learning mechanism used in deep neural nets, is that it is supervised: the system must be told when it makes an error. Supervised neural nets do not learn to classify patterns on their own; a human or some other entity does the classification for them. The system merely creates algorithmic links between given patterns and given classes or categories. This type of learning (if we can call it that) is a big problem because we must manually attach a label (class) to every single pattern the system is expected to classify, and each label can cover hundreds, if not thousands, of possible patterns.
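
To make the point concrete, here is a minimal sketch of gradient-descent training on a single-layer net (a toy illustration of my own, not code from any deep learning framework). Notice that the weight update cannot even be computed without the externally supplied labels y:

```python
# Toy sketch: why backpropagation is supervised. The error signal
# is derived from human-supplied labels; without them, no gradient.
import numpy as np

rng = np.random.default_rng(0)

# Every pattern x must arrive with a label y attached from outside.
X = rng.normal(size=(100, 4))           # 100 patterns, 4 features
y = (X.sum(axis=1) > 0).astype(float)   # labels -- supplied externally

w = np.zeros(4)
b = 0.0
lr = 0.1

for _ in range(200):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid output
    err = p - y                         # this error term REQUIRES y;
                                        # remove the labels and there is
                                        # nothing to propagate back
    w -= lr * (X.T @ err) / len(X)
    b -= lr * err.mean()

print("training accuracy:", ((p > 0.5) == y).mean())
```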

Of course, anybody with a lick of sense knows that this is not how the brain learns. We do not need labels to learn to recognize anything. Backpropagation would require a little homunculus inside the brain that tells it when it activates a wrong output. This is absurd, of course. Reinforcement (pain and pleasure) signals cannot be used as labels since they cannot possibly teach the brain about the myriad intricacies of the world. The deep learning community has no idea how the brain does it. Strangely enough, some of their most famous experts (e.g., Demis Hassabis) still believe that the brain uses backpropagation.

The World Is Its Own Model

Loud denials notwithstanding, supervised deep learning is just the latest incarnation of symbolic AI, aka GOFAI. It is a continuation of the persistent but deeply flawed idea that an intelligent system must somehow model the world by creating internal representations of things in the world. As the late philosopher Hubert Dreyfus was fond of saying, the world is its own model. Unlike a neural net, which cannot detect a pattern unless it has been trained to recognize it (that is, unless it already holds a representation of it in memory), the adult human brain can instantly see and understand an object it has never seen before. How is that possible?

This is where we must grok the difference between a pattern recognizer and a pattern sensor. The brain does not learn to recognize patterns; it learns how to sense patterns in the world directly. To repeat, it can do so instantly. Unless a sensed pattern is sufficiently rehearsed, the brain will not remember it. And if it does remember it, the memory is fuzzy and inaccurate, something that is well known to criminal lawyers: eyewitness accounts are notoriously unreliable. But how does the brain do it? One thing is certain: we will not solve the perceptual learning problem unless we get rid of our representationalist baggage. Only then will the scales fall from our eyes so that we may see the brain for what it really is: a sensory organ connected to a motor organ and controlled by a motivation organ.

The Critic Is In the Data

How does the brain learn to see the world? Every learning system is based on trial and error. The trial part consists of making guesses, and the error part is a mechanism that tells the system whether or not its guesses are correct. This error mechanism is what is known as a critic. Both supervised and unsupervised systems must have a critic. Since the critic cannot come from inside an unsupervised system (short of conjuring a homunculus), it can only come from the data itself. But where in the data? And what kind of data are we talking about? To answer these questions, we must rely on neurobiology.
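
Here is one way to picture a critic that lives in the data itself (a toy sketch only, not the mechanism I will describe in the next article; the function name is mine): the system's guess is a prediction about the next input, and the next input itself delivers the verdict. No external label is ever consulted.

```python
# Toy sketch: an unsupervised critic derived from the data stream.
# The system guesses the next value; the stream itself supplies the
# verdict. No label, no human teacher.
def learn_from_stream(stream, lr=0.2):
    estimate = 0.0                  # the system's running guess
    for x in stream:
        guess = estimate            # trial: predict the next input
        error = x - guess           # error: the data itself is the critic
        estimate += lr * error      # adjust toward what actually arrived
    return estimate

# Usage: a noisy stream centered on 5.0; the estimate converges toward it.
import random
random.seed(1)
stream = [5.0 + random.gauss(0, 0.5) for _ in range(500)]
print(learn_from_stream(stream))    # prints a value close to 5.0
```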

How to Make Sense of the World: Timing

One of the amazing things about the cortex is that it does not process data in the programming sense. It does not receive numerical values from its sensors. The cortex only receives discrete signals, or spikes. A spike is a discrete temporal marker that indicates that a change or event just occurred. It is not a binary value. It is a signal. There is a difference. The brain must somehow find order in the spikes. Here is the clincher. The only order that can be found in multiple sensory streams of discrete signals is temporal order. And there can only be two kinds of temporal order: the signals can be either concurrent or sequential.
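
To illustrate (a toy sketch; the coincidence window and names are my own, not biological values): given nothing but spike timestamps from two sensors, the only judgments available are "concurrent" and "sequential".

```python
# Toy sketch: classifying pairs of spikes by the only two temporal
# orders available -- concurrent or sequential. A spike carries no
# numeric payload, only a timestamp marking that a change occurred.
TOLERANCE = 0.001  # seconds; illustrative coincidence window

def temporal_order(t_a, t_b, tol=TOLERANCE):
    """Return 'concurrent' if two spikes fall within the coincidence
    window, otherwise report which sensor fired first."""
    if abs(t_a - t_b) <= tol:
        return "concurrent"
    return "a-then-b" if t_a < t_b else "b-then-a"

# Usage: spike trains as plain timestamp lists (no values attached).
sensor_a = [0.0100, 0.052, 0.200]
sensor_b = [0.0102, 0.150, 0.199]
for t_a, t_b in zip(sensor_a, sensor_b):
    print(t_a, t_b, temporal_order(t_a, t_b))
```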

This is the key to unsupervised learning. In order to make sense of the world, the brain must have the ability to time its sensory inputs. In this light, the brain should be seen as a vast timing mechanism. It uses timing for everything, from perceptual learning to motor behavior and motivation.

Coming Soon

In my next article, I will explain how sensors generate spikes and how the brain uses timing as the critic for fast and effective unsupervised learning. I will also explain how it creates a fixed set of small elementary concurrent patterns as the building blocks of all perception. It uses the same elementary patterns to sense everything. It also uses cortical feedback to handle uncertainty in the sensory data. Hang in there.

See Also:

Fast Unsupervised Pattern Learning Using Spike Timing
AI Pioneer Now Says We Need to Start Over. Some of Us Have Been Saying This for Years
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is A Hindrance to Progress Toward True AI
The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI
