The Death of the Bayesian Brain

Rebel Science
3 min read · Nov 2, 2017


I’m glad to see that the AI community is finally beginning to realize that probability has nothing to do with intelligence. Some of us have been saying this for at least a decade. “People are not probability thinkers but cause-effect thinkers.” These words were spoken by none other than Dr. Judea Pearl during a 2012 Cambridge University Press interview. Pearl, an early champion of the Bayesian approach to AI, apparently had a complete change of heart. In my opinion, this should have been a wake-up call for the AI community but Pearl’s words seem to have fallen on deaf ears.

The Brain Assumes a Perfect World

We can rule out probability computations because the brain’s neurons are comparatively slow: there is too little time and energy in the brain for heavy computation. The surprising truth is that the brain is rather lazy and computes nothing while it perceives the world. Instead, it assumes that the world is perfectly deterministic and that the world performs its own computations. The laws of classical physics and geometry are precise, universal and permanent; any uncertainty comes from the limitations of our sensors. The brain learns how the world behaves and expects that behavior to be perfect and unwavering. The perceptual process is comparable to a coin-sorting machine: the machine assumes that the sizes of the coins automatically determine which slots they belong to.
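The coin-sorter analogy can be made concrete with a short sketch. Nothing here is computed probabilistically: each coin's diameter deterministically selects its slot. The slot names, diameters, and the first-fit rule are my own illustrative assumptions, not anything specified in the article.

```python
# Hypothetical coin sorter: slots ordered by ascending diameter (mm);
# a coin falls into the first slot wide enough to accept it.
# All values below are made up for illustration.
SLOTS = [("dime", 18.0), ("penny", 19.5), ("nickel", 21.5), ("quarter", 24.5)]

def sort_coin(diameter_mm):
    """Return the slot a coin falls into, or None if it fits nowhere."""
    for name, max_diameter in SLOTS:
        if diameter_mm <= max_diameter:
            return name
    return None

print(sort_coin(17.9))   # a small coin lands in the "dime" slot
print(sort_coin(24.3))   # a large coin lands in the "quarter" slot
```

The point of the analogy: the machine never estimates how *likely* a coin is to be a quarter. The physical regularity of the world (coin size) does all the work, and the mechanism simply exploits it.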

We cannot hope to solve the AGI problem unless we emulate the brain. But how can the brain capture the perfection that is in the world if it cannot rely on its sensors? It turns out that sensory signals are not always imperfect. Every once in a while, even if for a brief interval, they are indeed perfect. When that happens, the brain is ready to capture this perfection in both pattern and sequence memories.

How Does the Brain Handle Uncertainty?

Intuitively, one would expect a pattern recognition neuron in the brain to recognize a pattern if all of its input signals (spikes) arrive concurrently. But, strangely enough, this is not the way it works in the brain. The reason is that patterns are rarely perfect due to occlusions, noise pollution and other accidents. Uncertainty is a major problem that has dogged mainstream AI for decades. The customary solution in mainstream AI is to perform probabilistic computations on sensory inputs. As I mentioned earlier, this is out of the question as far as the brain is concerned because its neurons are too slow. The brain uses a completely different and rather clever solution and we should do likewise.

Pattern recognition is a cooperative process between pattern memory and sequence memory. During detection, sensory signals travel rapidly up the pattern hierarchy and continue all the way to the top sequence detectors of sequence memory, where the actual recognition decisions are made. As soon as enough signals reach a top sequence detector, they trigger a recognition event. The sequence detector immediately fires a recognition signal that travels via feedback pathways back down to the source pattern neurons, which in turn trigger their own recognition events. Thus a pattern neuron recognizes its pattern not when its input signals arrive, but upon receiving a feedback signal from sequence memory. This way, a pattern neuron can recognize a sensory pattern even if the pattern is incomplete or otherwise corrupted.
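The feedback scheme above can be sketched in a few lines of code. This is a minimal toy model under my own assumptions: the class names, the single-level hierarchy, and the 75% "enough signals" threshold are all invented for illustration; the article specifies none of these details.

```python
class PatternNeuron:
    """Toy pattern neuron: accumulates input spikes, but only
    'recognizes' its pattern when feedback arrives from above."""

    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.active_inputs = 0
        self.recognized = False

    def receive_spikes(self, count):
        # Forward pass: record whatever spikes arrive,
        # even if the pattern is incomplete.
        self.active_inputs = min(count, self.n_inputs)

    def receive_feedback(self):
        # Recognition is triggered by feedback from sequence
        # memory, not by the input spikes themselves.
        self.recognized = True


class SequenceDetector:
    """Toy top-level detector: fires once 'enough' upstream signals
    arrive, then sends feedback down to the source pattern neurons."""

    def __init__(self, pattern_neurons, threshold=0.75):  # threshold is assumed
        self.pattern_neurons = pattern_neurons
        self.threshold = threshold

    def detect(self):
        total = sum(p.n_inputs for p in self.pattern_neurons)
        active = sum(p.active_inputs for p in self.pattern_neurons)
        if active / total >= self.threshold:
            for p in self.pattern_neurons:
                p.receive_feedback()
            return True
        return False


# A corrupted pattern (8 of 10 total inputs) is still recognized:
# the detector's threshold is met and feedback propagates down.
neurons = [PatternNeuron(5), PatternNeuron(5)]
neurons[0].receive_spikes(5)
neurons[1].receive_spikes(3)   # occluded/noisy: only 3 of 5 spikes
detector = SequenceDetector(neurons)
print(detector.detect())                    # True
print(all(n.recognized for n in neurons))   # True
```

Note that neither pattern neuron sets `recognized` on its own; recognition flows top-down via feedback, which is what lets an incomplete pattern be recognized at all.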

See Also:

Unsupervised Machine Learning: What Will Replace Backpropagation?
