Deep Learning’s Uncertainty Principle

Carlos E. Perez
Published in Intuition Machine · Apr 6, 2018

DeepMind has a new paper in which researchers uncovered two “surprising findings”. The work is described in “Understanding Deep Learning through Neuron Deletion”. In networks that generalize well, (1) all neurons are important, not just a selective few, and (2) the network is more robust to damage (i.e. neuron deletion). Deep Learning networks exhibit behavior that reminds us of holograms. These results are further confirmation of my conjecture that Deep Learning systems are like holographic memories.

networks which generalise well are much less reliant on single directions than those which memorise.
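
To make the neuron-deletion experiment concrete, here is a minimal sketch of the procedure only, not the DeepMind implementation: a toy NumPy network with random placeholder weights, where an increasing number of hidden units is clamped to zero and test accuracy is re-measured. With a real trained network, the paper’s finding predicts that well-generalizing networks degrade gracefully under this kind of damage.

```python
# Minimal sketch of a neuron-deletion (ablation) experiment.
# The weights and data below are random placeholders, so the numbers
# illustrate the procedure, not the paper's result.
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer network: 64 inputs -> 128 hidden units -> 10 classes.
W1, b1 = rng.normal(size=(64, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)), np.zeros(10)

def forward(x, deleted=()):
    """Forward pass with an optional set of hidden units clamped to zero."""
    h = np.maximum(x @ W1 + b1, 0.0)    # ReLU hidden layer
    h[:, list(deleted)] = 0.0           # "delete" the chosen neurons
    return (h @ W2 + b2).argmax(axis=1)

# Toy evaluation data (in a real experiment: a held-out test set).
x_test = rng.normal(size=(1000, 64))
y_test = rng.integers(0, 10, size=1000)

def accuracy(deleted=()):
    return float((forward(x_test, deleted) == y_test).mean())

# Delete an increasing number of random hidden units and track accuracy.
baseline = accuracy()
for k in (0, 16, 32, 64, 96):
    removed = rng.choice(128, size=k, replace=False)
    print(f"deleted {k:3d} units: accuracy = {accuracy(removed):.3f} "
          f"(baseline {baseline:.3f})")
```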

Holograms are 3-dimensional images created by the interference of light beams. The encoding material of a hologram is a 2-dimensional surface that captures the light field rather than the projection of an image onto the surface. In general, you can take this to a higher dimension, so a hologram for 4D space-time will be in 3D (one dimension fewer). This relationship, in which a lower-dimensional object can represent a higher-dimensional one, is something our human intuition is unable to grasp. Our biological experience has only taught us the reverse: 3D objects projecting onto 2D planes.

https://necessarydisorder.wordpress.com/2018/03/31/a-trick-to-get-looping-curves-with-lerp-and-delay/

At the beginning of the 20th century, light was discovered to have the perplexing quality of being both a wave and a particle. This non-intuitive notion of quantum physics, wave-particle duality, is also expressed in the Uncertainty Principle. An even more general concept is the recent discovery of “holographic duality”. Apparently, nature also binds two different objects, the hologram and its higher-dimensional projection, in an equally bizarre manner:

If strongly correlated matter is thought of as “living” on the 2-D surface of a pond, the holographic duality suggests that the extreme turbulence on that surface is mathematically equivalent to still waters in the interior.

When the particles are calm on the surface, as they are in most forms of matter, then the situation in the pond’s interior is extremely complicated.

There exists a holographic duality in nature that likely also translates to the workings of Deep Learning networks. The greater the generalization of a network, the more entangled its neurons become, and as a consequence the less interpretable the network is. The uncertainty principle, as applied to Deep Learning, can be stated as:

Networks with greater generalization are less interpretable. Networks that are interpretable don’t generalize well.
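
As a rough illustration of how one might put a number on the “interpretability” of individual neurons, here is a sketch of a class-selectivity index, in the spirit of the selectivity and single-direction measures discussed in the DeepMind paper; the activations below are random placeholders, so only the procedure is meaningful. The claim above predicts that in well-generalizing networks the average selectivity of individual neurons is low.

```python
# Sketch of a per-neuron class-selectivity index:
# (mu_max - mu_rest) / (mu_max + mu_rest), where mu_max is a unit's mean
# activation on its preferred class and mu_rest is its mean activation
# averaged over all other classes. Activations here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_units, n_classes = 5000, 128, 10

activations = rng.random(size=(n_samples, n_units))   # hidden activations
labels = rng.integers(0, n_classes, size=n_samples)   # class of each sample

def class_selectivity(acts, labels, n_classes):
    """Return one selectivity score per hidden unit (0 = unselective)."""
    class_means = np.stack([acts[labels == c].mean(axis=0)
                            for c in range(n_classes)])   # (classes, units)
    mu_max = class_means.max(axis=0)
    mu_rest = (class_means.sum(axis=0) - mu_max) / (n_classes - 1)
    return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-12)

sel = class_selectivity(activations, labels, n_classes)
print(f"mean selectivity {sel.mean():.3f}, max {sel.max():.3f}")
```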

This led me to my conjecture in 2016 that “The only way to make Deep Learning interpretable is to have it explain itself.” If you think about it from the perspective of holographic duality, then it is futile to look at the internal weights of a neural network to understand its behavior. Rather, we are better off examining its surface to find a simpler (and less turbulent) explanation.

This leads me to the inevitable conclusion that the best we can do is to have machines render very intuitive ‘fake explanations’. Fake explanations are not false explanations, but rather incomplete explanations with the goal of eliciting an intuitive understanding. This is what good teachers do; it is what Richard Feynman did when he explained quantum mechanics to his students.

In addition, this explains the real-world fragility of symbolic systems. Symbolic rules are abstract interpretations of a system. Artificial Intelligence as envisioned in the late 1950s was based on creating enough logical rules to arrive at human intelligence (see: Cyc). Decades of AI research have been wasted on this top-down approach. An alternative, more promising bottom-up approach (see: Artificial Intuition) is the basis of Deep Learning.

The Holographic Principle is a very compelling clue as to how Deep Learning works. Unfortunately, just like Quantum Physics, it belongs to a realm that is simply beyond our own intuition. Sir Roger Penrose may have been on the right track when he speculated that brains work as a consequence of quantum behavior. However, I have doubts that this is true.

I will grant the observation that we simply don’t have enough detail as to how the brain actually works (see: “Surprise Neurons are More Complex”). There are also good arguments that certain animal capabilities (e.g., navigation in bird brains) are enabled by unique mechanisms found in the physical world.

However, the idea that quantum effects are the explanation not just for human cognition but for animal cognition is a conjecture based on very sparse experimental evidence. The brain is likely more complex than our present artificial models; however, that complexity (like turbulence) does not require quantum effects as an explanation. If you were to explain turbulence as originating from quantum effects, you would have to make a similar kind of argument about cognition. This is the argument that is missing from Penrose.

Penrose argues that you must have quantum effects to arrive at cognition. This currently runs contrary to majority opinion in neuroscience and in Deep Learning research. I am, however, proposing something different: that quantum-like uncertainty is present in neural networks. There is emergent complexity in reality that is due to the internal interactions of massive populations (like the weather). I propose that our brains (like Deep Learning systems) exhibit this holographic duality, but entirely inside the regime of classical mechanics (i.e. composed of deterministic subcomponents).

Fun Fact: J.J. Thomson won the 1906 Nobel Prize in Physics for experiments showing that electrons are particles. His son G. P. Thomson won the 1937 Nobel Prize in Physics for showing that electrons are waves.

https://neurosciencenews.com/silent-type-cells-brain-10822/

https://towardsdatascience.com/paper-summary-nips-2017-the-un-reliability-of-saliency-methods-8ed7774a69aa

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution
