Ptolemy and the Limits of Deep Learning

Carlos E. Perez
Published in Intuition Machine
Jul 10, 2021 · 5 min read
Image: Ptolemaic system, geocentric model (1531), via https://www.sciencephoto.com/media/776342/view/ptolemaic-system-geocentric-model-1531

Let’s begin today with the realization that Ptolemy’s model of the movement of the planets was extremely accurate. It was accurate enough to be very useful for navigators of its time. But it worked well because it was finely tuned to fit observed experimental data.

But what was wrong with Ptolemy’s model is that it did not correctly capture cause and effect. The Earth and the planets revolve around the Sun because of gravity; not everything revolves around the Earth. This was Copernicus’s heliocentric model, an idea that was dangerous to advocate at the time.

Roughly 150 years later, Newton invented calculus and formulated the law of gravity that could be used to mathematically derive the motion of the planets around the Sun. What was inside Newton’s calculus that exposed otherwise unknown patterns?

In calculus, there are sums of infinite series that converge to a fixed number. In these infinite series, there are patterns that let you deduce which numbers they converge to. Calculus works because the sums of these repeating patterns converge to fixed numbers.
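Take, for example, the geometric series in which each term is half the previous one; the pattern alone tells you the limit:

$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=0}^{\infty} \left(\frac{1}{2}\right)^{n} = \frac{1}{1 - \frac{1}{2}} = 2$$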

Curve fitting works in an analogous way: a function is approximated by a sum of simpler functions, each with a different coefficient. The difference is that the fitting algorithm does not need a pattern to arrive at a convergent number.
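Here is a rough sketch of that idea using nothing more than NumPy’s polynomial fit. The curve is expressed as a weighted sum of monomials, and the coefficients are chosen purely to match the sampled data; the target function and the polynomial degree are arbitrary choices for illustration.

```python
# Approximate noisy observations of an unknown curve as a weighted sum of
# monomials (1, x, x^2, ...). The coefficients are chosen only to fit the
# data; no underlying pattern is assumed.
import numpy as np

x = np.linspace(0, 1, 50)
y = np.exp(x) + np.random.default_rng(0).normal(0, 0.01, 50)  # "observed" data

coeffs = np.polyfit(x, y, deg=4)   # fit a degree-4 polynomial
approx = np.polyval(coeffs, x)     # evaluate the weighted sum of monomials
print("max error:", float(np.max(np.abs(approx - y))))
```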

Deep learning has shown that given enough good data, a universal approximator can conjure up the parameters required to fit any function. This is just what Ptolemy’s model was able to accomplish.
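A minimal sketch of that claim, assuming only NumPy: a one-hidden-layer network trained by plain gradient descent fits samples of a sine curve closely, yet nothing in the fitted weights encodes the mechanism that generated the data. The architecture, learning rate, and target here are illustrative choices, not anything prescribed by the argument.

```python
# Fit y = sin(x) with a one-hidden-layer tanh network trained by plain
# gradient descent. The network fits the observed points well, but nothing
# in its weights "knows" that a sine function generated the data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 32 tanh units; the weights are the "conjured" parameters.
W1 = rng.normal(0, 1, (1, 32))
b1 = np.zeros((1, 32))
W2 = rng.normal(0, 1, (32, 1))
b2 = np.zeros((1, 1))

lr = 0.1
for step in range(5000):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    y_hat = h @ W2 + b2        # network output
    err = y_hat - y            # prediction error
    # Backpropagate the mean-squared-error gradient.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0, keepdims=True)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0, keepdims=True)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```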

The problem with Ptolemy’s model is that it does not reveal the actual causal mechanism. That’s also the problem with neural networks. It does not matter whether they can explain how they arrive at a conclusion, because the abstraction they’ve discovered is likely to be wrong.

This conclusion is of course damn obvious, and it’s a surprise that so many deep learning practitioners don’t understand it! Ptolemy was wrong, just as deep learning researchers are wrong about their models.

Mel Conway (see: Conway’s Law) remarks that “Ptolemy’s model might have worked for terrestrial navigators but would not work for solar-system navigators. So first you have to define the domain over which you are collecting data.” Which is a good point: with a narrow enough domain, one can fit anything. The question then is, how well do one’s models work when the reference frame is changed? This is what David Deutsch describes as having a good model.

But here’s the deal. A general intelligence does not need complete models of reality. It only needs to know which models work in which contexts. So navigators of the past used different tools when the sun was up and when the sun was down. Here’s the other curious thing. In the past, it was quite common for people to go about their daily lives speaking different languages depending on what kind of activity they were engaged in. Humans deal with complexity using complementary models.

The limit of deep learning is that it is possible to create very accurate but very wrong models of reality. However, a hallmark of general intelligence is that complexity is handled using an adaptive capability that can leverage a patchwork of complementary models to navigate the world. Deep learning solves the model problem, but it has yet to solve the model coordination problem.

Deep learning is valuable for predicting complex systems in biology and quantum mechanics, but it is still incapable of human-level intelligence. The key question for AGI is how you evolve systems that are capable of curve fitting into systems that are capable of abstraction.

General intelligence systems (i.e., us) are unique in their ability to create abstractions. That’s because we are encumbered in this world with limited computational capabilities. We need abstractions and generalizations to navigate the complexities of this world.

But along the way, in developing ways to simplify a complex world, we discovered recurring patterns that have infinite reach. The models we discovered also allowed us to reason about many other systems and to build universal computational machines.

Obscured from our intuitive understanding of this world is the fundamental reality that everything is of computational origin. But we only discovered this notion after the invention of universal machines.

It took us time to become intuitively familiar with the virtual. This is despite the reality that our brains themselves operate in the virtual.

But what is difficult to grasp is how the physical leads to the virtual. Descartes did not have computers as examples of how this could be possible. Imagine yourself as Descartes, thinking about mind and body, but this time knowing about computers.

Would you not be able to see the bridge between the physical and the virtual? Many still refuse to see it because we are accustomed to systems whose capabilities come from visible physical means. Many neuroscientists are trapped in this fiction.

The mathematics we employ, primarily borrowed from physicists who developed approximation theories of reality (see: perturbation theory and renormalization), remains what we use to explain cognition.

But it’s like attempting to model the function of a microprocessor using the principles of thermodynamics. That approach is simply missing the cognitive gadgets to do the heavy lifting. We are happy to delight ourselves with mathematical complexities while never making real progress.

The intuition pumps that help us reason about general intelligence have always been available to us. They are right there, staring us in the face. Unfortunately, many of our intellectual traditions keep us from seeing the patently obvious.

It’s immensely interesting that flying creatures, with many more freedoms of mobility than ground-bound creatures like us, have developed visually beautiful bodies. What is it about evolution that produces such beautiful, ornate creatures?

Extending this idea further: what is it about human minds that their imagination has created beautiful language to express the world? Why does evolution drive toward this, just as birds are driven toward physical beauty?

Curve fitting is just one of the stepping stones toward general intelligence. But it is the first and correct step. We are the existence proof of this.

I leave you with this quote from Paul Feyerabend on renormalization: “Thus one admits, implicitly, that the theory is in trouble while formulating it in a manner suggesting that a new principle has been discovered.”
