The Immense Complexity of AGI

Carlos E. Perez
Intuition Machine
Jun 27, 2021

We do not know what we don’t know until we attempt to understand why we don’t know what we don’t know. The quest for knowledge begins by understanding our ignorance.

Melanie Mitchell’s paper on why AGI is harder than we think: Why AI is Harder Than We Think

She enumerates 4 fallacies in AI research.

First fallacy: the first-step fallacy (see Hubert Dreyfus’s “tree climbing with one’s eyes on the moon”). I tend to frame this differently from Mitchell: with AGI we really don’t know how long the journey will be, so although any step appears to make progress, we don’t know how many more steps will be necessary to reach the goal of AGI.

Source: Murray Shanahan https://twitter.com/mpshanahan/status/1429044928815026195

Second fallacy: that easy things are easy and hard things are hard, an assumption that Moravec’s paradox exposes. A consequence is that we underestimate the complexity of the cognitive processes required for seemingly simple behavior. We are largely unconscious of our own thought processes; we are essentially intuition machines. We do not know how much computational power is needed to achieve human-brain equivalence.

Hans Moravec — When will hardware match the human brain?
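To make that last point concrete, here is a rough back-of-the-envelope sketch in the style of Moravec’s retina-scaling argument. The specific figures (about 1,000 MIPS for retinal processing and a roughly 75,000-to-1 ratio of brain to retina neural tissue) are approximations of his published estimate, not values from this article; they illustrate only the form of the calculation.

# A Moravec-style scaling estimate in Python (illustrative figures only)
retina_mips = 1_000             # assumed compute for retinal edge/motion detection, in MIPS
brain_to_retina_ratio = 75_000  # assumed ratio of brain neural tissue to retinal tissue
brain_mips = retina_mips * brain_to_retina_ratio
brain_ops_per_sec = brain_mips * 1_000_000   # 1 MIPS = one million instructions per second
print(f"Estimated whole-brain compute: {brain_ops_per_sec:.1e} instructions per second")
# Prints roughly 7.5e13, i.e. on the order of 10^14 instructions per second.

Different assumptions about what counts as the brain’s relevant computation shift this figure by many orders of magnitude, which is exactly why we do not know how much computational power brain equivalence requires.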

Third fallacy: ‘the lure of wishful mnemonics’ (see: Drew McDermott’s “Artificial Intelligence Meets Natural Stupidity”). We use words to describe cognition that are really symbolic placeholders for concepts we don’t truly understand. How a machine does something is very different from how humans do it. Words like ‘understand’ and ‘meaning’ do not have good definitions.

Fourth fallacy: that intelligence can be disembodied. AI research treats intelligence as a ‘brain in a vat’. It may be impossible to develop general intelligence without a body.

https://linkinghub.elsevier.com/retrieve/pii/S0896627319307901

To summarize the above: (1) we don’t know how many breakthroughs are required to achieve AGI; (2) we don’t know how much compute is needed to achieve even the ‘simple’ tasks of a brain; (3) we often confuse the symbols we use with genuine understanding; (4) we lose too many details when we isolate a system for study.

AI is harder than one thinks because researchers have too many blind spots: 12 Blind Spots in AI Research

These are insightful points, but what are the first principles as to why general intelligence is really hard? A theory of general intelligence should make it obvious why achieving synthetic general intelligence is difficult.

These fallacies are actually obvious from the vantage point of irreducible computation. To begin, we have to understand the difference between causation and causality (see: The Meaning of Causality). Biological systems are so immensely complex that scientific knowledge is very far from unraveling their complexity. Life and brains are a consequence of generative models that seek solutions to novel contextual environments (see: A Generative Model for Discovering the Unknown).

At best we have causal models of how bodies and brains work. We employ these models to reason about the causality of the brain, but they are cartoonish models. As a result, we underestimate the difficulty of recreating or reengineering a brain.

Furthermore, biological life and biological brains bootstrap themselves in an open-ended environment. The universe is open-ended, and as a consequence it has created intelligence that navigates this open-endedness (see: Computation, Subjectivity, and OpenEndedness). We are unaware of the multitude of problems that evolution and brain development have solved. We imagine that brains must solve certain problems that we have identified using cartoonish models, but we really don’t have a sense of the complexities of these problems.

The cognitive bias in human symbolic thinking is that we substitute symbols for understanding (see: The Empathic Mind versus the Symbolic Mind). In fact, our civilization would not be possible without symbolic thinking. But we are so embedded in symbolic thinking that we have become unaware of the core of cognition.

We are in fact empathy machines (see: Humans are Empathy Machines). Recognizing this is the first step in understanding why AGI (i.e., synthetic general intelligence) is hard.
