In Search of a Universal Theory of Intelligence

Carlos E. Perez
Published in Intuition Machine
Oct 21, 2019
Source: https://twitter.com/ricard_sole/status/1185677716470845445

An agent’s intelligence is the basket of cognitive gadgets that it has access to. The basket of all possible gadgets is unknown, which is why a complete definition of intelligence is unknowable.

Ricard Sole is wondering how biological brains can be compared with electronic computers. They are of course different, but he wonders what a brain would look like if it followed a von Neumann architecture. Clearly such a brain could not result from evolution. But the speculative drawing (drawn in Leonardo da Vinci style) in his tweet led to a very insightful discussion about the evolution of intelligence. His mental speculation brings up intriguing questions as to whether there are different paths towards biological brains.

I must emphasize that biological or evolutionary design is very different from human technological design. For example, there’s a reason why biology (with only very rare exceptions) doesn’t invent the wheel. I’ve discussed this in greater detail here:

But within biological evolution alone, do there exist other evolutionary paths to brains? Octopuses definitely have different kinds of brains than vertebrates. A comparative study of the evolution of octopus, bird, and mammalian brains should be informative. A recent study does compare octopus brains with vertebrates:

Perhaps there are patterns of universality here that can be discovered in the biological evolution of brains. For instance, we are aware that avian brains have different wiring from the avian equivalent of the mammalian neocortex back to the cerebellum. This indicates the universal importance of the more primitive cerebellum to the higher-order functions of the newer neocortex. It suggests a universal need for maintaining a sensorimotor loop.

The evolution of a brain is universally dependent on the nature of its embodiment and also on its historical development. The coupling of body structure to brain evolution also appears to be universal. I would argue that the unique development of human cognition can be attributed to the unique dexterity of human hands and vocal organs:

Coincidentally, the octopus (which evolved separately from vertebrates) has only 500 million neurons, yet it is unusually intelligent compared to animals with similar counts (rabbits have about the same number of neurons). I suggest that the octopus’s intelligence is a consequence of its extremely dexterous body and arms. The octopus’s intelligence, however, is likely hindered by a lack of social cognition, a lack of vocal communication, and a very short lifespan.

Another universality is that intelligence is related to the cognitive pressures of social cognition. Animals with larger brains tend to have rich social environments. Elephants have remarkable navigation memories and live in large social structures. The songs of whales can travel great distances, giving them an internet-like network that can influence their Dunbar number. A comparison of the brain sizes of whales and dolphins does indicate a correspondence with social cognition (see: https://www.nature.com/articles/s41559-017-0336-y.epdf). It is intriguing that the pilot whale has more than twice as many neurons in its neocortex as humans. It’s also surprising that pilot whales exhibit REM sleep that other whales do not; REM sleep is another universality. What cognitive pressure demands all these neurons? Why does the pilot whale need them?

There is also an interesting twist about brain size and intelligence. Present-day Homo sapiens brains are actually 20% smaller than Cro-Magnon brains (28,000 years ago). Feral animals, in general, require more cognitive capabilities than their domestic counterparts. Our agricultural Neolithic civilizations, established 7,000–10,000 years ago, have led to specialized brains with lower cognitive capacity. We are thus likely to be 20% less intelligent than our Cro-Magnon Paleolithic ancestors! There are limits to employing an organism’s brain size in evaluating its ‘intelligence’.

In studying the evolution of many kinds of brains, we can discover possible universalities that can inform our definition of intelligence.

There are of course many dimensions of intelligence, and each dimension relates to the specific skills it enables. Howard Gardner’s theory of multiple intelligences is a classic example of this notion: humans exhibit strengths in different kinds of intelligence. Specifically, these are interpersonal, intrapersonal, verbal, logical, spatial, rhythmic, naturalistic, and kinaesthetic intelligence. In each case, we see how different intelligences evolve based on the different ways we interact with our environment.

Different species have different kinds of intelligence. For example, a dog’s olfactory intelligence is superior to a human’s. You can only compare intelligence by measuring along a particular dimension, and each dimension defines a narrow set of tasks. If you test dogs versus humans on a smelling task, then dogs will be more intelligent. If you test humans versus hand calculators on long division, then one could absurdly argue that a hand calculator is more intelligent. That said, we conventionally assign intelligence to human-like capabilities. That is, there is a narrow set of tasks that we define human intelligence to be. We will explore why this is so in more detail.

We know from biology that an organism’s intelligence is coupled to its environment. There’s no instance of intelligence in biological evolution that fits a definition of task-independence. The fitness of organisms to their environments is shaped by evolution. The increase of an organism’s cognitive capacity is driven by the complexification of the environment’s participants and thus of the environment itself. This reveals an evolutionary niche that a species like humans can exploit. Humans build societies that become increasingly complex. A consequence of this is that it crowds out natural biology, and its evolutionary principles differ from natural biology’s.

One can thus define anthropocentric intelligence as a subset of strategies that exploits complexification to thrive in an environment. Competence in a subset of tasks (not independence from tasks) is what defines intelligence, and thus it is a contextual definition (i.e. it depends on context). That subset of tasks must, of course, include not only inference but also learning. These include the following kinds of learning (interpolative, extrapolative, and ingenuity):

So far we’ve coupled our definition of intelligence with that of evolution. But one might propose a framework with three different kinds of evolution: (1) natural evolution, (2) the evolution of minds within societies, and (3) information-age evolution. The second kind is where biological minds create new methods and tools to remake their world and enhance survival. The evolution of minds and societies happens at a much faster time scale than natural evolution. The third kind of evolution is driven by automation (i.e. computers). Thus the nature of fitness in each kind of evolution differs because each environment differs.

Definitions of intelligence based on Kolmogorov complexity (see: https://arxiv.org/abs/0712.3329) tend to mirror Shannon’s information theory. Shannon’s information theory describes information communication or storage capacity. It is a definition that is independent of semantics. It seems to me that the motivation for defining intelligence as ‘agnostic of tasks’ comes from Cartesian dualism: the mind is independent of the body, and so intelligence is independent of biology. It isn’t a definition that is relevant to the understanding of human brains. This definition of intelligence needs to walk up the semiotic hierarchy and arrive at symbols that are disentangled from reality. It’s a kind of intelligence that is abstract and computational. It’s different from the intelligence that we find in autonomous agents and all of biology.
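For reference, the universal intelligence measure proposed by Legg and Hutter in the paper cited above takes roughly the following form (the notation here is my paraphrase of their definition):

$$
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
$$

Here $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^{\pi}$ is the expected cumulative reward agent $\pi$ obtains in $\mu$. The definition deliberately averages over every computable task, weighting simpler tasks more heavily; this is precisely the ‘agnostic of tasks’ move questioned here.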

Furthermore, Shannon’s information theory does not inform higher orders of information such as the “aboutness” of information:

Higher-order use of information is demanded by intentional agents that seek fitness within environments. Stuart Kauffman calls this the “mattering in matter”. Agents must make sense of their environments, and this is pragmatically done by reasoning with indexical information. In biological environments, this involves messy information (i.e. partial and incomplete information).

The other motivation behind definitions of intelligence is Occam’s razor. This is what motivates the idea that compression equals intelligence. It takes the question to the meta-level, seeking the most compact explanation of intelligence in the hope that it is independent of tasks. But what is really sought are mechanisms of intelligence that are universal across all conceivable tasks. To discover universality, we would have to sample the multitude of tasks that exist and have yet to exist in an open-ended reality of emergent environments. A compression algorithm for this appears to be in violation of the Church–Turing thesis. Intelligence is always measured relative to a set of narrow tasks, and most definitions tend to fall apart when we consider open-endedness.
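To make the compression intuition (and its narrowness) concrete, here is a minimal Python sketch of my own, not taken from the article or the cited paper. It uses zlib’s compressed length as a crude, computable proxy for Kolmogorov complexity; the proxy only captures the kinds of regularity the compressor happens to model, which is exactly the task-relativity problem raised above.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length after zlib compression: a crude, computable stand-in
    for the (uncomputable) Kolmogorov complexity of the data."""
    return len(zlib.compress(data, 9))

# A highly regular sequence compresses down to almost nothing...
regular = b"ab" * 5_000
# ...while random bytes of the same length do not shrink at all.
noise = os.urandom(10_000)

print(compressed_size(regular))  # tiny: the pattern is easy to model
print(compressed_size(noise))    # about 10,000 or slightly more: nothing to exploit
```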

We may conclude that a universal definition of intelligence sets a very low bar: intelligence is the capability of solving tasks more efficiently than a random process. In the space of possible minds, intelligent minds are those that outcompete minds that act purely at random.
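As a toy illustration of this low bar (the task, the budget, and the function names below are entirely my own invention), a simple hill-climber clears the bar that a pure random guesser sets on a hidden-bitstring task:

```python
import random

TARGET = [random.randint(0, 1) for _ in range(40)]  # hidden bitstring defining the task

def score(guess):
    """Task performance: how many positions match the hidden target."""
    return sum(g == t for g, t in zip(guess, TARGET))

def random_agent(budget: int) -> int:
    """The baseline: the best score found by pure random guessing."""
    return max(score([random.randint(0, 1) for _ in TARGET]) for _ in range(budget))

def hill_climber(budget: int) -> int:
    """Keep any single-bit mutation that improves the score; revert the rest."""
    guess = [random.randint(0, 1) for _ in TARGET]
    best = score(guess)
    for _ in range(budget):
        i = random.randrange(len(guess))
        guess[i] ^= 1
        new = score(guess)
        if new > best:
            best = new
        else:
            guess[i] ^= 1  # revert a mutation that did not help
    return best

print("random guesser:", random_agent(200))  # typically around 27-29 out of 40
print("hill climber  :", hill_climber(200))  # typically at or near 40 with the same budget
```

By this minimal criterion the hill-climber is ‘intelligent’ on this one narrow task and nothing more, which is the point: the bar says nothing about generality.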

Let’s go back to the human complexification strategy and how it might relate to a definition of intelligence. Intelligence with respect to the information age might lead to a clearer definition. This is realized in the ideas of recombinant evolution:

In the information age, which consists of environments in virtual worlds, we can achieve symbol grounding (i.e. semantic attachment) relative to abstract concepts. Intelligence in this “clean room” environment can be defined with respect to accumulated systematic methods of reasoning. So a logic proof system can be defined to be more efficient at solving a mathematical task than a comparable human. The more civilization transitions into a virtualized world, the more likely humans are to find synthetic minds that are ‘more intelligent’.
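As a small illustration of such a symbol-grounded “clean room” (a Lean 4 sketch of my own, not from the article): a proof assistant can mechanically verify mathematical statements, which is the sense in which a proof system can be said to outperform a comparable human on a narrowly defined task.

```lean
-- Machine-checked proofs in Lean 4: the system verifies each statement
-- mechanically, with no appeal to human judgment.
example : 2 + 2 = 4 := rfl

-- A statement about all natural numbers, discharged by a core library lemma.
example (n : Nat) : n + 0 = n := Nat.add_zero n
```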

Classical definitions of computation tend to favor sequential processes. A consequence is that more natural parallel processes, such as evolution, tend to be ignored. A properly informed definition of intelligence must therefore take into account how to scale parallel cognition. The problem of scaling intelligence across parallel agents is, in fact, a key problem of artificial intelligence. Our biological brains consist of multitudes of cognitive agents; how these agents coordinate to produce emergent intelligent behavior is a universal characteristic of intelligent biological agents.

Parallel cognition also occurs at the scale of civilizations. There are plenty of instances in human history where civilization has acted in a less than intelligent manner (a.k.a. stupidity), where stupidity is defined as any process that performs worse than random chance. Unfortunately, as reflected by our inability to respond to the challenges of climate change, our civilization-scale intelligence is in collapse. Human civilization is indeed the “paper clip maximizer” that we all fear. It is mindlessly consuming the resources of the planet and exponentially destroying its ecology. The problem is that humans have paleolithic brains, medieval institutions, and god-like technologies.

The development of a “Theory of Intelligence” begins with identifying universal patterns across many instances of intentional agents and their associated environments. But framing this problem as a “Theory of Intelligence” is analogous to calling Shannon’s theory of communication a “Theory of Information”. It places the focus on a term (i.e. intelligence or information) that is ill-defined. Perhaps it is more useful to describe the quest as a “Theory of Competence”. This way, we don’t conflate competence with comprehension, as we do when we use the word ‘intelligence’.

The exhaustive set of cognitive gadgets that intelligence makes use of is undefined. Intelligence is defined by the set of cognitive gadgets that it has access to. There is no formula that can capture this open-endedness. There is no measure that exists in all environments.

Further Reading

Artificial Intuition: The Deep Learning Revolution
