AGI and the Empathy Prior

Carlos E. Perez · Intuition Machine · Jun 30, 2021

Ben Goertzel has a set of videos explaining his theory of general intelligence. It is instructive for me to understand his framing so that I can compare it with, and refine, my own ideas.

Goertzel’s approach begins with a formulation of intelligence that has no resource limits (see: Hutter and Legg, AIXI). From this definition, he formulates alternatives with resource constraints.

His model is thus a pragmatic specialization of a universal intelligence model. He then applies additional constraints based on cognitive architectures inspired by human intelligence.

He argues, however, that there can be many kinds of general intelligence that need not have the same flavor as human intelligence, for example an intelligence that is devoid of an ‘emotion module’.

His system is modeled as a hypergraph together with rewrite rules over that hypergraph. The rewriting algorithm is driven by resource constraints: specifically, transformations that lead to functions requiring less computation are favored.

This hypergraph is shared across a multitude of algorithms. The result is a kind of computation that works like a trampoline (or relay race), with one algorithm taking over the work from another depending on the problem being solved.

His approach is reflexive in that programs are themselves transformed via different functional morphisms. The formulation amounts to a general approach to computation that is reflexive and self-optimizing.

Learning is achieved by rewriting existing algorithms into more computationally efficient ones. It seems to me that his original framing implies that general intelligence requires enormous compute, and thus the solution is to reduce that compute.
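
To make this concrete, here is a minimal toy sketch of cost-guided rewriting. It is my own illustration, not OpenCog or Hyperon code: simple nested expressions stand in for hypergraph fragments, each rule proposes a local rewrite, and a rewrite is kept only when it lowers an explicit compute-cost estimate, so repeated passes relay the expression toward a cheaper equivalent form.

```python
# Toy sketch of cost-guided rewriting: nested tuples stand in for hypergraph
# nodes; each rule proposes a rewrite; a rewrite is kept only when it lowers
# a compute-cost estimate. All names are illustrative, not from OpenCog.

def cost(expr):
    """Count operator nodes as a crude proxy for compute cost."""
    if not isinstance(expr, tuple):
        return 0
    return 1 + sum(cost(arg) for arg in expr[1:])

def rule_double_negation(expr):
    # neg(neg(x)) -> x
    if isinstance(expr, tuple) and expr[0] == "neg":
        inner = expr[1]
        if isinstance(inner, tuple) and inner[0] == "neg":
            return inner[1]
    return None

def rule_add_zero(expr):
    # add(x, 0) -> x
    if isinstance(expr, tuple) and expr[0] == "add" and expr[2] == 0:
        return expr[1]
    return None

RULES = [rule_double_negation, rule_add_zero]

def rewrite_once(expr):
    """Try each rule at the root, then recurse; keep only cost-reducing rewrites."""
    for rule in RULES:
        candidate = rule(expr)
        if candidate is not None and cost(candidate) < cost(expr):
            return candidate, True
    if isinstance(expr, tuple):
        head, *args = expr
        new_args, changed = [], False
        for arg in args:
            new_arg, was_changed = rewrite_once(arg)
            new_args.append(new_arg)
            changed = changed or was_changed
        return (head, *new_args), changed
    return expr, False

def optimize(expr):
    """Relay-style loop: keep handing the expression back to the rules
    until no rule can reduce its cost any further (learning as rewriting)."""
    changed = True
    while changed:
        expr, changed = rewrite_once(expr)
    return expr

# add(neg(neg(x)), 0)  ->  x
print(optimize(("add", ("neg", ("neg", "x")), 0)))
```

In the real system the rules would themselves be learned and the cost model would reflect actual resource usage; the sketch only shows the shape of learning-as-rewriting over a shared structure.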

So general intelligence in this framing appears to be analogous to an optimizing compiler.

But just like the ‘Reward is Enough’ stance of DeepMind, there is something not quite right about an ‘An Optimizing Compiler is Enough’ approach to general intelligence. The DeepMind framing is flawed because it sidesteps the complexity of the system that defines the rewards. Goertzel’s approach appears to say that with the judicious selection of appropriate algorithms and modules, one can implement general intelligence.

However, I suspect DeepMind and Goertzel are banking on a system that bootstraps itself. Even Schmidhuber says the same with his Gödel machine. A minimum characteristic of general intelligence is that it is able to bootstrap itself. The fascinating thing about artificial neural networks is that they are able to interact with the environment and bootstrap an algorithm that predicts it. However, unlike humans, these systems are devoid of abstraction-making.

Is my theory of general intelligence significantly different from those proposed by DeepMind, Goertzel, or Schmidhuber? I suspect it is, because its framing is entirely different. The difference is in the priors. DeepMind does not commit to any priors. Goertzel commits to a ‘communication prior’. Bengio has an idea for a consciousness prior.

I actually did not understand Yoshua Bengio’s ‘Consciousness Prior’ until Goertzel talked about it (see: The Consciousness Prior). So I skimmed through the paper, trying to confirm my new understanding. Bengio describes consciousness as a System 2 phenomenon.

I was actually thinking of something related which I will call ‘The Subjective Prior’. This is a fundamental part that is missing in AGI proposals. A general intelligence is something that is alive. Something that is alive has a subjective perspective. Therefore, its agency must have a subjective prior.

A subjective prior is not the same as a consciousness prior. A consciousness prior is an awareness of thoughts (see: https://en.wikipedia.org/wiki/Higher-order_theories_of_consciousness). A subjective prior is more fundamental. It is an awareness of self. This awareness may be unconscious.

Furthermore, this awareness of self can be expanded to awareness of self while embedded in space as well as in time (the past, present and future).

The sophistication of a general intelligence relates to the expansion of its individuality. A social animal has its identity coupled with other animals in its social grouping, so its awareness of self is much larger than that of a non-social animal (see: The Fluid Nature of Individuality).

A fundamental aspect of cognition is that it frames what is perceived within a specific reference frame. Meaning can only be instantiated via a mapping to a subjective reference frame. In its most general sense, empathy is the mapping of the behavior of another individual into the reference frame of a subjective observer. If you have ever read Lacan, this mapping from the observed into the subjective can be a complex affair (see: Jacques Lacan, Wikipedia).
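
As a loose illustration, and purely my own toy construction rather than a model drawn from any of the work cited here, this reference-frame mapping can be pictured as re-expressing another agent’s observed motion in the observer’s own egocentric frame and then naming it in the observer’s own action vocabulary:

```python
import math

# Purely illustrative toy: "empathy" pictured as re-expressing another agent's
# observed motion in the observer's own (subjective) reference frame.
# The 2D pose model and all names here are assumptions made for this sketch.

def to_egocentric(observer_heading, observed_motion):
    """Rotate a world-frame motion vector into the observer's egocentric frame.

    In the egocentric frame, +x is the direction the observer faces and +y is
    the observer's left.
    """
    wx, wy = observed_motion
    c, s = math.cos(-observer_heading), math.sin(-observer_heading)
    return (c * wx - s * wy, s * wx + c * wy)

def describe(ego_motion):
    """Name the motion in the observer's own action vocabulary."""
    x, y = ego_motion
    if abs(x) >= abs(y):
        return "moving the way I face" if x > 0 else "moving opposite to me"
    return "moving to my left" if y > 0 else "moving to my right"

# Another agent moves east, (1, 0), in world coordinates. An observer facing
# north (heading = pi/2) experiences that same motion as going to its right.
print(describe(to_egocentric(math.pi / 2, (1.0, 0.0))))  # -> moving to my right
```

The point of the toy is only the direction of the mapping: the observed behavior acquires meaning once it is expressed in the observer’s own frame, which is what an empathy prior would ask a general intelligence to do by default.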

However, the general intelligence that is unique to humans can be found in the complexity of interactions between humans. Language is but a shadow of that interaction. The essence of it is in the language games we play.

The nature of human learning is equally odd. We learn because we participate. We learn because we are embedded in a sensory-motor loop. We learn because we are engaged, and this manifests in the engagement of our attention. Unlike a computer, where new skills can be downloaded, humans learn by participatory experience. As Feynman said, what we cannot create, we cannot understand. We learn by recreating. We learn by doing.

This is not a bug of general intelligence; rather, it is a feature. A skill is only learned when an agent is able to recreate that skill for itself.

So in my own theory of general intelligence, I commit to something more sophisticated than a subjective prior. I commit to an empathy prior. An ‘Empathy Prior’ is the shortest way I can explain the difference between my theory and other AGI theories in the wild.

For more on this construction, see:
