Achieving Artificial General Intelligence (AGI) via the Emergent Self

Carlos E. Perez
Published in Intuition Machine · 8 min read · Jun 21, 2018



“The essence of general intelligence is the capacity to imagine oneself” — myself

Recognize that to gain the perspective that comes from seeing things through another’s eyes, you must suspend judgement for a time — only by empathizing can you properly evaluate another point of view. — Ray Dalio

Moravec’s paradox is the observation made by many AI researchers that high-level reasoning requires less computation than low-level unconscious cognition. This is an empirical observation that goes against the notion that greater computational capability leads to more intelligent systems.

However, we have today computer systems with super-human symbolic reasoning capabilities. Nobody is going to argue that a man with an abacus, a chess grandmaster or a champion Jeopardy player has any chance of besting a computer. Artificial symbolic reasoning is a technology that has been available for decades now, and this capability is without argument superior to what any human can provide. Despite this, nobody will claim that computers are conscious.

Today, with the discovery of deep learning (i.e. intuition or unconscious reasoning machines), low-level unconscious cognition is within humanity’s grasp. In this article, I will explore the ramifications of a scenario where machine subjectivity or self-awareness is discovered prior to the discovery of intelligent machines. This is a scenario where self-awareness is not a higher reasoning capability.

Let us ask: what if self-aware machines were discovered before intelligent machines? What would the progression of breakthroughs look like? What is the order of the milestones?

In a previous article, I wrote about different intelligences being orthogonal to each other and not necessarily parallel or ordered. There is plenty of evidence in nature of simple subjective animals that exist without any advanced reasoning capabilities. Let’s assume it is true that simple subjective machines form the primitive foundations of cognition. How do we build smarter machines from simple subjective machines, that is, machines with simple self-models?

Previously, it has been shown that ego-motion (i.e. a bodily self-model) is discovered via a curiosity-inspired algorithm prior to the discovery of object detection (i.e. a perspectival self-model) and object interaction (i.e. a volitional self-model). In other words, the foundation for object detection and interaction is a self-awareness of where one’s body is with respect to space, followed by one’s awareness of perspective and then one’s awareness of agency.

The architecture that achieves ego-motion also allows the reconstruction of 3D space from images captured at different viewpoints. Object detection is thus enhanced: objects are recognized from different perspectives, and objects that occlude one another are located in 3D space. Furthermore, to interact with an object in 3D, a body needs to know where its articulator is relative to the objects it can interact with. Therefore, in this example, the more computationally demanding task of ego-motion is a requirement for performing a less demanding capability.
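To make this concrete, here is a minimal sketch (my own illustration, not taken from any cited work) of a curiosity-style ego-motion learner: a small network predicts the body’s own pose change between two consecutive observations, and the prediction error doubles as an intrinsic reward. The module, dimensions and data below are all illustrative assumptions.

```python
# Minimal sketch: learn ego-motion (a bodily self-model) from pairs of observations.
# Assumed shapes and names; random tensors stand in for a simulated body's sensors.
import torch
import torch.nn as nn

class EgoMotionNet(nn.Module):
    """Predicts the pose change (dx, dy, dtheta) between two observations."""
    def __init__(self, obs_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # where did *I* move?
        )

    def forward(self, obs_t, obs_t1):
        return self.net(torch.cat([obs_t, obs_t1], dim=-1))

def curiosity_step(model, optimizer, obs_t, obs_t1, true_motion):
    """One training step; the prediction error also serves as the curiosity signal."""
    pred = model(obs_t, obs_t1)
    loss = ((pred - true_motion) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()  # high error = surprising motion = more curiosity

model = EgoMotionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs_t, obs_t1 = torch.randn(8, 64), torch.randn(8, 64)
true_motion = torch.randn(8, 3)
print(curiosity_step(model, opt, obs_t, obs_t1, true_motion))
```

Once such a module exists, the same learned representation can be reused downstream: the 3D layout it implies is exactly what object detection and occlusion reasoning build on.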

The common notion of the progression of intelligence is that higher-level cognitive capabilities require more computation. Moravec’s paradox is a hint that this is not true, and that consciousness is perhaps the first cognitive capability that needs to be discovered and not the last!

Anil Seth enumerates five different kinds of self-models: the bodily, perspectival, volitional, narrative and social selves. These selves are not orthogonal and are perhaps partially ordered, with some being prerequisites for others. In his essay “The Real Problem” he writes:

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency — of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves.

Anil Seth argues that the problem of understanding consciousness is less mysterious than it is made out to be.

An AGI roadmap would, therefore, require learning all five of these selves in the order described above. Autonomy, for example, can be achieved by learning the volitional self, without the need to tell stories or to participate effectively in a social setting. To achieve Conversational Cognition, however, the narrative and social selves need to be present.

Brenden Lake described a roadmap toward “Building Machines that Learn and Think like People.” In his paper, he argues for the following capabilities: (1) build causal models of the world that support explanation and understanding, (2) ground learning in intuitive theories of physics and psychology, and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. It is unclear whether Lake prescribes an order in which one skill is a prerequisite of another. I propose that we use the notion of self-models to prescribe an ordering.

Here is how the various selves map to the skills that are learned from each.

Note: Volitional should be below perspectival in an enactivist framework.

In this formulation, all the world models include the notion of self-models. These are all ‘Inside Out’ architectures. To understand compositionality, one needs an understanding of the body. To predict physics requires awareness of where and in what direction one is looking when one makes an observation. To understand learning, one needs to understand interaction. To understand causality, one needs to imagine stories. To understand psychology, one needs to understand oneself. In summary, you cannot develop any of the skills that Brenden Lake describes without prior grounding in a model of the self.
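As a rough illustration (an encoding I am assuming for this article, not something Lake or Seth specify), the mapping above can be written as a small prerequisite table, with each skill grounded in one self-model and each self-model resting on the ones before it:

```python
# Illustrative encoding of the claimed ordering; names are assumptions.
# Order follows Seth's listing; the note above suggests swapping volitional
# and perspectival in an enactivist framework.
SELF_ORDER = ["bodily", "perspectival", "volitional", "narrative", "social"]

# Which self-model grounds which of Lake's capabilities (per the paragraph above).
GROUNDING = {
    "compositionality": "bodily",
    "intuitive physics": "perspectival",
    "learning-to-learn": "volitional",
    "causal models": "narrative",
    "intuitive psychology": "social",
}

def prerequisites(skill):
    """All self-models that must already be learned before acquiring `skill`."""
    idx = SELF_ORDER.index(GROUNDING[skill])
    return SELF_ORDER[: idx + 1]

print(prerequisites("intuitive physics"))     # ['bodily', 'perspectival']
print(prerequisites("intuitive psychology"))  # every self-model, in order
```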

Current AI orthodoxy does not include a notion of self-models. I suspect this is due to either (1) the tradition in science of preferring an objective model of the world or (2) the assumption that self-awareness is a higher-level cognitive capability. The latter implies that research in lower-level cognition doesn’t need to take a self-model into account.

Cognition is a constraint satisfaction problem that involves the self, its context, and its goals. Distinguishing inference from learning is actually incorrect. Inference is learning, and both are constraint satisfaction problems. A self-model is what provides meaning, awareness is what makes the context explicit, and intentions are the motivations for goals.

An important point here is that the self, the context and the goal are all mental models. Although they may have corresponding real analogs, constraint satisfaction is achieved only with approximate mental models that are ‘hallucinated’ by the automation. These models are also not static; they change through interaction with the environment.
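Here is a toy sketch of this framing under my own assumptions: the self, context and goal are three approximate vectors, and a single relaxation loop, which serves as both inference and learning, nudges them toward mutual consistency with an observation.

```python
# Toy constraint satisfaction over three "hallucinated" mental models.
# The constraints and update rules are illustrative assumptions, not a published method.
import numpy as np

def relax(self_m, context_m, goal_m, observation, steps=100, lr=0.05):
    """Jointly adjust the three models to satisfy two soft constraints:
    (1) the context model should explain the observation,
    (2) the self model should mediate between context and goal."""
    for _ in range(steps):
        ctx_err = context_m - observation               # constraint (1) residual
        self_err = self_m - 0.5 * (context_m + goal_m)  # constraint (2) residual
        context_m -= lr * (ctx_err - 0.5 * self_err)    # gradient of the combined cost
        self_m -= lr * self_err
        goal_m -= lr * (-0.5 * self_err)
    return self_m, context_m, goal_m

rng = np.random.default_rng(0)
self_m, context_m, goal_m = (rng.normal(size=4) for _ in range(3))
observation = rng.normal(size=4)
print(relax(self_m, context_m, goal_m, observation))
```

The point of the sketch is only that the same relaxation updates all three models at once; there is no separate inference pass and learning pass.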

There is perhaps a synergy between the different selves such that some are prerequisites for others. The reason the order of skills is extremely important is that the more abstract levels must have the grounding found only in the lower levels. Furthermore, skills that are assumed to be context-free are not independent of the context of the self. If we assume Moravec’s paradox to hold across all cognitive levels, then it is the unconscious bodily self-model (the instinctive level) that requires the greatest computational resources. This implies that, contrary to popular consensus, it takes fewer and fewer resources as you move up the cognitive levels.

https://medium.com/intuitionmachine/the-key-to-agi-learning-to-learn-new-skills-a2ce49d9bb0b

Said differently, it takes less and less effort to make exponential progress. This conclusion is very different from the more popular notion that it takes more and more computation to achieve artificial intelligence.

The reason Anil Seth believes that achieving AGI is an insurmountable problem is that creating an artificial bodily self-model may be too difficult. He writes:

“We are biological, flesh-and-blood animals whose conscious experiences are shaped at all levels by the biological mechanisms that keep us alive. Just making computers smarter is not going to make them sentient.”

The biggest hurdle is at the beginning (i.e. the bodily self-model). This kind of automation does not exist today, so the acceleration only happens once this capability is achieved; in the meantime, innovation will be determined by brute-force computational resources. I do agree that more computational resources alone won’t ignite general intelligence. However, simple self-model machines may be the foundation that gets you there. The uneasy reality is that this looks very much like a slippery slope: create bodily self-models and you can easily slip into a ditch where you accidentally discover AGI.

The Inside Out architecture is key because it is what we have in the neocortex. Today’s deep learning is similar to insect-like intelligence, which is just stimulus-response.

Can we build narrow slices of this cognitive stack and have the stack broaden out over time? For example, a bodily self-model that does not have the entire sensor network that a human has; such a system would have gaps in its understanding. In short, can we avoid an all-or-nothing situation and build this incrementally?

The rough sketch is this: the bodily self-model is developed by learning as an embodied entity in a simulated virtual world, with a subset of sensors proportional to what can be simulated in that world. The objective is to learn the three lower-level selves (i.e. the bodily, perspectival and volitional selves). This is already being done today. Once you see this develop with high fidelity, I think you will see a more rapid acceleration. There is a tipping point here, and that tipping point may be much closer than anyone has imagined! It is indeed scary that I cannot show that the roadmap described here is incorrect.
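A minimal sketch of this incremental route, using an entirely hypothetical toy environment (ToyBodyEnv and BodilySelfModel are illustrative names, not an existing library): a body with a single proprioceptive sensor learns to predict the consequence of its own actions, which is about the smallest bodily self-model one can build.

```python
# Hypothetical one-sensor embodied learner; everything here is an illustrative assumption.
import random

class ToyBodyEnv:
    """A 1-D body whose only sensor is its own position (proprioception)."""
    def __init__(self):
        self.pos = 0.0
    def step(self, action):        # action in {-1, 0, +1}
        self.pos += action * 0.1   # the body's (unknown) motion gain
        return self.pos

class BodilySelfModel:
    """Learns 'if I act a, my sensed position changes by w * a' (a single weight)."""
    def __init__(self):
        self.w = 0.0
    def predict(self, pos, action):
        return pos + self.w * action
    def update(self, pos, action, next_pos, lr=0.1):
        err = next_pos - self.predict(pos, action)
        self.w += lr * err * action   # delta rule
        return abs(err)               # prediction error = curiosity signal

env, model = ToyBodyEnv(), BodilySelfModel()
for _ in range(200):
    pos = env.pos
    action = random.choice([-1, 0, 1])
    next_pos = env.step(action)
    model.update(pos, action, next_pos)
print("learned motion gain:", round(model.w, 3))  # converges toward 0.1
```

Broadening the stack would then mean adding sensors and higher self-models on top of this loop, rather than rebuilding it from scratch.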

Further Reading

Varieties of Self Reference

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution


Exploit Deep Learning: The Deep Learning AI Playbook
