The Stepping Stones of AGI and Infant Cognitive Development

Carlos E. Perez · Published in Intuition Machine · 6 min read · Jul 18, 2018


Photo by li tzuni on Unsplash

I will make the case that the road to AGI must be built through the development of stepping-stone capabilities, and that these capabilities should be inspired by human infant cognitive development.

To bolster my case, let me recall several principles in play here that I’ve discussed previously.

The first principle is that new skills must be acquired through “embodied learning.” It is practically impossible to write down specifications for even the simplest of tasks (see Moravec’s paradox), so the only viable mechanism we have for training complex cognitive machines is inside a virtual environment. To learn intuition about an environment, an agent must first interact with that environment. This interaction grounds the agent’s own model of self with respect to the environment. Absent this grounding, there is no concept of meaning, and without meaning there is no concept of common sense: no concept of what is possible and impossible within an environment, and no way of understanding which actions, or sets of actions, are feasible or infeasible.
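To make this interaction loop concrete, here is a minimal sketch using the Gymnasium library as a stand-in virtual environment. The random policy is a placeholder for whatever learning agent is being grounded; nothing here is specific to any particular experiment.

```python
# Minimal embodied-learning loop: the agent acts, observes the
# consequences, and could use the (observation, action, reward)
# stream to ground a model of itself in the environment.
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)

for step in range(200):
    # Placeholder policy: a real embodied learner would select
    # actions using its current model of the environment.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()
```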

There have been remarkable strides in systems that are first trained on the basic skill of “ego-motion.” In these experiments, a system is first trained with an awareness of its physical self and then grows into higher capabilities such as object attention and then object interaction. There is solid evidence that certain skills are good prerequisites for other skills. This is intuitively obvious; what is not obvious is how to sequence the development of these skills.
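One way this sequencing could look in practice is sketched below. This is my own illustration in PyTorch, not the setup from the experiments mentioned above: an encoder is first trained on an ego-motion task, then reused as a frozen backbone for a higher-level skill such as object attention. All shapes, heads, and losses are placeholders.

```python
# Sketch of skill sequencing: a visual encoder pretrained on an
# ego-motion task is reused as the backbone for a later skill.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # shared visual backbone
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())    # -> 32-dim feature

ego_motion_head = nn.Linear(32, 6)   # stage 1: predict self-motion
attention_head = nn.Linear(32, 4)    # stage 2: locate an object

# Stage 1: train encoder + ego-motion head (dummy batch shown).
stage1 = nn.Sequential(encoder, ego_motion_head)
loss = stage1(torch.randn(8, 3, 64, 64)).pow(2).mean()
loss.backward()

# Stage 2: reuse (and here freeze) the pretrained encoder, training
# only the new head: the earlier skill serves as a building block.
for p in encoder.parameters():
    p.requires_grad = False
stage2 = nn.Sequential(encoder, attention_head)
```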

The complexities of evolution also give us hints that the above is correct. Evolution makes use of what already exists rather than re-inventing; it prefers reuse. To achieve higher complexity, it begins from the current state of possibilities and generates new possibilities only from existing components. Evolutionary innovation works through reuse, which I have discussed in more detail previously. If we assume that cognitive learning employs common (or perhaps identical) mechanisms as evolution, then what we consider cognitive skills can be viewed as structural building blocks to be reused for higher cognition. What is not entirely obvious is how one skill serves as a building block for another. It is always important to remember, however, that evolution does not build the optimal solution; it builds only the solution available in the adjacent possible. Said differently, evolution is pragmatic, not optimal. By this analogy, cognitive development is also pragmatic and not optimal.

In two recent talks, Yann LeCun and Josh Tenenbaum both brought up slides that document the cognitive development of human infants.

Yann LeCun’s slide
Josh Tenenbaum’s slide (ICML 2018)

It is telling that both Yann LeCun and Josh Tenenbaum agree on the importance of human infant development in the context of AI. There are good reasons to believe that we need to use infant development as inspiration: infants obviously learn in a way that is very different from our current Deep Learning architectures, and a more advanced AI must be able to replicate this capability.

The current problem with Deep Learning is that, despite its advances, its cognition does not appear to be anywhere close to human or even biological cognition. I wrote about this previously in my exploration of human visual perception. My conjecture is that anomalies in human cognition, manifested in cognitive biases and sensory illusions, can give us unique hints as to how humans think. These hints can then be leveraged to understand how an AGI may be built.

Admittedly, this method is very “cargo cult” like: the hope that by merely imitating the outward observations of a system, we could conjure up the same system. The inhabitants of South Pacific islands were unable to replicate an airborne logistical system by building planes out of husks. Why, then, should the approach described here work any better?

The difference is that the islanders had no access to a generative technology such as Deep Learning. They had no technology at their disposal that allowed them to fill in the blanks, nothing that substituted for the capabilities they lacked. Today’s Deep Learning researchers do have such a technology: one wherein only the boundaries of a problem need to be sketched, and through a kind of alchemy a system is trained.
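To illustrate what “sketching the boundaries” means: in Deep Learning we specify only example inputs, example outputs, and a loss, and optimization fills in the mapping. A minimal sketch, where the task and architecture are arbitrary choices of mine:

```python
# Only the boundary of the problem is specified: examples and a
# loss. Gradient descent "fills in the blanks" of the mapping.
import torch
import torch.nn as nn

x = torch.randn(256, 2)                            # example inputs
y = (x[:, 0] * x[:, 1] > 0).float().unsqueeze(1)   # example outputs

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):       # training conjures the input-output rule
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```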

This is of course much easier said than done. What exactly are the principles of Deep Learning? How does evolution lead to innovation? What are the principles of biological learning? How is human learning different?

Here is a very insightful debate at MIT’s Center for Brains, Minds, and Machines that gives you a feel for the complexity of this problem:

You will find in the above debate a lot of disagreement about the nature of cognition. Some debaters can’t even agree on whether the brain is analog or digital!

Now, if the path towards AGI were to mirror the path of human infant development, would not an AGI acquire the same kinds of problematic and destructive traits that many humans possess? I have reason to believe, however, that an AGI would have mostly human-beneficial traits. I’ve explored this in some detail in “Human Personalities and Learning Strategies,” where I argue that the beneficial traits are those that value exploration over exploitation. A human-beneficial AGI is one that explores options exhaustively. This reminds me of the movie “WarGames,” where the W.O.P.R. machine learned the concept of an unwinnable game through the massive exploration of tic-tac-toe.
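As a toy illustration of exhaustive exploration, here is a minimal sketch (my own, not anything from the film) that searches the entire tic-tac-toe game tree with minimax and confirms what the W.O.P.R. discovered: perfect play by both sides always ends in a draw.

```python
# Exhaustively explore the tic-tac-toe game tree to show that
# perfect play by both sides always ends in a draw.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best outcome for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                            # full board: a draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '
    return max(scores) if player == 'X' else min(scores)

print({1: 'X wins', 0: 'draw', -1: 'O wins'}[minimax([' '] * 9, 'X')])
# prints: draw. The only winning move is not to play.
```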

See Subjective Self for AGI

Further Reading

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution
Exploit Deep Learning: The Deep Learning AI Playbook
