I will make the case that the road to AGI must be paved with stepping-stone capabilities, and that these capabilities should be inspired by human infant cognitive development.
Several principles that I've discussed previously bolster this case.
The first principle is that new skills must be acquired through "embodied learning." Since it is practically impossible to write out specifications for even the simplest of tasks (see Moravec's paradox), the only viable mechanism we have for training complex cognitive machines is interaction inside a virtual environment. To develop intuition about an environment, an agent must first interact with that environment. This interaction grounds the agent's model of self with respect to the environment. Absent this grounding there is no concept of meaning, and without meaning there is no common sense: no sense of what is possible and impossible within an environment, and no way to judge which actions are feasible or infeasible.
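To make the grounding idea concrete, here is a minimal sketch of an agent discovering which actions are feasible purely by acting. The toy one-dimensional environment and all names in it (`GridWorld`, `learn_affordances`) are made up for illustration, not from any library:

```python
import random

class GridWorld:
    """A toy one-dimensional world: the agent occupies a cell in [0, size - 1]."""
    def __init__(self, size=3):
        self.size = size
        self.pos = 0

    def step(self, action):
        """action is -1 (move left) or +1 (move right); returns (position, feasible)."""
        new_pos = self.pos + action
        if 0 <= new_pos < self.size:
            self.pos = new_pos
            return self.pos, True
        return self.pos, False  # bumping into a wall: the action was infeasible

def learn_affordances(env, episodes=200):
    """Discover, through interaction alone, which actions work in which states."""
    feasible = {}  # (position, action) -> observed feasibility
    for _ in range(episodes):
        state = env.pos
        action = random.choice([-1, 1])
        _, ok = env.step(action)
        feasible[(state, action)] = ok
    return feasible

random.seed(0)  # fixed seed so the random exploration is reproducible
model = learn_affordances(GridWorld())
# The agent now holds grounded knowledge it was never told, e.g. that
# moving left from the leftmost cell is impossible.
```

Nothing in the environment's rules was given to the learner as a specification; the model of what is feasible emerges entirely from interaction, which is the point of the principle.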
Remarkable strides have been made when a system is first trained on the basic skill of "ego-motion." In these experiments, a system is first trained to be aware of its own physical self, and from there evolves higher capabilities such as object attention and then object interaction. There is solid evidence that certain skills are good prerequisites for other skills. This is intuitively obvious; what is not obvious is how to sequence the development of these skills.
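The sequencing problem can at least be stated precisely: if we knew the prerequisite relationships between skills, ordering a curriculum would reduce to a topological sort over the skill graph. The graph below is purely hypothetical (the skills and edges are illustrative); the open research question is discovering such a graph, not sorting it:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical skill graph: each skill maps to the set of its prerequisites.
# The skills and their ordering here are illustrative, not an established curriculum.
prerequisites = {
    "ego-motion": set(),
    "object attention": {"ego-motion"},
    "object interaction": {"object attention"},
    "tool use": {"object interaction"},
}

# A valid training order places every prerequisite before what depends on it.
curriculum = list(TopologicalSorter(prerequisites).static_order())
print(curriculum)
```

For this linear chain the order is unique; a realistic skill graph would admit many valid orderings, and choosing among them is precisely the hard, unsolved part.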
The complexities of evolution also give us hints that the above is correct. Evolution makes use of what already exists rather than re-inventing; it prefers reuse. To achieve higher complexity, it begins from the current state of possibilities and generates new possibilities only from existing components. Evolutionary innovation proceeds through reuse, something I have discussed in more detail previously. If we assume that cognitive learning employs common (or perhaps identical) mechanisms to evolution, then what we consider cognitive skills can be viewed as structural building blocks to be reused for higher cognition. What is not entirely obvious is how one skill serves as a building block for another. It is important to remember, however, that evolution does not build the optimal solution; it builds only the solution that is available in the adjacent possible. Put differently, evolution is pragmatic, not optimal. By the same analogy, cognitive development is also pragmatic, not optimal.
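The "adjacent possible" has a simple computational reading: at each step, the only reachable novelties are combinations of structures already in the repertoire. A minimal sketch of one such reuse step (the skill names are invented for illustration):

```python
from itertools import combinations

def expand_adjacent_possible(repertoire):
    """One innovation step: new structures arise only by combining
    (reusing) pairs of structures already in the repertoire."""
    new = set()
    for a, b in combinations(repertoire, 2):
        candidate = a | b  # reuse: the union of existing building blocks
        if candidate not in repertoire:
            new.add(candidate)
    return repertoire | new

# Start from two primitive skills; one step reaches only their combination.
step0 = {frozenset({"reach"}), frozenset({"grasp"})}
step1 = expand_adjacent_possible(step0)
# step1 now contains the composite skill {"reach", "grasp"} as well.
```

Note what the sketch cannot do: it never jumps to a structure with no parents in the current repertoire. That is the sense in which the process is pragmatic rather than optimal.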
In two recent talks, Yann LeCun and Josh Tenenbaum each presented slides documenting the cognitive development of human infants.
That both LeCun and Tenenbaum stress human infant development in the context of AI is telling. There are good reasons to believe we need to use infant development as inspiration: infants are obviously learning in a way that is very different from our current Deep Learning architectures, and more advanced AI must be able to replicate this capability.
The current problem with Deep Learning is that, despite its advances, its cognition does not appear to be anywhere close to human or even biological cognition. I wrote about this previously in my exploration of human visual perception. My conjecture is that anomalies in human cognition, manifested as cognitive biases and sensory illusions, give us unique hints as to how humans think. These hints can then be leveraged to understand how an AGI might be built.
Admittedly, this method is very 'cargo cult'-like: imitate various observations of the system we are trying to replicate and hope to conjure up the same system. The inhabitants of South Pacific islands were unable to replicate an airborne logistics system by building planes out of husks. Why, then, should the approach described here work at all?
The difference is that the islanders had no access to a generative technology such as Deep Learning. They had no technology at their disposal that allowed them to fill in the blanks, nothing to substitute for the cognitive capabilities they lacked. Today's Deep Learning researchers do have such a technology: one in which only the boundaries of a problem need to be sketched, and through a kind of alchemy a system is trained.
This is of course much easier said than done. What exactly are the principles of Deep Learning? How does evolution lead to innovation? What are the principles of biological learning? How is human learning different?
Here is a very insightful debate at MIT's Center for Brains, Minds, and Machines that gives a feel for the complexity of this problem:
You will find in the above debate a lot of disagreement about the nature of cognition. Some debaters can’t even agree on whether the brain is analog or digital!
Now, if the path towards AGI mirrors the path of human infant development, wouldn't an AGI acquire the same kinds of problematic and destructive traits that many humans possess? I have reason to believe, however, that an AGI would have mostly human-beneficial traits. I've explored this in some detail in "Human Personalities and Learning Strategies," where I argue that the beneficial traits are those that value exploration over exploitation. A human-beneficial AGI is one that explores options exhaustively. This reminds me of the movie "WarGames," where the W.O.P.R. machine learned the concept of an unwinnable game through massive exploration, playing out tic-tac-toe games that cannot be won.
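The exploration/exploitation tradeoff has a standard minimal formalization, the epsilon-greedy multi-armed bandit. The sketch below (all names and parameter values are illustrative) shows the property I'm appealing to: an agent that keeps exploring ends up with accurate value estimates for every option, not just the one that looked best early on:

```python
import random

def epsilon_greedy(estimates, epsilon):
    """With probability epsilon, explore a random option; otherwise exploit the best known."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

def run_bandit(true_means, epsilon, steps=5000, seed=0):
    """Play a Gaussian multi-armed bandit and return the learned value estimates."""
    random.seed(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(steps):
        arm = epsilon_greedy(estimates, epsilon)
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

# With a healthy exploration rate, the agent correctly identifies the best
# option even though all options start out looking identical.
estimates = run_bandit(true_means=[0.2, 0.5, 0.9], epsilon=0.3)
```

With epsilon at zero, the same agent would lock onto whichever arm its first few rewards favored; that premature commitment is the "exploitation-heavy" failure mode the argument above wants an AGI to avoid.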
Further reading:

- [1808.09352] Evaluating Theory of Mind in Question Answering. "We propose a new dataset for evaluating question answering models with respect to their capacity to reason…"
- [1707.03389] SCAN: Learning Hierarchical Compositional Visual Concepts. "The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules…"
- What is a cognitive map? Organising knowledge for flexible behaviour. "It is proposed that a cognitive map encoding the relationships between entities in the world supports flexible…"
- [1805.11593] Observe and Look Further: Achieving Consistent Performance on Atari. "Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail…"
- [1807.03392] Evolving Multimodal Robot Behavior via Many Stepping Stones with the Combinatorial… "An important challenge in reinforcement learning, including evolutionary robotics, is to solve multimodal…"
- [1809.11087] Learning to Remember, Forget and Ignore using Attention Control in Memory. "Typical neural networks with external memory do not effectively separate capacity for episodic and working…"
- Babies Are Not Blank Slates. "Exquisitely shot and hopeful-without-being-sugary, the film focuses on the day-to-day lives of babies and parents and…"
- A new developmental reinforcement learning approach for sensorimotor space enlargement (Ingrid Fadelli, Tech Xplore, Phys.org, October 10, 2018). "Researchers at the University of Lorraine have recently…"
- [1308.2124] Space as an invention of biological organisms. "The question of the nature of space around us has occupied thinkers since the dawn of humanity, with…"
- [1806.02739] Discovering space - Grounding spatial topology and metric regularity in a naive… "In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from…"
- [1804.01128] Probing Physics Knowledge Using Tools from Developmental Psychology. "In order to build agents with a rich understanding of their environment, one key objective is to endow them…"
- The Theoretical and Methodological Opportunities Afforded by Guided Play With Young Children. "For infants and young children, learning takes place all the time and everywhere. How children learn best…"