Embodied Learning is Essential to Artificial Intelligence

Carlos E. Perez
Dec 12, 2017

Jeff Hawkins has a principle that intuitively makes a lot of sense, yet is one that Deep Learning research has not emphasized enough: the notion of embodied learning. Hawkins believes that the brain learns by interacting with its environment; that is, biological systems build their understanding of the world through interaction with it.

The classic Deep Learning training procedure is one of the crudest teaching methods one can possibly imagine: it repetitively and randomly presents facts about the world and hopes that the student (i.e., the neural network) is able to disentangle them and build sufficient abstractions of the world.

One should at least be able to do better with a curriculum. That is, present training data that starts easy and then scale up to progressively more difficult training. We actually see curriculum learning used effectively in the latest StackGAN architectures, where smaller problems are tackled first and the network is incrementally resized to tackle larger ones.
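
To make the idea concrete, here is a minimal sketch of what such a schedule might look like; the dataset, the difficulty score, and the model.update step are placeholders I am introducing for illustration, not any particular system's API.

    import random

    def curriculum_train(model, dataset, difficulty, epochs=10):
        # dataset: list of (x, y) examples; difficulty(example) -> float score.
        ordered = sorted(dataset, key=difficulty)        # easiest examples first
        for epoch in range(epochs):
            # Unlock a growing fraction of the curriculum each epoch.
            cutoff = max(1, int(len(ordered) * (epoch + 1) / epochs))
            pool = list(ordered[:cutoff])
            random.shuffle(pool)                         # still randomize within the unlocked pool
            for x, y in pool:
                model.update(x, y)                       # placeholder for one training step
        return model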

However, biological beings learn quickest when they are allowed to interact with the environment. In other words, rather than having a teacher who defines a rigid curriculum, one lets the student drive their own exploration of the teaching material. There is no better way to learn a new subject than to let the student interact with it and discover its responses.

This is exactly what we see in the advances in cognition that DeepMind has exhibited with its AlphaGo Zero and AlphaZero game-playing machines. If you can set up a teaching environment that adjusts to the capabilities of the student, then the student can comfortably walk up a staircase toward richer understanding.
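
Self-play is the purest instance of such a self-adjusting environment: the opponent is, by construction, always exactly as strong as the student. A rough sketch of that loop, with the agent, game, and learning routines as placeholders rather than DeepMind's actual implementation, might look like this:

    def self_play_training(agent, game, iterations=1000, games_per_iteration=100):
        # The agent plays both sides, so the difficulty of its opposition
        # automatically tracks its own ability.
        for _ in range(iterations):
            replay_buffer = []
            for _ in range(games_per_iteration):
                trajectory = game.play(agent, agent)   # placeholder: states, moves, final outcome
                replay_buffer.extend(trajectory)
            agent.learn(replay_buffer)                 # placeholder policy/value update
        return agent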

Judea Pearl has written a paper that explores the theoretical impediments to the current form of machine learning.

Here Pearl presents a categorization of progressively higher forms of learning from an environment. Conventional machine learning is stuck at the first level. Reinforcement learning explores the second level. Pearl proposes a third level that involves a cognitive process reminiscent of a Gedankenexperiment (known in English as a 'thought experiment').
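
To see the gap between the first two levels, consider a toy structural causal model of my own (not a figure from Pearl's paper): a hidden cause Z drives both X and Y, so observing X is informative about Y, while forcing X by intervention is not. Counterfactuals, the third level, would additionally require asking what Y would have been for the same individual under a different X.

    import random

    def sample(intervene_x=None):
        # Hidden common cause Z drives both X and Y; do(X=x) severs the Z -> X link.
        z = random.random() < 0.5
        x = z if intervene_x is None else intervene_x
        y = z
        return x, y

    def estimate_p_y(condition_x=None, intervene_x=None, n=100_000):
        hits = total = 0
        for _ in range(n):
            x, y = sample(intervene_x=intervene_x)
            if condition_x is None or x == condition_x:
                total += 1
                hits += y
        return hits / total

    print("P(Y=1 | X=1)     =", estimate_p_y(condition_x=True))   # ~1.0: observing X reveals Z
    print("P(Y=1 | do(X=1)) =", estimate_p_y(intervene_x=True))   # ~0.5: forcing X says nothing about Z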

Pearl argues for the development of counterfactual reasoning as the most advanced form of cognition.

Pearl explains why induction-only machine learning systems are incapable of reasoning about actions, experiments, and explanations. In short, machines that go beyond induction require a mechanism that can perform imaginative experiments to assess the ramifications of different situations. This is reminiscent of the tree search used in game-playing AI.
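
A minimal sketch of that kind of imaginative experiment, assuming we have some forward model of the environment (the model, scoring function, and action set below are all placeholders):

    def plan(state, model, actions, depth=3):
        # "Imagine" the consequences of each candidate action with a forward
        # model of the environment before committing to one.
        def value(s, d):
            if d == 0:
                return model.score(s)                  # placeholder heuristic evaluation
            return max(value(model.step(s, a), d - 1) for a in actions)
        return max(actions, key=lambda a: value(model.step(state, a), depth - 1))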

Pearl makes this case in the conclusion of his paper.

Ann Pendleton-Jullian and John Seely Brown have a book “Pragmatic Imagination” which explores the entire spectrum from perception to free play:

[Figure: the spectrum from perception to free play. Source: Pragmatic Imagination]

As the figure above illustrates, the inductive inference found in Deep Learning sits among the more primitive instances of cognition. There is a large spectrum of cognition to traverse on the way to higher intelligence, and the far end of that spectrum involves a great deal of imagination and creativity.

At a workshop at the recently concluded NIPS 2017, Valentin Thomas and colleagues presented a paper, "Disentangling the independently controllable factors of variation by interacting with the world," that takes this idea of embodied learning further.

The team devises a new kind of objective function that is capable of disentangling aspects of an environment without the need for an extrinsic reward. They report that "Pushing representations to model independently controllable features currently yields some encouraging success." This approach addresses only the second level in Pearl's classification, so Deep Learning researchers still have a ways to go!
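
Very roughly, the flavor of their idea is that each latent factor gets its own policy that is rewarded for changing that factor while leaving the others untouched. The sketch below is my heavily simplified paraphrase, not the paper's exact formulation; all names are placeholders.

    def selectivity_reward(factors_before, factors_after, k, eps=1e-8):
        # Reward policy k only if latent factor k changed while the others stayed put.
        changes = [abs(after - before) for before, after in zip(factors_before, factors_after)]
        return changes[k] / (sum(changes) + eps)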

DeepMind recently published a position paper ("Building machines that learn and think for themselves") that argues for autonomous machines.

DeepMind observes that the approach argued for by Lake et al. is "agnostic" to the use of human-engineered "innate cognitive machinery". DeepMind argues that the forms of a priori knowledge used to develop intelligent machines should be kept to a minimum. Machines should be able to learn about ambiguous as well as complex domains where prior knowledge is minimal or difficult to capture a priori. Furthermore, machines should have the adaptability to handle tasks in contexts related to their previous training. Finally, an autonomous system should have good models as well as the ability to create new models.

DeepMind argues for greater dependence on model-free methods.

This is the opposite of Lake and colleagues' approach: DeepMind argues for intuition machinery as the substrate, as opposed to the GOFAI approach in which a model-based approach forms the substrate. How model-based and model-free approaches coordinate is described in my earlier article, "The Coordination of Rational and Intuitive Intelligence."

Human a priori knowledge can be used to drive development through the design of environments that grow (or teach) innate cognitive machinery. The next evolutionary step will be understanding how to develop these learning environments for cognitive machinery. The current conventional thinking asks, "How do I design better architectures and algorithms?" The more promising question, however, is "How do I design better learning environments to teach intuition machines?"
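
One entirely hypothetical reading of that question: wrap the task so that its difficulty tracks the learner's measured success rate rather than following a fixed schedule. Everything below is a placeholder sketch, not any existing library's API.

    class AdaptiveEnvironment:
        # Wraps a task so that its difficulty tracks the learner's success rate.
        def __init__(self, env, target_success=0.7, step=0.05, window=50):
            self.env = env
            self.difficulty = 0.1
            self.target_success = target_success
            self.step = step
            self.window = window
            self.recent = []

        def run_episode(self, agent):
            success = self.env.run(agent, difficulty=self.difficulty)  # placeholder rollout
            self.recent = (self.recent + [success])[-self.window:]
            rate = sum(self.recent) / len(self.recent)
            if rate > self.target_success:
                self.difficulty = min(1.0, self.difficulty + self.step)   # comfortable: make it harder
            elif rate < self.target_success - 0.2:
                self.difficulty = max(0.0, self.difficulty - self.step)   # struggling: make it easier
            return success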

These learning environments should be designed to bring about richer counterfactual thinking; it is through this mechanism that we can eventually create the adaptive general intelligence that we seek.

Further Reading

How Embodied is Cognition

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution
Exploit Deep Learning: The Deep Learning AI Playbook
