The Mind and The Body: Towards True Artificial Intelligence

A take on the future of artificial intelligence from the viewpoint of cognitive sciences.

Let’s start the discussion with a simple question: is it possible to simulate the human brain without simulating the human body, and without any notion of that body being embodied in the real world?

Embodied cognition is the idea that our cognition is distributed across the mind, the body, and the environment in which the body exists. It implies that the mind is not the only cognitive unit humans have, and that the body is just as responsible for many of the cognitive responses we form.

Consider the following scenario. Suppose we are given the ability to develop an agent that can perform all the tasks a normal human being can, with the same cognitive and physical abilities. The one fault with this ‘human’ agent is that it cannot feel pain. The question, then, is: is it in any way possible for this agent to understand the ‘true’ meaning of pain?

The answer would be no. Any explanation or understanding it could have of pain would be a secondary interpretation based on outside inputs, with no absolute meaning attached to it.

I would like to extend this analogy to a broader question: is it possible to simulate all the cognitive aspects of the human brain on a computer without assigning it a corresponding body? A large part of our brain relies on the body to interact directly with the environment, and emotions like pain, fear, relief, and happiness are manifested by the body before they even reach the brain.

As Hubert Dreyfus says, the transition from merely competent behaviour to expert behaviour requires “being in the world” through having a body embedded in the world.

The important question is: what form should the body associated with a simulated artificial brain take? A human body is a complex biological structure, densely connected to the brain through the nervous system.

If there were to be an artificial counterpart to this symbiosis, what would be the structure and complexity of such a design?

What kind of interaction would this structure have with the environment in order to perform all the cognitive tasks?

What would be the form of this environment? Would it be similar to the one humans interact with, or would it be artificial?

These are some of the questions that make this problem difficult even to pose, let alone solve.

To truly achieve human-like intelligence, artificial agents need the opportunity to learn directly from their interaction with the environment. For humans, this interaction takes place through the body, and hence the body becomes an integral part of the learning process.

But when it comes to artificial systems, be they self-driving cars, autonomous drones, or robots, the focus remains on training only the ‘brain’; the ‘body’ just follows instructions without getting the chance to do anything ‘intelligent’. The spirit of embodied cognition lies in the fact that the state of the body affects the state of the mind. In the future, we need to move towards systems that embrace this spirit: systems whose distributed cognitive resources go through a unified learning process will be more intelligent than today’s artificial systems.
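To make this concrete, here is a minimal sketch in Python of what such a unified learner might look like. The function names, dimensions, and the linear policy are illustrative assumptions rather than an established design; the point is only that proprioceptive body signals enter the same observation, and the same learning process, as external sensor data.

```python
import numpy as np

def observe(camera, joint_angles, contact_forces):
    # One observation vector spanning the environment *and* the body:
    # the learner cannot treat the body as a mere instruction-follower.
    return np.concatenate([camera.ravel(), joint_angles, contact_forces])

def policy(obs, weights):
    # Placeholder linear policy (an assumption for illustration); any
    # learner substituted here adapts to body state and pixels jointly.
    return np.tanh(weights @ obs)

camera = np.random.rand(8, 8)        # exteroceptive input: a tiny image
joint_angles = np.random.rand(6)     # proprioceptive: body configuration
contact_forces = np.random.rand(4)   # proprioceptive: body-world contact

obs = observe(camera, joint_angles, contact_forces)
weights = np.random.randn(6, obs.size) * 0.01
action = policy(obs, weights)        # six motor commands
```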

As Rolf Pfeifer, Max Lungarella, and Fumiya Iida have written:

“An embodied perspective, because it distributes control and processing to all aspects of the agent (its central nervous system, the material properties of its musculoskeletal system, the sensor morphology, and the interaction with the environment), provides an alternative avenue for tackling the challenges faced by robotics. The tasks performed by the controller in the classical approach are now partially taken over by morphology and materials in a process of self-organization. . . ”

The motivation for embodied cognition comes from inherent shortcomings of the classic cognitivist models, which tend to overlook two important factors needed to understand cognitive development:

  1. The exact way organisms are embodied.
  2. The manner in which this embodied form simultaneously interacts with the environment.

Another assumption embodied cognition theorists make is that the form of an organism’s embodiment prescribes the kinds of cognitive processes available to it.

In an interesting study by Yale psychologist John Bargh, participants holding warm, as opposed to cold, cups of coffee were more likely to judge a confederate as trustworthy after only a brief interaction. Similar studies reinforce the claims of embodied cognition theorists:
- Thinking about the future caused participants to lean slightly forward, while thinking about the past caused them to lean slightly backwards. Future is Ahead.
- Squeezing a soft ball influenced subjects to perceive gender-neutral faces as female, while squeezing a hard ball influenced subjects to perceive them as male. Female is Soft.

As Pfeifer says, as the notion of ‘embodied artificial intelligence’ grows, artificial intelligence, initially a computational discipline dominated by computer science, cognitive psychology, and linguistics, has turned into a multi-disciplinary field requiring the support of biology, neuroscience, robotics, biomechanics, material science, and dynamical systems.

Pfeifer goes on to discuss one of the major challenges in this regard: ecological balance, the requirement that the complexity of the sensory, motor, and neural control systems should match. Most robotic systems are “unbalanced” in the sense that they are built of hard materials and electric motors, and thus their control requires an enormous amount of computation. This is a remarkable observation: it means that most artificial systems, lacking a sophisticated sensory-motor system, need a high level of computation to meet their control requirements.

For example, unlike artificial vision systems, the retinas in human eyes perform an enormous amount of computation right at the periphery, so that the signals passed on are already highly processed.
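As an illustration, here is a minimal sketch of such peripheral preprocessing, assuming a centre-surround (difference-of-Gaussians) filter, a crude conventional stand-in for retinal ganglion cells rather than a faithful model of the retina. The “sensor” itself compresses raw pixels into contrast signals before anything downstream sees them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
    # Difference of Gaussians: strong response at edges, near zero in
    # flat regions, so what is passed on is contrast, not raw intensity.
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    return center - surround

raw = np.random.rand(64, 64)      # raw "photoreceptor" input
signal = center_surround(raw)     # the already-processed signal passed on
```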

To achieve this kind of seamless flow of information between different cognitive units, we need to build good models that quantify morphological and material properties in the same currency in which we quantify sensory and motor data.

Currently, artificial systems are trained for particular tasks, with data and instructions ‘fed’ to them. This kind of learning is forced, and although such systems seem to work well in most cases, when they fail, they fail badly, ungracefully, and seemingly unpredictably. I believe a powerful alternative is to build artificial agents that are given no specific task at ‘birth’ and are instead allowed to explore the environment and learn without a task at hand. Once they gain awareness of the environment, they can be asked to perform elementary tasks of gradually increasing complexity. Finally, when an agent has gained enough knowledge about its environment and about how it is embodied in that environment, it can be tuned for the desired task. This kind of hierarchical learning is more natural and general-purpose.
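A minimal sketch of this staged training, using a toy grid world and assumed names: phase one has no external task and is driven purely by a count-based novelty signal, one simple form of intrinsic motivation; only afterwards is an external task defined.

```python
import numpy as np

class GridWorld:
    # Toy environment: an agent moving on a 10x10 grid.
    MOVES = np.array([[0, 1], [0, -1], [-1, 0], [1, 0]])

    def __init__(self):
        self.pos = np.zeros(2, dtype=int)

    def peek(self, action):
        # Where would this action lead? Used for the novelty-greedy choice.
        return tuple(np.clip(self.pos + self.MOVES[action], 0, 9))

    def step(self, action):
        self.pos = np.clip(self.pos + self.MOVES[action], 0, 9)
        return tuple(self.pos)

env = GridWorld()
visits = {tuple(env.pos): 1}

# Phase 1: no task at 'birth'. Count-based exploration: always move toward
# the least-visited neighbouring state, mapping the world for its own sake.
for _ in range(2000):
    action = min(range(4), key=lambda a: visits.get(env.peek(a), 0))
    state = env.step(action)
    visits[state] = visits.get(state, 0) + 1

# Phase 2: only now is an external task introduced (reaching a goal cell);
# the visit statistics from phase 1 can seed its policy or value estimates.
goal = (9, 9)
task_reward = lambda s: 1.0 if s == goal else 0.0
```

A real system would replace the greedy rule with a learned policy, but the ordering is the point: explore first, specialise later.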

The idea of ‘embodied cognition’ has been around for quite some time in the artificial intelligence community, but we are still far from a simulated system that comes close to humans in its balance of sensory-motor and neural control. To achieve complete embodied cognition, we will need to push the boundaries of existing science and technology.

For example, understanding and controlling the complex dynamics of genetic regulatory networks will require a great deal of research.

Secondly, physics-based simulation models need to be augmented to support more complex agent-environment interactions.

Thirdly, deformable, flexible materials, additional sensors serving as substitutes for ‘skins’, olfactory sensors, and artificial muscles must all be accounted for.

The above list can be extended further. The grand challenge in the broader scope, however, is to find a way to ‘grow’ systems instead of building them. Biological systems grow in the real world by means of cell division and multiplication. This is the fundamental quality that separates us from things artificial, and perhaps also the elementary reason that we are capable of self-learning. Developing growing structures is therefore a great challenge, one that will require the cooperation of material scientists, nanotechnology experts, and biologists.

Going forward, it would be unfair to say that artificial intelligence is a domain specific to computer scientists. In the quest to unlock the secrets of the human mind, we will need to understand its symbiosis with the human body, and that will require a joint effort from researchers across the broad spectrum of engineering and the sciences.