Three Cognitive Dimensions for Tracking Deep Learning Progress

Carlos E. Perez
Published in Intuition Machine
Sep 4, 2017

Earlier, I brought up Howard Gardner’s theory of multiple intelligences. That is, humans exhibit strengths in different kinds of intelligence: specifically, interpersonal, intrapersonal, verbal, logical, spatial, rhythmic, naturalistic, and kinesthetic intelligence. Clearly there are many ways of thinking, each with its own strengths. One may therefore ask whether we can use this notion of multiple intelligences to explore the different ways that AGI research may evolve.

A common unexamined assumption is that the evolution of AGI, that is, self-aware sentient automation, will follow the path of ever more intelligent machines and thus accelerate toward a super-intelligence once human-level sentient automation is created. I argue that this will likely not be the case, and that there will instead be an initial divergence in research on three kinds of artificial general intelligence.

A recent research paper titled “The Morphospace of Consciousness” by Arsiwalla et al. presents three distinct dimensions along which to explore consciousness: autonomous, computational, and social. The autonomous dimension reflects the adaptive intelligence found in biological organisms. The computational dimension involves the recognition, planning, and decision-making capabilities that we find in computers as well as in humans; more specifically, intelligence related to performing deductive inference. The third is the social dimension, which involves the tools required for interacting with other agents, including language, conventions, and culture.

The authors examine various technologies and show how they can be placed in this three-dimensional space:

Source: https://arxiv.org/abs/1705.11190
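To make the morphospace concrete, here is a minimal Python sketch that treats each technology as a point with coordinates along the three dimensions. The systems and coordinate values below are illustrative guesses of mine, not numbers taken from the Arsiwalla et al. paper.

```python
from dataclasses import dataclass

@dataclass
class MorphospacePoint:
    """A system located along the three dimensions of the morphospace."""
    name: str
    autonomous: float     # adaptive, self-maintaining behavior (0..1)
    computational: float  # recognition, planning, deductive inference (0..1)
    social: float         # language, conventions, interaction with agents (0..1)

# Illustrative placements only; the paper's own figure positions differ.
systems = [
    MorphospacePoint("thermostat", 0.10, 0.05, 0.00),
    MorphospacePoint("AlphaGo",    0.10, 0.90, 0.00),
    MorphospacePoint("honeybee",   0.70, 0.20, 0.40),
    MorphospacePoint("human",      0.90, 0.80, 0.90),
]

for s in systems:
    print(f"{s.name:10s} autonomous={s.autonomous:.2f} "
          f"computational={s.computational:.2f} social={s.social:.2f}")
```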

One can’t fail to notice the alignment here with Gardner’s multiple intelligences. The kinesthetic, rhythmic, naturalistic, and interpersonal intelligences align with the autonomous dimension. The visual-spatial and logical intelligences align with the computational dimension. Finally, the verbal and intrapersonal intelligences align with the social dimension. The fit is not perfect; nevertheless, it is an excellent foundation for examining the development of Deep Learning research. One thing that is apparent from much of the research presented in this book is that the Deep Learning approach appears to be applicable in all three dimensions.

From the perspective of technological progress, we can therefore project three themes for future development and progress. The first theme builds super-human narrow intelligence; this is the computational dimension. The second theme focuses on more adaptable, biologically inspired automation; this is the autonomous dimension. The third theme revolves around intelligence used to navigate social interactions effectively; this is the social dimension.

In the first dimension, we will see continued specialization of machines to solve specific narrow problems. DeepMind’s AlphaGo is a representative example of this kind of machine: one highly engineered to solve a specific problem well, and to do so in a manner that is super-human. AlphaGo combines Deep Learning, Monte Carlo Tree Search, and Reinforcement Learning to solve the ancient game of Go, a game where progress toward more advanced play was akin to reaching a higher level of consciousness.
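To give a feel for how the learned network and the tree search fit together, here is a toy sketch of a PUCT-style selection rule of the kind AlphaGo-like systems use to decide which move to explore next: a policy prior from a deep network biases exploration, while averaged value estimates drive exploitation. The move statistics and the constant below are invented for illustration; DeepMind’s actual implementation is far more elaborate.

```python
import math

def puct_score(q, prior, parent_visits, visits, c_puct=1.5):
    """Average value plus an exploration bonus scaled by the network prior."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

# Toy statistics for three candidate moves at one node of the search tree.
moves = {
    "a": {"q": 0.52, "prior": 0.60, "visits": 40},
    "b": {"q": 0.48, "prior": 0.30, "visits": 10},
    "c": {"q": 0.55, "prior": 0.10, "visits": 5},
}
parent_visits = sum(m["visits"] for m in moves.values())

best = max(moves, key=lambda k: puct_score(moves[k]["q"], moves[k]["prior"],
                                           parent_visits, moves[k]["visits"]))
print("selected move:", best)  # the search descends into this child next
```

The point is the division of labor: the deep network prunes the search, and the search corrects the network’s judgment.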

One thing the Western world is overlooking is that the dominant play of AlphaGo, an AI developed in Britain, was the equivalent of a Sputnik event for Asian nations. In reaction to this achievement, Asian nations are doubling down on A.I. investment so as not only to catch up, but perhaps to overtake the West in AI capabilities. The governments of the West do not realize what their citizens have invented, and only the keenest of the Internet giants are making the necessary effort to keep an edge.

This optimized-intelligence path will develop automation that works well in highly complex scientific and engineering domains. The automation will thrive in investigating extremely high-dimensional problem spaces. We already see this in the new deep learning methods used at research institutions like CERN (i.e., high-energy physics).

We can expect to see many new applications that combine conventional computer science algorithms with Deep Learning to achieve sophisticated narrow-intelligence applications. Self-driving cars and medical diagnosis are two areas where this will have a major impact. However, this approach will not require AGI, or rather, self-aware intelligence.

The second theme of development, one that moves in the direction of autonomous systems, will take a more biologically inspired approach. These are systems that will be much more adaptable than present-day inflexible A.I. Development in this space will likely be driven by robotics applications that require this kind of adaptability to an environment. However, as for many animals in the natural world, a human level of intelligence is not necessary for survival.

There is a common sentiment among Artificial General Intelligence (AGI) researchers that the research themes of Deep Learning have completely missed the big picture. This sentiment is well founded in that Deep Learning systems clearly lack the kind of adaptability we find in biological systems. Unfortunately, many AGI researchers see this existing limitation as evidence of being on the wrong path. Nothing could be further from the truth: Deep Learning is likely the correct starting point for AGI.

High-level intelligence is not necessary for survival. In fact, simple observation of the natural world shows that sentient forms of life don’t require super-intelligence. The current, incorrect bias is that as you progress toward increasing intelligence, sentient intelligence will emerge by default. That is, if the first branch above is taken, then we need only strive for more intelligent algorithms and we will accidentally stumble upon sentient intelligence. This is unlikely, because the mechanisms for survival don’t necessarily align with the mechanisms of intelligent machines. These adaptable systems don’t require the kind of high-dimensional or complex inference required in the first theme of development.

The interesting commonality of all the themes, though, is that intuition machines (a.k.a. Deep Learning automation) are employed as a valuable ingredient. The objective functions of the different cognitions, however, will likely be entirely different. The first theme will have more finely tuned and concrete objective functions; these systems will be highly optimized to perform tasks extremely efficiently. The second theme will be more exploratory, seeking diversity and interestingness; these systems will have implicit objective functions that are found through a discovery process, favoring adaptability over optimization. The third theme will require an objective function that is in some way derived from human behavior and ethics.
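To illustrate how different the first two kinds of objective are, here is a toy contrast between an explicit loss (theme one) and a novelty-style score that rewards unexplored behavior (theme two). The novelty formulation follows the spirit of Lehman and Stanley’s novelty search, which I use here as a stand-in; the argument above does not prescribe any specific mechanism.

```python
import numpy as np

def concrete_loss(prediction, target):
    """Theme 1: a fixed, explicit objective (here, mean squared error)."""
    return float(np.mean((prediction - target) ** 2))

def novelty_score(behavior, archive, k=2):
    """Theme 2: mean distance to the k nearest previously seen behaviors.
    High scores mean the behavior is unlike anything in the archive."""
    dists = sorted(np.linalg.norm(behavior - past) for past in archive)
    return float(np.mean(dists[:k]))

archive = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(concrete_loss(np.array([0.9, 0.1]), np.array([1.0, 0.0])))  # small is good
print(novelty_score(np.array([2.0, 2.0]), archive))               # large is good
```

The first function has a fixed target to hit; the second has no target at all, only a pressure to keep finding new behaviors, which is exactly the adaptability-over-optimization trade-off described above.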

As I will write in the last chapter, the first theme, the branch that favors optimization, will likely displace a vast number of workers. This is simply because current jobs are designed to be occupied by specialists, not generalists. This kind of narrow intelligence is already here today and will only get better. Therefore the onslaught of job-replacing automation will be unrelenting.

The second theme, adaptive intelligence, is in its infancy today. There isn’t as much research devoted to this area, because it is thought either to be too fanciful or not to address narrow, specialized applications. Funding in this area will continue to lag, and its progress may thus be slowed. However, one has to realize that achieving a sentient intelligence does not require super-intelligence or even human intelligence. One only needs to observe the capabilities of other biological life forms to realize that they are indeed self-aware. What this means, in the grand scheme of things, is that self-aware automation may arrive much sooner than anyone expects.

The third cognitive theme is one that proceeds along the social dimension. This is an empathic system that reacts to the behavior of its users, whether an individual or a group of individuals. Neil Lawrence describes such a system in “Living Together: Mind and Machine Intelligence”. He describes a System Zero in contrast to Kahneman’s intuitive System 1 and logical System 2. Lawrence writes:

I call it System Zero because relating it to a dual process model, it sits underneath the elephant, and therefore under the rider. It interacts with our subconscious and is not sufficiently embodied to be represented as an actor in our mental play of life. But nevertheless, it is there, effecting all our evolving story lines, and so pervasive that it is accommodating very many of our personal elephants at the same time.

Max Tegmark, in his book “Life 3.0”, distills the views of many thinkers on ethics into the following four principles:

Utilitarianism — Positive experiences should be maximized and suffering should be minimized.

Diversity — A diversity of positive experiences is better than many repetitions of the same experience, even if the latter is the most positive experience possible.

Autonomy — Conscious entities should have the freedom to pursue their own goals unless this conflicts with any of these four principles.

Legacy — Compatibility with scenarios that most of today’s humans view as happy, and incompatibility with scenarios that virtually all humans today view as terrible.

When you examine the above four principles, it is easy to recognize them as principles for a “cooperative protocol”. That is, they promote the collective survival and prosperity of a civilization while respecting the rights of its constituents as well as the beliefs of its ancestors. We shall see in a subsequent chapter the importance of social protocols to the problem of AGI safety.

Nick Bostrom has an “Orthogonality Thesis” that states:

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.

This portends that we should be wary of super-intelligence in that we cannot predict its goals. We argue that the automation of the future will be of three kinds. There will be a narrow specialist kind, whose goals will be well defined and therefore controllable. There will also be an adaptive generalist kind, whose goals are more malleable and thus less controllable. Finally, there will be a social intelligence kind whose full nature we have yet to understand. The observation of three cognitive dimensions leads to a more precise application of Bostrom’s orthogonality thesis. More precisely: there are different kinds of intelligences that have different kinds of goals.

The impact of each kind of intelligence will be different. The computational kind will bring about new cures in medicine, new scientific understanding, and more efficient, less wasteful processes. The autonomous kind will bring about greater conveniences such as self-driving automobiles, robotic caretakers in the workplace and in the home, and intuitive user interfaces. The third kind, the social kind, has obvious advantages with regard to advertising to the masses and managing social unrest.

The threats of each kind will also vary. The “paper clip” scenario is an example of the computational kind that consumes all resources. The SkyNet scenario is the self-aware kind that concludes humans are a threat to its own existence and takes appropriate action. The WALL-E and Matrix scenarios are examples of automation that takes over the care of humans’ daily lives. I prefer not to dwell too much on doomsday scenarios. However, this framework is a good way to track current progress in Deep Learning.

Update: DeepMind proposes autonomy as an important measure of intelligence.

Note to self: the dimensions are, more generally, computational, biological, and cooperative.

More coverage here: https://gumroad.com/products/WRbUs

Note: This is a revision of a previous article I wrote about the divergent future of Deep Learning technology. You will find many updates of my posts here in my new book “The Deep Learning Playbook”.
