Conversational Cognition: A New Measure for Artificial General Intelligence

Carlos E. Perez
Published in Intuition Machine
Feb 9, 2018

What’s wrong with the majority of research in Artificial Intelligence (AI), Deep Learning, and Artificial General Intelligence (AGI)?

First, let’s explore the latest research on “The social and cultural roots of whale and dolphin brains” published in Nature.

Encephalization, or brain expansion, underpins humans’ sophisticated social cognition, including language, joint attention, shared goals, teaching, consensus decision-making and empathy. These abilities promote and stabilize cooperative social interactions, and have allowed us to create a ‘cognitive’ or ‘cultural’ niche and colonize almost every terrestrial ecosystem. Cetaceans (whales and dolphins) also have exceptionally large and anatomically sophisticated brains. Here, by evaluating a comprehensive database of brain size, social structures and cultural behaviours across cetacean species, we ask whether cetacean brains are similarly associated with a marine cultural niche.

In summary, cetacean social and brain evolution represent a rare parallel to those in humans and other primates. We suggest that brain evolution in these orders has been driven largely by the challenges of managing and coordinating an information-rich social world. Although these challenges may increase with group size, it is not group size itself that imposes the challenges. In both primates and marine mammals, structured social organization is associated with higher levels of cooperation and a greater breadth of social behaviours. Thus, we propose reframing the evolutionary pressures that have led to encephalization and behavioural sophistication to focus on the challenges of coordination, cooperation, and ‘cultural’ or behavioural richness.

Cetaceans found in mid-sized social groups had the largest brains (in both absolute and relative terms), followed by those that form large communities (mega-pods); those predominantly found alone or in small groups had the smallest brains.

One of the unsolved problems of AGI research is our lack of understanding of the definition of “Generalization.” I’ve pointed this out in my previous writing: most of the definitions created for Generalization are incomplete. These definitions are either too narrow or, even worse, incorrect.

What I am going to suggest in this article is that our measure of intelligence is tied to our measure of social interaction.

As I write this, I am thinking of several thought-provoking titles for this:

  • Conversation Cognition in Intelligence Design
  • Deep Conversational Learning
  • Contextual Mechanics
  • The Conversational Principle of Intelligence
  • A New AGI Inspired Generalization

Each of these titles is equally good, but I will settle on the one I used for this article.

Ultimately, to build improved artificial intelligence, we need more automation that can generalize extremely well. Unfortunately, if you query researchers as to what Generalization means, they will give you a litany of overly simplistic definitions. How then can we possibly achieve AGI when we have such a shallow understanding of what Generalization means? In this article, I will propose an entirely new definition. I call this Conversational Cognition (I will have to find a more distinctive and striking name later!). From this perspective, we can draft an opinionated road map for how to achieve AGI.

To summarize, there are many proposed ways to characterize Generalization:

  • Error Response (Difference between training and test error; see the sketch after this list)
  • Sparsity of Model (Alternatively Simplicity of Model or Sherlock Holmes’ method)
  • Fidelity in Generating Models (Ability to generate realistic representations)
  • Effectiveness in ignoring nuisance variables (Ability to ignore the unimportant)
  • Compression (Ability to compress description)
  • Risk Minimization (Ability to predict and compensate for failure)
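
To make the first of these concrete, here is a minimal sketch of the “Error Response” measure in Python. It assumes scikit-learn is installed; the dataset and model are illustrative placeholders, not a recommendation.

```python
# Minimal sketch: "Error Response" generalization, measured as the gap
# between training error and held-out test error.
# Assumes scikit-learn is installed; dataset and model are illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

train_error = 1.0 - model.score(X_train, y_train)
test_error = 1.0 - model.score(X_test, y_test)

# A small gap suggests the model generalizes beyond its training data;
# a large gap suggests it has merely memorized the training set.
print(f"generalization gap: {test_error - train_error:.3f}")
```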

I’m proposing Conversational Cognition be added to this list.

An ecological approach to cognition is based on an autonomous system that learns by interacting with its environment. Generalization in this regard relates to how effectively an automaton can anticipate contextual changes in an environment and perform the required context switches to maintain high predictability. The focus is not just on recognizing chunks of ideas but also on recognizing the relationships of these chunks with other chunks. There is an added emphasis on recognizing and predicting the opportunities for a change in context.

This notion is a level of complexity higher than risk minimization. Risk minimization demands a predictive model that can function effectively in the presence of imperfect information about the world. This implies that an automaton’s internal models of reality must be able to accommodate vagueness and unknown information. However, that definition only alludes to the need to support context switching.

In realistic environments, a system must be able to adjust appropriately when a context changes. These environments are sometimes referred to as “non-stationary.” It is not enough to have models that capture the world in a single context. The most sophisticated form of generalization demands the ability to perform conversations.
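
As a toy illustration only (my own sketch, not an established algorithm), the skeleton of such context switching might look like this: an agent watches its recent prediction error and switches to, or spawns, a context-specific model when that error drifts upward.

```python
# Toy sketch of context switching in a non-stationary environment.
# The agent keeps one predictor per context and switches when its
# recent average prediction error drifts above a threshold.
# All names and thresholds here are illustrative, not an established API.
from collections import deque

class ContextSwitchingAgent:
    def __init__(self, window=20, threshold=1.0):
        self.models = {"default": lambda x: 0.0}   # context -> predictor
        self.active = "default"
        self.errors = deque(maxlen=window)         # recent prediction errors
        self.threshold = threshold

    def step(self, x, y):
        pred = self.models[self.active](x)
        self.errors.append(abs(y - pred))
        if sum(self.errors) / len(self.errors) > self.threshold:
            self._switch(x, y)                     # context has likely changed
        return pred

    def _switch(self, x, y):
        # Reuse whichever stored context best explains the latest
        # observation; if none fits, spawn a new one.
        best = min(self.models, key=lambda c: abs(y - self.models[c](x)))
        if abs(y - self.models[best](x)) > self.threshold:
            best = f"context-{len(self.models)}"
            self.models[best] = lambda x, y=y: y   # naive constant predictor
        self.active = best
        self.errors.clear()
```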

This conversation, however, is not confined to an inanimate environment with deterministic behavior. The conversation must also be available across the three dimensions of cognition: the computational, autonomous, and social dimensions. In the computational dimension, AlphaGo Zero’s self-play is an effective demonstration of adversarial agent play in a deterministic environment. In autonomous environments, we require models that can continue to exercise their capabilities across similar domains and, if necessary, adapt to context changes in their environments. An example of this would be biological systems adjusting their behavior in response to the cyclic changes of the seasons. The social environment will likely be the most sophisticated in that it may demand understanding the nuances of human behavior. This may include complex behavior such as deception, sarcasm, and negotiation.

The needs of social survival will also require the development of cooperative behavior. It is only by recognizing that skilled conversation is a necessary ingredient that we may achieve this. Effective prediction of an environment is insufficient to achieve cooperative behavior. Conversations require the cognitive capabilities of memory, conflict detection and resolution, analogy, generalization, and innovation. The development of language is a fundamental skill. However, languages are not static; they evolve. Moreover, the evolution of language happens only through conversations.

Effective conversation is a two-way street. It requires not only understanding an external party but also communicating an automaton’s inner model. This requires more than decompression; it requires appropriately contextualized communication that anticipates the cognitive capabilities of the other conversing entities. Good conversation requires good listening skills as well as the ability to assess the current knowledge of a participant and make the adjustments necessary to convey information that the participant can relate to. This is indeed an extremely complex requirement. However, if we are seeking Artificial General Intelligence, then it only makes sense that we accept a more complex measure of intelligence.

Conversation can also be considered a kind of game play. That is, an agent may have goals that demand it devise approaches to recruit other participants to aid in achieving those goals. This may seem to demand almost human-like capabilities. The question, therefore, is whether we can build primitives of cooperation that bring us closer to this kind of generalization.

Conversational Cognition is related to Cognitive Synergy. In cognitive synergy, different agents may pick up where a conversation left off.

Focusing on conversation is actually not a novel idea in the study of intelligent systems. However, this approach has recently taken a back seat to the development of effective machine learning techniques. I would, however, like to see a more serious exploration of this approach in the context of deep learning.

Primitives for cooperative agent systems have been studied extensively in the past. Winograd’s Speech Acts and FIPA are excellent starting points for identifying composable language elements with which to build more complex forms of cooperation. Furthermore, we can leverage our understanding of the Loosely Coupled Principle (LCP). LCP has at its core the assumption that the method nature is most likely to select is the one that requires the fewest assumptions (or preconditions).
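
To give a flavor of such composable primitives, here is a minimal sketch of a FIPA-ACL-style message. The performative names below are drawn from the actual FIPA ACL specification; the surrounding classes and the example dialogue are my own simplification.

```python
# Minimal sketch of FIPA-ACL-style messages. The performatives are real
# FIPA ACL performatives; the classes and dialogue are illustrative only.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Performative(Enum):
    INFORM = "inform"      # assert a fact to the receiver
    REQUEST = "request"    # ask the receiver to perform an action
    AGREE = "agree"        # commit to a previously requested action
    REFUSE = "refuse"      # decline a requested action
    PROPOSE = "propose"    # offer terms, e.g., in a negotiation
    CFP = "cfp"            # call for proposals

@dataclass
class ACLMessage:
    performative: Performative
    sender: str
    receiver: str
    content: str
    in_reply_to: Optional[str] = None  # threads the message into a conversation

# Composing primitives into a longer cooperative exchange:
dialogue = [
    ACLMessage(Performative.CFP, "buyer", "seller", "need 100 widgets"),
    ACLMessage(Performative.PROPOSE, "seller", "buyer",
               "100 widgets at $2 each", in_reply_to="cfp-1"),
    ACLMessage(Performative.AGREE, "buyer", "seller", "deal",
               in_reply_to="propose-1"),
]
```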

Societies and large-scale organizations also require effective coordination to scale.

What sort of measure can we use to evaluate the effectiveness of a conversation? I will leverage Lawrence’s embodiment metric. However, it should measure, at a minimum, three legs of a conversation: the initial communication, the response to that communication, and a follow-up response to that response. Conversations, of course, can go on forever, but the measure is based on:

(1) The ability to articulate an internal mental model of the external participant.

(2) The ability to receive a response to that communication and reassess the internal mental model.

(3) The ability to articulate a second response based on the newly revised mental model.

This measurement is applicable regardless of the intelligence of the external participant.
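
A minimal sketch of this three-leg measure follows. It is entirely illustrative: the participant interface, the mental-model representation, and the final scoring are all placeholders that a real implementation would have to define.

```python
# Sketch of the three-leg conversation measure described above.
# The mental model, participant interface, and scoring are placeholders.

class ConversationalAgent:
    def __init__(self):
        self.mental_model = {}  # the agent's model of the other participant

    def articulate(self):
        # Legs 1 and 3: produce an utterance conditioned on the current
        # mental model of the external participant.
        return f"utterance conditioned on {self.mental_model}"

    def reassess(self, response):
        # Leg 2: fold the participant's response back into the model.
        self.mental_model["last_response"] = response

class EchoParticipant:
    # Stand-in external participant; any object with .respond() works.
    def respond(self, utterance):
        return f"echo: {utterance}"

def three_leg_measure(agent, participant):
    first = agent.articulate()            # leg 1: initial communication
    reply = participant.respond(first)    # the external participant answers
    agent.reassess(reply)                 # leg 2: re-assess the mental model
    second = agent.articulate()           # leg 3: model-conditioned follow-up
    # Crude score: did the reply actually change what the agent says next?
    return first != second

print(three_leg_measure(ConversationalAgent(), EchoParticipant()))
```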

Conversational Cognition is the most abstract form of generalization. Let’s compare this new measure of generalization with alternative frameworks for cognition.

Karl Friston’s Free Energy — This idea makes use of the ubiquitous concept in physics known as the Principle of Least Action. All dynamics, even cognitive dynamics, must take this idea into account. It is fundamental to how nature works.
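For reference, the variational free energy that this framework minimizes has a standard form (this equation is textbook material, not specific to this article):

F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\left[q(s) \,\|\, p(s \mid o)\right] - \ln p(o)

where o denotes observations, s hidden states, and q(s) the agent’s approximate posterior. Minimizing F simultaneously improves the internal model and bounds the surprise, -\ln p(o).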

Wissner-Gross’s Maximizing Future Freedom — This is a unique idea that explains intelligence as the maximization of options. This idea, like the principle of least action, is a universal one. The idea is that rather than just blindly choosing the most efficient action, intelligence selects the action that maximizes survival.
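For reference, Wissner-Gross and Freer formalize this as a causal entropic force (standard form from their paper; \tau is the planning horizon):

F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \big|_{X_0}

where S_c is the entropy of the distribution over feasible future paths and T_c acts as a “causal temperature”: the agent is pushed toward states that keep the greatest number of futures open.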

Compression — There is an overly simplified notion that compression implies intelligence. Perhaps this is because compression is a measurable attribute of language. However, are all notions of compression the same? Do all compressions allow for composition and grounding? Language is a kind of compression, but it requires many other attributes beyond compression.

Kolmogorov Complexity and Solomonoff Induction — This is a favorite of many AGI researchers. AIXI is held up as the ultimate theory. However, it is based on a set of beliefs that equates computational ability with general intelligence. There is no evidence that increased computational ability automatically leads to intelligence. Rather, certain cognitive skills, such as language, need to be used effectively. One other problem with this approach is that it demands procedures that are incomputable, and it leads to broad conclusions that break down in particular situations.
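For reference, the two standard definitions underlying this program are

K_U(x) = \min \{\, |p| : U(p) = x \,\} \qquad\text{and}\qquad M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}

where U is a universal Turing machine, K_U(x) is the Kolmogorov complexity of x, and M(x) is a simplified form of the Solomonoff prior (the full version sums over programs whose output begins with x). Neither quantity is computable, which is the source of the incomputability problem noted above.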

Integrated Information Theory — IIT defines a measure of consciousness that relates to how non-decomposable thought processes are within an entity. Integration may be an advanced cognitive ability; however, there are many animals with higher integration capabilities than humans. Pigeons, for example, are much better at multi-tasking than humans. Integration leads only to greater awareness of the environment; it does not demand the development of strategies to cooperate with other entities. Babies, by contrast, very quickly learn how to manipulate their parents in pursuit of their own goals.

For many people who are involved in developing complex products that require many agents to cooperate, this definition of generalization is very obvious. Surprisingly, though, the AI community has mostly adopted overly simplistic notions of generalization. These notions have researchers focusing on problems that probably have nothing to do with achieving AGI. The ability to effectively carry on a conversation with the environment is the essence of AGI. Interestingly enough, what most AGI research avoids is the reality that an environment is intrinsically social. That is, other intelligences exist within it. This is true in both the human and the broader biological context.

In summary, Conversational Cognition is the cognitive ability to dynamically adapt a conversation to new ideas and steer it collaboratively toward innovative solutions. Conversations are the highest reflection of intelligence. They are how language evolves with new concepts and how new memes become viral. They are what “design pattern” practitioners describe as growing patterns. This is a dynamic and evolutionary description of general intelligence, where previous characterizations of intelligence focused on static and specialized descriptions. It is not enough to be able to adapt to a single context; general intelligence also needs the ability to adapt to pervasive switching of context.

Additional Reading

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution

Google demonstrates Conversational Cognition capability

Microsoft acquires a Conversational Cognition firm

Understanding and sharing intentions: The origins of cultural cognition

The Ultra-Social Animal
