The Coordination of Intuition and Rational Intelligence

Carlos E. Perez
Intuition Machine · Oct 4, 2017

Cover photo: https://unsplash.com/photos/VxtWBOQjGdI

In my writings, I’ve continually emphasized the nature of the human mind as consisting of two cognitive systems: an intuitive system and a rational system. This is better known to psychologists as Dual Process theory. However, this raises an interesting question: if there are two kinds of cognition in our heads, how do the two coordinate with each other to get anything done?

Interestingly enough, I stumbled upon a paper that discusses this in greater detail. I found the paper through a recent blog post by DeepMind. DeepMind is known as a big proponent of Reinforcement Learning, having successfully combined it with Deep Learning in its Atari game-playing system and in AlphaGo. There are two approaches to Reinforcement Learning: one is model-based and the other is model-free. The former selects its actions by planning over an internal (programmed) model of the environment, and the latter selects its actions from values learned inductively through experience (with Deep Learning as a special case).
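To make that distinction concrete, here is a minimal sketch (my own toy example, not code from DeepMind) contrasting the two controllers on a made-up MDP: the model-based one plans by sweeping a known transition model, while the model-free one just nudges cached action values after each experienced transition.

```python
import numpy as np

# A toy MDP with made-up numbers, purely for illustration.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9
P = np.random.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))  # P[s, a] = next-state distribution
R = np.random.rand(N_STATES, N_ACTIONS)                                 # expected reward for action a in state s

def model_based_plan(P, R, sweeps=100):
    """'Rational' controller: plan by value iteration against an explicit world model."""
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(sweeps):
        V = Q.max(axis=1)
        Q = R + GAMMA * (P @ V)   # Bellman backup through the known transition model
    return Q

def model_free_update(Q, s, a, r, s_next, alpha=0.1):
    """'Intuitive' controller: Q-learning, a cached update from one experienced transition."""
    Q[s, a] += alpha * (r + GAMMA * Q[s_next].max() - Q[s, a])
    return Q
```

The planner pays a computational cost every time it acts but adapts instantly when the model changes; the cached learner is cheap at decision time but slow to revise its habits.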

DeepMind has a new paper that explores recent discoveries about the brain’s hippocampus (see: The hippocampus as a “predictive map”). The idea is that each state of the environment is represented not only by what is observed now but also by a prediction of the states likely to follow it. This is formally known as the “Successor Representation”. Apparently, this is a fresh lens through which to explore Reinforcement Learning.
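Roughly, the Successor Representation describes each state s by the expected discounted future occupancy M(s, s′) of every other state s′, so that a value estimate factorizes as V(s) = Σ_s′ M(s, s′) R(s′). The sketch below is a generic TD-style SR update for illustration only; it is not code from the DeepMind paper, and the learning rate and discount are placeholders.

```python
import numpy as np

def sr_td_update(M, s, s_next, gamma=0.95, alpha=0.1):
    """One temporal-difference update of the successor matrix M after observing s -> s_next.

    M[s, s'] estimates the expected discounted number of future visits to s'
    starting from s -- the 'predictive map' the hippocampus is argued to encode.
    """
    n_states = M.shape[0]
    onehot = np.eye(n_states)[s]                   # indicator of the current state
    td_error = onehot + gamma * M[s_next] - M[s]   # SR analogue of the usual TD error
    M[s] += alpha * td_error
    return M

def value_from_sr(M, rewards):
    """Values factor through the SR: V(s) = sum over s' of M[s, s'] * R(s')."""
    return M @ rewards
```

The appeal is that the predictive map is learned like a model-free quantity, yet lets values be recomputed quickly when rewards change, which is part of why it sits between the two approaches.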

Interestingly enough, there is a paper referenced in the blog post that does discuss the competition and cooperation between model-based and model-free RL. The analogy made in the paper is that model-free RL is our habitual system at work (i.e. Intuition) and the model-based system is our planning system at work (i.e. Rational). The paper explores the different ways these two systems interact, both competitively and cooperatively. The selection of which cognitive mechanism to use can be based on the need for efficiency versus the need for accuracy, as sketched below.
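Purely as an illustration of that trade-off (the rule and thresholds below are my own placeholders, not anything proposed in the paper), one might picture the arbiter like this:

```python
def select_controller(mf_uncertainty, planning_budget_ms,
                      uncertainty_threshold=0.2, planning_cost_ms=50):
    """Toy arbitration rule between the habitual (model-free) and planning (model-based) systems.

    If the cached, habitual value estimates are already reliable, or there is no time to
    deliberate, act on intuition; otherwise pay the computational cost of planning.
    All thresholds here are arbitrary placeholders.
    """
    if mf_uncertainty < uncertainty_threshold or planning_budget_ms < planning_cost_ms:
        return "model_free"   # fast, cheap, habitual
    return "model_based"      # slow, costly, but more accurate in unfamiliar situations
```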

What caught my interest is the cooperative interaction. The paper discusses three kinds of cooperation:

(1) Intuition can learn from simulations from a model.

(2) Intuition can truncate model-based planning.

(3) Intuition can aid in selecting rewarding goals.

The latter two cooperative modes were exhibited to great effect by AlphaGo. AlphaGo used Monte Carlo Tree Search to search the space of good moves, and it used Deep Learning to prune the search tree down to something more manageable. In addition, the value and policy functions evaluated at each game state were themselves Deep Learning networks.
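Schematically, the two networks plug into the search like this. AlphaGo itself uses Monte Carlo Tree Search; the sketch below substitutes a much simpler truncated, pruned negamax search just to show where the cooperation happens, and `policy_net`, `value_net` and the `state` interface are assumed, hypothetical stand-ins rather than AlphaGo’s actual implementation.

```python
def evaluate(state, depth, policy_net, value_net, max_depth=3, top_k=5):
    """Sketch of cooperative modes (2) and (3) inside a game-tree search.

    `policy_net(state)` is assumed to return {move: prior probability} and
    `value_net(state)` a scalar estimate of the position from the player to move;
    `state` is assumed to expose is_terminal(), outcome() and play(move).
    """
    if state.is_terminal():
        return state.outcome()
    if depth == max_depth:
        return value_net(state)                 # (2) intuition truncates the planning depth
    priors = policy_net(state)                  # (3) intuition nominates the rewarding moves
    best_moves = sorted(priors, key=priors.get, reverse=True)[:top_k]
    # Negamax-style backup over the pruned set of candidate moves.
    return max(-evaluate(state.play(move), depth + 1,
                         policy_net, value_net, max_depth, top_k)
               for move in best_moves)
```

Without the learned policy prior the branching factor of Go is unmanageable, and without the learned value function the search would have to be played out to the end of the game to score a leaf.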

I hinted at the first kind of cooperation in a previous article (see: “A Language Driven Approach for Deep Learning Training”). It is the same mechanism used by Microsoft in its DeepCoder paper. As I wrote earlier, this hybrid approach of combining traditional algorithms with Deep Learning pattern recognition can be an extremely potent combination. In fact, there are plenty of low-hanging-fruit applications where it is extremely effective.
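A minimal sketch of this first cooperative mode is the classic Dyna idea: the model-free (“intuitive”) value function is trained not only on real experience but also on transitions imagined by a model. The `model.update`/`model.sample` interface below is an assumed placeholder, not code from DeepCoder or my earlier article, and Q is assumed to be indexable as Q[state][action].

```python
def dyna_training_step(Q, model, real_transition, n_simulated=10, gamma=0.9, alpha=0.1):
    """Mode (1): the intuitive system learns from simulations generated by a model.

    `model` is assumed to expose update(s, a, r, s_next) and sample(); these are
    illustrative placeholders in the spirit of Dyna-style planning, not a specific library.
    """
    s, a, r, s_next = real_transition
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])   # learn from real experience
    model.update(s, a, r, s_next)                               # refine the internal model
    for _ in range(n_simulated):
        s, a, r, s_next = model.sample()                        # imagined experience
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    return Q
```

Each real step is amplified by many cheap simulated steps, which is exactly the sense in which intuition can learn from a model’s simulations.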

Biological brains, as a consequence of adapting to their natural environments, have visual-spatial, motion, sequence and rhythmic recognition capabilities. My conjecture is that these are essentially all that we need, and that planning and rational thought are emergent from these more basic capabilities. Our biological brains don’t have the kind of specialized logical hardware that you find in computers. Rather, we perform a kind of virtual machine simulation using mechanisms that are not optimized for this kind of task. It isn’t very efficient compared to the specialized computational elements we find in computers, but it is good enough.

The major problem of Artificial General Intelligence (AGI) is bridging the semantic gap between an induction-based system and a deduction-based system. How do concepts that are learned via induction from data become symbolic concepts that a deduction process can manipulate? How can a model-free system create models? How does our brain capture our experiences and turn them into concepts and ideas? How are we able to turn those ideas into invented language and communicate them? The key to AGI is in understanding the interface between intuition and rational thought.
