The Applied Mind-Body Problem for AI Developers (Emotions vs Feelings)

Victor Smirnov
Jan 18, 2017 · 4 min read

The psychological terminology of emotions is very ambiguous, intermixing higher and lower components into a single phenomenon. Higher-level emotions are actually complex emotional states, described by a complicated superposition of other emotions and needs. Worse, the word “emotion” is often used in place of “feeling”. “To experience emotions” is not nonsense, but it means self-perception of the various behavioral signs of emotions. The phenomenal representation of an emotion is a feeling. We experience feelings, and we experience the behavioral signs of emotions only in the form of sensations.

In the following text I propose a simplified but complete functional model of emotions and a preliminary mapping of this model onto subjective phenomenology. Feel free to extend it if you think you have something to contribute.

DEFINITION OF EMOTION

Need. An objectively measurable state of the organism. For instance, a low oxygen level in the blood that needs to be raised.

Needs may be physiological (like a low oxygen level), purely psychological (like a shortage of novelty), or combined. Needs may be combined to create new needs. The most generic form of a need is an algorithm whose most basic operators are physiological needs.

A need has a system of conditions defining its “level of satisfiability”. This system may be very complicated, especially for psychological needs. At any moment there may be many different active needs, and a single event may satisfy many needs simultaneously, but to different degrees.

The set of currently active needs is a portrait of needs (there may be a better term for this, but I’m not a native English speaker).
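To make these definitions concrete, here is a minimal Python sketch. All names, thresholds, and the [-1, 1] satisfaction scale are my own illustrative assumptions, not part of the model:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Events are bags of observations; a condition maps an event to a
# signed degree of satisfaction in [-1, 1].
Event = Dict[str, float]
Condition = Callable[[Event], float]

@dataclass
class Need:
    name: str
    conditions: List[Condition] = field(default_factory=list)

    def satisfaction(self, event: Event) -> float:
        """Level of satisfiability: the average signed degree to which
        the need's conditions are met by the event."""
        if not self.conditions:
            return 0.0
        return sum(c(event) for c in self.conditions) / len(self.conditions)

def combine(name: str, *needs: Need) -> Need:
    """A combined need: an 'algorithm' whose operators are other needs."""
    return Need(name, [n.satisfaction for n in needs])

# A physiological primitive and a purely psychological one.
oxygen = Need("oxygen", [lambda e: 1.0 if e.get("spo2", 1.0) > 0.95 else -1.0])
novelty = Need("novelty", [lambda e: e.get("surprise", 0.0)])
comfort = combine("comfort", oxygen, novelty)

# The portrait of needs: the set of currently active needs.
portrait: List[Need] = [oxygen, novelty, comfort]
event = {"spo2": 0.92, "surprise": 0.4}
print({n.name: n.satisfaction(event) for n in portrait})
# {'oxygen': -1.0, 'novelty': 0.4, 'comfort': -0.3}
```

Note how one event produces different satisfaction levels across the portrait, and how a combined need is just a need built from other needs.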

The phenomenal representation of a need is a motivation. Like needs, motivations are highly complicated, interconnected constructions. There is actually an N-to-M mapping between motivations and needs, because of cognitive biases and, more generally, partial self-awareness. The simplest phenomenal form of motivation is the sense of anxiety: you experience it when you haven’t inhaled or exhaled for a long time. Note that you can easily find this element in any high-level motivation.

Emotion is a sign that some event may satisfy some need at some level, or, conversely, suppress it. The level of satisfaction can be defined through the number of conditions met, so this value may be positive or negative. When many needs are active, one event may satisfy one subset of needs and suppress others. Attention is used to activate and deactivate needs so as to maximize satisfaction and minimize suppression. This may also imply various possible orders of applying the event against the needs.

Emotions are additive: many simple emotions can be added or subtracted to create a unified reinforcement. This combining is also a selective function of attention.
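A minimal sketch of these two paragraphs, under the same toy assumptions as above (elementary emotions are the signed satisfaction levels of an event against each active need; attention is a weighting over needs):

```python
from typing import Dict

# Elementary emotions: one signed value per active need, for one event.
elementary_emotions: Dict[str, float] = {"oxygen": -1.0, "novelty": 0.4}

def unified_reinforcement(emotions: Dict[str, float],
                          attention: Dict[str, float]) -> float:
    """Attention selects and weights needs; the elementary emotions then
    add up into one reinforcement signal (positive = net satisfaction)."""
    return sum(attention.get(name, 0.0) * value
               for name, value in emotions.items())

# With both needs attended, the event is net-suppressive; deactivating the
# suppressed need maximizes satisfaction, as described above.
print(unified_reinforcement(elementary_emotions, {"oxygen": 1.0, "novelty": 1.0}))  # -0.6
print(unified_reinforcement(elementary_emotions, {"novelty": 1.0}))                 # 0.4
```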

So in terms of Reinforcement Learning, an emotion is a reinforcement and a need is a target function. But I personally don’t like RL, because it doesn’t explain well many essential properties of the emotional subsystem.

The emotional subsystem of the human brain is a universal AI by itself, because it approximates the famous Algorithmic Probability (ALP) formula: http://world.std.com/~rjs/formula.jpg
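For readers who don’t want to follow the link: the image shows Solomonoff’s algorithmic probability formula. Reconstructed in the notation used below (so treat this as my transcription, not the original figure):

```latex
P_M(X) \;=\; \sum_{i} 2^{-\lvert s_i(X) \rvert}
```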

ALP may be used for universal sequence prediction in a very simple way. Here, P_M(X) is the guiding function of emotion for the event X; emotions guide both the attention and the execution systems of the brain. s_i(X) is an elementary emotional response (an elementary emotion) to the event X against the portrait of needs. 2^{-|s_i(X)|} is the weighting function of emotions; here it is universal, but it may also be defined via an ordering of needs (more important needs, less important needs). Note that all such pragmatic ordering schemes may be reduced (and, actually, are reduced) to the universal weighting.
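One way to read this mapping in code. This is a toy sketch under my own assumptions: each elementary emotion is represented by a binary code, and shorter codes (simpler responses) carry exponentially more weight, exactly as shorter programs do in ALP:

```python
from typing import List, Tuple

# ALP-style guiding function over elementary emotions. Each emotion is a
# pair (binary code of s_i(X), signed response value); its weight is
# 2^(-len(code)), so simpler responses dominate the guidance signal.
def guiding_function(elementary_emotions: List[Tuple[str, float]]) -> float:
    return sum(2.0 ** -len(code) * value for code, value in elementary_emotions)

# A short (simple) negative response outweighs a longer positive one.
print(guiding_function([("01", -1.0), ("010110", 0.5)]))  # -0.2421875
```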

This model of emotions is oversimplified. Schmidhuber created a somewhat more complicated scheme for curiosity that includes a dynamic aspect, explaining emotional decay in a universal way. The model I propose is compatible with Schmidhuber’s. Its main purpose is to emphasize ALP as “the formula of emotions”: it explains the main cognitive function of emotions, universal heuristic guiding. No other theory of emotions, except Schmidhuber’s Artificial Curiosity, has this property. But curiosity is a feeling, and feelings also require a model of experience (consciousness) to be defined. We will try to define it later.
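To show what the dynamic aspect adds, here is a toy rendering of curiosity as prediction improvement (my own simplification, not Schmidhuber’s actual formulation): the intrinsic reward is the drop in prediction error caused by learning from an observation, so a repeated stimulus quickly stops being rewarding, which is the emotional decay mentioned above:

```python
# Curiosity with decay: reward = improvement of the internal predictor.
def curiosity_rewards(observations, lr=0.5):
    estimate, rewards = 0.0, []
    for x in observations:
        error_before = abs(x - estimate)
        estimate += lr * (x - estimate)              # learn from the observation
        error_after = abs(x - estimate)
        rewards.append(error_before - error_after)   # prediction progress
    return rewards

# The same stimulus, repeated: the reward decays as the stimulus is learned.
print(curiosity_rewards([1.0, 1.0, 1.0, 1.0]))  # [0.5, 0.25, 0.125, 0.0625]
```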

PRACTICAL ASPECTS OF ALP

ALP-based approaches to AI are usually criticized for being completely impractical and even uncomputable. This is not true: ALP is well approximable. If we take only the shortest program (model of the environment) under the sigma operator, we get the MDL principle; with more models, the prediction given by ALP only gets better. ALP is a universal approach, especially efficient for complex environments where all the specialized heuristic-based decision methods fail.
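A minimal sketch of this approximation, assuming a finite class of candidate models where each model is summarized as (description length in bits, probability it assigns to the next event):

```python
from typing import List, Tuple

Model = Tuple[int, float]  # (description length in bits, predicted probability)

def alp_prediction(models: List[Model]) -> float:
    """Mixture weighted by 2^(-length): the ALP approximation."""
    weights = [2.0 ** -length for length, _ in models]
    total = sum(weights)
    return sum(w * p for w, (_, p) in zip(weights, models)) / total

def mdl_prediction(models: List[Model]) -> float:
    """Keep only the shortest model: the MDL reduction of ALP."""
    return min(models, key=lambda m: m[0])[1]

models = [(3, 0.9), (5, 0.2), (8, 0.6)]
print(mdl_prediction(models))   # 0.9  (shortest model alone)
print(alp_prediction(models))   # ~0.756 (mixture of all models)
```

Note how the mixture’s prediction is dominated by the shortest model’s 0.9 but hedged by the longer alternatives; as the shortest model becomes relatively shorter, the mixture converges to plain MDL.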

It’s true that ALP is computationally very expensive (yet feasible). But its full power is only required in a relatively small number of cases, when no good model of the environment is available and we have to use a mixture of suboptimal models. If such a good model is known, ALP may be reduced to the computationally much more tractable MDL. ALP thus efficiently fills the gap between the power of rational intelligence to operate in structured environments (MDL) and the power of emotional intelligence to operate in unstructured ones (the full ALP mixture).