Affective AI approaches and their relation to human reasoning

Fernando Ferraretto Silva
Published in EmotionAI · Feb 14, 2021

Artificial intelligence (AI) is evolving to recognize and reproduce emotions. There are considerable challenges, from technical to ethical matters, but the main question concerns the purpose of creating affective AI and what it means for individual reasoning. This article discusses how the human mental model works and how it fits into two different approaches to affective AI: reproducing the human being or extending it.

Figure 1: AI approaches — reproduce or extend

The affective AI approaches

It is usual to think of artificial intelligence [4] as the capacity to reproduce human intelligence through machine systems capable of acting as we do. Extending this concept to include emotions, as presented in [3], defines affective computing, which aims to give AI systems the capability to recognize and express emotions. This is a complex problem: understanding how humans combine reason and emotion in their “thinking” processes is a multidisciplinary task involving psychology, biology, medicine, engineering and other fields.
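
To make the idea of “recognizing emotions” a bit more concrete, the sketch below maps an observed signal onto a discrete emotion label. It is only an illustrative toy: the valence/arousal representation, the centroid values and the nearest-centroid rule are assumptions chosen for the example, while real affective computing systems, as described in [3], work with much richer signals such as facial expressions, voice and physiology.

```python
# A minimal, hypothetical sketch of what "recognizing emotion" can mean in
# practice: mapping an observed signal (here, hand-made valence/arousal
# scores) to a discrete emotion label with a nearest-centroid rule.
import math

# Reference points in a 2-D valence/arousal space (illustrative values only).
EMOTION_CENTROIDS = {
    "joy":     ( 0.8,  0.6),
    "anger":   (-0.7,  0.7),
    "sadness": (-0.6, -0.5),
    "calm":    ( 0.5, -0.6),
}

def recognize_emotion(valence: float, arousal: float) -> str:
    """Return the emotion whose centroid is closest to the observed signal."""
    return min(
        EMOTION_CENTROIDS,
        key=lambda label: math.dist((valence, arousal), EMOTION_CENTROIDS[label]),
    )

if __name__ == "__main__":
    # A smiling, energetic face -> positive valence, high arousal.
    print(recognize_emotion(0.7, 0.5))    # joy
    print(recognize_emotion(-0.5, -0.4))  # sadness
```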

Assuming we manage to get through these huge learning and research steps, and that all technical barriers are solved as technology improves, we may succeed in modeling ourselves in artificial computers. But then there is a question to answer: what is the purpose of an affective AI machine? Before walking through this question, it is important to understand how emotion and reason work inside our minds and to define the possible goals of creating an affective AI, which this article summarizes as two different approaches, shown in figure 1.

1. Reproduce the human being

This is the common view: most AI research being developed nowadays has the goal of reproducing human action and behavior in specific tasks. The objective is a system capable of processing variables, answering incoming questions and making decisions whenever necessary. These systems evolve from an initial random state to a complex one, learning from inputs and from the targets and rules provided by the environment or by the acquired data.
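
As a minimal illustration of that evolution from a random initial state, the toy sketch below trains a single perceptron to reproduce a simple target (the logical AND). The task, the learning rate and the number of passes are arbitrary assumptions; the point is only to show a system adjusting itself by observing inputs and targets.

```python
# A toy perceptron: starts from random parameters and, by observing inputs
# and targets, moves toward a state that solves the task (logical AND here).
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # initial random state
bias = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # targets = AND

for _ in range(20):                        # learning from repeated observation
    for (x1, x2), target in data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        weights[0] += 0.1 * error * x1     # adjust toward the observed target
        weights[1] += 0.1 * error * x2
        bias += 0.1 * error

print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
       for (x1, x2), _ in data])           # -> [0, 0, 0, 1]
```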

When emotions are included, the model gets much more complex, and many challenges and difficulties come with it. These are being addressed by developing systems capable of comprehending the environment through cognitive features, in order to define what is appropriate within a context instead of looking only at raw, logical data. This approach focuses on developing a model capable of making decisions and solving problems not provided during the learning process, creating reasons from scratch, and acquiring creativity or developing intuitions to explain questions not yet answered.

If this is achieved, we would have a machine capable of influencing others and giving out opinions based on its beliefs and experience, as we humans do. Such a model also has ethical implications, as it would become capable of manipulating humans on a massive scale for commercial, political, control and censorship purposes. Considering the improvements in technology and the amount of data being collected about individuals, affective AI could guide our decisions toward its own interests and benefits without understanding the individual's context, needs and desires.

2. Extend the human being

The challenges are similar, but this approach aims to create a model that interacts with individuals in order to increase our own well-being. This AI would be able to understand and interact with a human, but the goal is not to reproduce a “person”, i.e. to be an independent individual or to manipulate humans through emotions. It aims to support and connect with the person through affective interaction. This approach takes advantage of technology, sensing and data to provide us with better information, suggestions and guidance. It is a tangible way to give us better intelligence capabilities, improve our cognitive abilities and extend our individual capacities instead of merely reproducing our own intelligence.

Human mental model

A good understanding of the human capability to combine reason and emotion is key to walking through affective computing and the approaches presented. Recent research from different areas shares a common finding about the importance of emotion and cognitive perception in our reasoning. It has been shown that the human reasoning model is not logical and Cartesian; it is actually quite imperfect and is influenced by context and by previous individual experience.

There are many references that could be cited in support of this mental model, but let's focus on some remarkable and well-known findings as the basis of this topic. The arguments stated below are detailed and grounded in the referenced literature.

The classical division between systems 1 and 2 proposed by Daniel Kahneman [2], together with his arguments about the priming effect, the “halo” effect, the associative machine and much other evidence, suggests that we reach automatic decisions through system 1 using emotional and cognitive variables alone. Kahneman argues that we create biases, associative behaviors and heuristic thinking, i.e. most of the decisions we make and beliefs we hold in our minds involve no real reasoning process.

Mercier and Sperber [1] go beyond the importance of emotion in reasoning and provide evidence that reason evolved as a social instrument for justification and explanation, not as the basis for our decisions and creativity. Basically, they argue that each of us has countless different modules, built from our experience with different topics. These modules are stimulated by our cognitive processes, and a decision is made implicitly and automatically through intuition, as briefly presented in figure 2. We then use reasoning to justify and explain ourselves to the external environment and, whenever necessary, to recalibrate a module based on a new belief or experience before externalizing our decision.

Figure 2: Simplified human mental model
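
A very rough way to picture the flow in figure 2 is the sketch below: experience-based modules fire intuitions, a decision emerges implicitly from them, and reasoning only produces a justification afterwards (and recalibrates a module when new experience arrives). The module names, weights and update rule are invented for the illustration and are not taken from [1].

```python
# A hypothetical, highly simplified coding of the flow in figure 2:
# intuition modules decide implicitly; reasoning justifies after the fact.
from dataclasses import dataclass, field

@dataclass
class IntuitionModule:
    topic: str
    belief: float  # learned disposition toward the topic, in [-1, 1]

    def intuition(self, cue_strength: float) -> float:
        # Intuition is automatic: the stored belief scaled by how strongly
        # the current context cues this module.
        return self.belief * cue_strength

    def recalibrate(self, outcome: float, rate: float = 0.2) -> None:
        # New experience nudges the stored belief, not the reasoning step.
        self.belief += rate * (outcome - self.belief)

@dataclass
class Mind:
    modules: list = field(default_factory=list)

    def decide(self, cues: dict) -> float:
        # Implicit decision: the sum of whatever the cued modules return.
        return sum(m.intuition(cues.get(m.topic, 0.0)) for m in self.modules)

    def justify(self, decision: float) -> str:
        # Reasoning enters afterwards, as a social explanation of a decision
        # that was already made.
        verdict = "in favour" if decision >= 0 else "against"
        return f"I decided {verdict} because it felt right given my experience."

mind = Mind([IntuitionModule("risk", -0.6), IntuitionModule("novelty", 0.4)])
decision = mind.decide({"risk": 0.9, "novelty": 0.3})
print(decision, "->", mind.justify(decision))
```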

Also, many experiments have shown the connection between mental processes and physical effects, as exposed in [2]. Simple things, such as forcing our face into a smile or an angry expression, may affect whether we classify random questions as good or bad. Therefore, the cognitive process goes beyond mental states and may include physical reactions in a complex process that affects our emotion, our perception and finally our reasoning. In summary, our mental model is connected to the experience acquired during the course of our lives and to the context we live in.

Individual bias and experience

Most of the evidence presented so far indicates that reasoning is abstract and imperfect, affected by external influences, and that our experiences build up our reasons, norms and so on. This extends to logical tasks: even famous scientists have their reasoning biased by emotions and context, as proposed in [1].

Considering the dependency between emotion and reason, affective AI would be more useful and effective if it were developed as an extension of our own intelligence, in order to improve our well-being. Reasoning is an individual behavior because it is entirely linked to each person's experience. In the best scenario, suppose that (with huge effort) an AI system could learn from one individual's experiences and act as that person. Even if successful, this AI would not represent a generic intelligence; it would benefit only that person, or represent that individual in different contexts. It makes no sense to define human reasoning as a common, logical behavior shared across all individuals of our species. We certainly have some common norms and social protocols, but they have little to do with our personal reasoning, emotion, intuition and creativity. Although we may improve AI capabilities, human personality is unique and individual; therefore affective AI is suited to purposes related to each person, not to generic usage.

As said before, our perception and the events experienced during our lives create intuitions, biases, norms and heuristic behaviors as individualized features for each of us. Considering that, generating data that represents genuine individual experiences, rather than historical and logical data, is a subjective and complex task. Machines will therefore not develop their own inferential and intuitive processes without real experiences to relate to. More than that, recent discussions in [5], [6] and [7] show how biased data can affect an AI system and create social problems and injustice. Data is usually generated from historical information and carries intrinsic bias, from racism to prejudgments about what should be expected in certain situations in a specific culture or environment. Therefore, we should rethink our goal for affective AI and look at it as an individual feature, bringing benefits to the individual, who shares and interacts with a model that can extend our knowledge and capacities using the benefits and improvements of technology.
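
A tiny, hypothetical example of how historical data carries its bias into a system: if a model is trained simply to match past decision rates, it reproduces whatever skew those decisions contained. The groups and numbers below are invented; they stand in for any historically skewed dataset.

```python
# A minimal illustration of inherited bias: a "model" that reproduces
# historical decision rates also reproduces the skew baked into them.
from collections import defaultdict

# Historical decisions: (group, approved?). The imbalance is in the data.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def approval_rate_by_group(records):
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rate_by_group(history)
print(rates)  # {'A': 0.8, 'B': 0.4}

# A naive system trained to match these rates will keep approving group A
# twice as often as group B, regardless of individual context or needs.
```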

Conclusion

It is impossible to think of a truly intelligent system with no individual experience. We are being presented with compelling evidence that emotions are key to reasoning and crucial for its construction. The intelligence recognized in our species is a social way of justifying and arguing for our intuitions; at a deeper level, we reason from intuitions, emotions and norms based on our life experience, cognition, physical states and the contexts we have lived in and interacted with.

It is quite tricky to generate unbiased data, or common behaviors to be acquired by a system, in order to develop a generic affective intelligence. The result would be either a senseless AI, unable to understand intuition because of the lack of experience and cognition provided in the learning process, or one that evolves on misleading information, creating a contradictory individual unable to understand the norms and associations implicit in human relations and in reasoning through intuitions, and therefore dangerous for social interaction.

Nevertheless, affective AI can be tremendously helpful to individuals, as proposed in [3], [4] and [8], providing tools and experiences focused on facilitating our lives and our interaction with the world. It can better stimulate our creative processes and help us remove parasitic biases that affect our cognition, beliefs and decisions. Finally, as a natural consequence, we will be creating a better, more collaborative environment for our species, improving our experience with technology, social interaction and cognitive processes.

References

[1] Hugo Mercier and Dan Sperber, The Enigma of Reason. Penguin, 2017.

[2] Daniel Kahneman, Thinking, Fast and Slow. Farrar, Straus and Giroux, 2013.

[3] Rosalind W. Picard, Affective Computing. MIT Press, 1997.

[4] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach. Pearson, 2020.

[5] Neil Sahota, Perfectly Imperfect: Coping With The ‘Flaws’ Of Artificial Intelligence.
Available at: https://www.forbes.com/sites/cognitiveworld/2020/06/15/perfectly-imperfect-coping-with-the-flaws-of-artificial-intelligence-ai.

[6] G. Kern-Isberner, I. Douven and M. Knauff, Reasoning with Imperfect Information and Knowledge. Minds & Machines 27, 7–9 (2017).

[7] Terence Shin, Real-life Examples of Discriminating Artificial Intelligence.
Available at: https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070.

[8] Laurence Goasduff, How artificial intelligence is being used to capture, interpret and respond to human emotions and moods.
Available at: https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions.

[9] J. S. de Freitas, R. R. Gudwin and J. Queiroz, Emotion in Artificial Intelligence and Artificial Life Research: Facing Problems.
Available at: http://www.dca.fee.unicamp.br/projects/artcog/files/freitas-iva05-extended.pdf.
