A Primer on Emotion Science for our Autonomous Future

Ben at Sensum
12 min read · Apr 12, 2018


I have released a few stories recently about what to expect from the coming generation of transportation tech, as we train it to measure its user’s emotional state, and potentially respond to the user in an appropriate way. I’d like now to dip into some of the scientific theory that companies like ours (Sensum) lean on to develop this kind of ‘empathic technology’.

Understanding human emotions is like understanding any infinitely complex and intangible concept. There is no perfect toolset. Psychology is a young science, and the scientific study of emotion is an even younger subset thereof. What’s more, the leading theorists in the space have very different views on how to approach the science, even down to the most basic question: what is an emotion?

The theoretical frameworks I will describe here are a few slices through the milieu of emotion science. They are not necessarily the best tools for all possible approaches to the subject. Instead, I hope to summarise the main models our team uses to build empathic technology, and highlight some of their features and limitations.

In particular I want to illustrate three broad areas of emotional modelling:

1) Discrete

Ask most people to tell you what emotions are and they might list common emotional labels such as angry, happy or sad. Indeed, the major model of emotion science to emerge from the discipline’s early days attempted to categorise human emotions into a set of discrete labels, as they are displayed on our faces. The best known of these is the six ‘Basic Emotions’ promoted by Paul Ekman: anger, disgust, fear, happiness, sadness and surprise.

This kind of modelling gives us a simple narrative that most of us can understand. But our moods and feelings are fabulously intricate things, and such discrete labelling has its limitations. Theorists disagree on how many discrete emotions there are, with estimates ranging from about six to twenty, which has led to theories distinguishing between ‘primary’ and ‘secondary’ emotions. Recent science even asks if a universal set of emotions exists across human cultures in the first place.

Thinking about emotion in a discrete fashion also raises an interesting issue that empathic tech companies like ours face daily: which emotions are you looking for? Taking our current area of primary focus as an example, consider which emotional labels are most relevant to helping a driver in a vehicle. How much should we care that the driver is happy, angry or surprised? Maybe quite a lot. But before that we might look for other, more important states that most people probably wouldn’t call emotions: fatigue, stress, distraction and intoxication, all of which are clearly correlated with potentially lethal driving scenarios.

In order to make a useful inference about a person’s current state, such as ‘this driver is falling asleep’, it’s not always helpful to concern ourselves with common emotion words. We just need to find a reliable signal in the noise of the incoming data that can be measured in a way that is valuable to the use case in hand (ie. not crashing). Empathic tech isn’t just about the standard concepts of emotions; it involves measuring and responding to the full range of human emotion, behaviour and physiology.
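To make that concrete, here is a minimal sketch of one such signal: a PERCLOS-style eye-closure measure for flagging fatigue. The window size and threshold are illustrative placeholders, not values from our products.

```python
from collections import deque

class FatigueDetector:
    """Toy drowsiness flag based on the fraction of frames with eyes closed
    (a PERCLOS-style measure). All numbers here are illustrative."""

    def __init__(self, window_size=900, threshold=0.15):
        self.samples = deque(maxlen=window_size)  # e.g. 30 s at 30 fps
        self.threshold = threshold

    def update(self, eyes_closed: bool) -> bool:
        """Feed one camera frame's eye state; return True if drowsy."""
        self.samples.append(1 if eyes_closed else 0)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data for a stable estimate yet
        perclos = sum(self.samples) / len(self.samples)
        return perclos > self.threshold
```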

2) Dimensional

We can escape the confines of discrete labelling by trying to assign values to emotions on some kind of numerical continuum. A popular version of this dimensional approach uses two ‘dimensions’ of scoring:

  • Valence — it’s a fancy word, as in ambivalence or equivalence. But its meaning is simple: how positive or negative the emotion is.
  • Arousal — this is essentially the intensity of the emotion, ranging from a calm/relaxed state of low arousal to an agitated/excited state of high arousal.

Two dimensions of emotion, scored on x and y axes.
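In code, a dimensional score can be as simple as a pair of numbers. Here is a minimal sketch, using the common convention of scoring each axis from -1 to +1 (the ranges and example values are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class EmotionScore:
    valence: float  # -1.0 (very negative) to +1.0 (very positive)
    arousal: float  # -1.0 (very calm) to +1.0 (very agitated)

# A furious outburst and a contented sigh, scored on the same two axes:
rage = EmotionScore(valence=-0.8, arousal=0.9)
contentment = EmotionScore(valence=0.7, arousal=-0.5)
```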

This way we can score the experience of an emotion without having to assign it a narrow label that could be too subjective or restrictive for our needs. In a dimensional space, an emotion doesn’t have to sit at a specific locus with an exact score; it can instead be a fuzzy area where scattered scores are concentrated.

This dimensional approach allows for the overlapping nature of the many thousands of names we give to our emotions. Some of those overlapping regions might have names of their own in some languages, and others not. We don’t actually need to care what they are called to give them a score.

Different people might provide different scores for their experience of what they would call the same emotion, such as anger. But a dimensional model allows for such nuances.

And there’s still a home for discrete emotions in this approach. We can imagine our familiar labels superimposed over approximate areas of the measurement space. Anger, for instance, would no doubt exhibit low valence and high arousal.

Classic examples of discrete emotions in dimensional space.
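One simple way to superimpose those labels on the measurement space is to give each label an approximate centre and pick whichever is nearest to a given score. A sketch, with coordinates that are illustrative guesses rather than empirically fitted values:

```python
import math

# Approximate (valence, arousal) centres for a few classic labels.
LABEL_CENTRES = {
    "angry":  (-0.7,  0.8),
    "afraid": (-0.6,  0.7),
    "sad":    (-0.7, -0.4),
    "calm":   ( 0.4, -0.6),
    "happy":  ( 0.7,  0.5),
}

def nearest_label(valence: float, arousal: float) -> str:
    """Map a dimensional score to the closest discrete label."""
    return min(
        LABEL_CENTRES,
        key=lambda label: math.dist((valence, arousal), LABEL_CENTRES[label]),
    )

print(nearest_label(-0.8, 0.9))  # -> "angry"
```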

We can also go beyond two dimensions to produce ever more detailed emotional measurement. A popular candidate for a third dimension is typically described as dominance, power or control. This dimension is still considered rather more theoretical than valence and arousal, and can take on different meanings depending on the application.

Some theorists employ a further, fourth dimension: novelty.

Our work in extreme sports provides an example of three-dimensional modelling from our own adventures through the bewildering landscape of human emotion. By placing biometric sensors on elite mountain-bikers, and collecting contextual data from the surrounding environment, we were able to extract a third dimension that appeared to correlate with how in control of the action the athlete was. A positive score was interpreted as dominance: the athlete was ‘in the zone’, performing confidently in a familiar setting. A negative score was interpreted as vulnerability.

Our recent work in the automotive space has highlighted the desire for drivers to remain in a dominant state at all times. When they experience emotions that feel out of control, the driving experience appears to become uncomfortably effortful, if not dangerous.

3) Appraisal

Labelling and scoring emotions in discrete or dimensional ways helps us to give them some meaningful categorisation but doesn’t explain how or why they occurred. Some of the most recent emotion science has tackled this shortfall by developing models that try to account for the surrounding context. Rather than assuming some kind of real, objective embodiment of an emotion, we are able to view it in the light of factors such as our previous experiences and memories, genetic makeup and cultural background.

These newer models tend to include a factor of appraisal. They take into consideration how the mind interprets the situation/object/event that is prompting the emotion. When a driver sees another car pull out in front of them, their brain appraises the information it is receiving. They think, ‘there is a car ahead, I might crash’. They imagine the outcome, then an appropriate emotion follows — probably what we would call fear.
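As a toy illustration, an appraisal step can be written as a rule that interprets the same event differently depending on context. The event types, speeds and gaps below are invented for the example:

```python
def appraise(event: str, own_speed_kmh: float, gap_m: float) -> str:
    """Crudely appraise a traffic event in the light of its context."""
    if event == "car_pulls_out":
        # A small gap at speed is appraised as a threat (likely felt as fear);
        # the same event at a crawl is merely an obstruction (mild annoyance).
        if own_speed_kmh > 30 and gap_m < 20:
            return "imminent threat"
        return "minor obstruction"
    return "neutral"

print(appraise("car_pulls_out", own_speed_kmh=60, gap_m=10))  # imminent threat
```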

The Component Process Model, developed by Klaus Scherer, describes how a person experiencing an incident of emotion first appraises what is happening, then exhibits several different types (components) of response: expressive (eg. face, voice, body), behavioural (eg. action, movement) and physiological (eg. heart rate, breathing, muscle tension). All of these components are considered in combination to produce the emotion that the person experiences.

Mixing the Models

So which emotion model is best?

It depends what you want to achieve.

We’ve measured emotions all over the world, from the mundanity of a supermarket floor to the extremity of an active volcano. Along the way we’ve had our hands held by our buddies across the road from us at the School of Psychology, Queen’s University Belfast, guided particularly by Dr Gary McKeown, a specialist in emotion science. In most cases, the best results have come from a mix of theoretical approaches.

Typically we start by seeking emotional signals from a range of both physiological data (eg. heart rate, facial expression) and context data (eg. location, speed). By combining these various components we can apply algorithms to make an appraisal of the scenario in hand. We then convert that analysis into dimensional scoring (eg. valence, arousal, dominance) before applying discrete labels to the target ‘zones’ of emotional data, which provides a human-readable narrative of the analysis.
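A skeletal version of that multi-level flow might look like the following, where every function body is a stand-in for a real algorithm rather than a description of our actual one:

```python
# Sketch of the mixed-model pipeline: appraise the raw signals in context,
# score the result dimensionally, then attach a human-readable label.

def appraise(physiology: dict, context: dict) -> dict:
    """Combine physiological and context data into a crude appraisal."""
    threat = context["closing_speed"] > 10 and physiology["heart_rate"] > 100
    return {"threat": threat}

def to_dimensions(appraisal: dict, physiology: dict) -> tuple:
    """Convert an appraisal into (valence, arousal) scores."""
    arousal = min(1.0, (physiology["heart_rate"] - 60) / 60)
    valence = -0.8 if appraisal["threat"] else 0.2
    return valence, arousal

def to_label(valence: float, arousal: float) -> str:
    """Attach a coarse discrete label to the dimensional score."""
    return "fear/anger" if valence < 0 and arousal > 0.5 else "neutral/content"

physiology = {"heart_rate": 115}
context = {"closing_speed": 15}  # km/h towards the car ahead
valence, arousal = to_dimensions(appraise(physiology, context), physiology)
print(to_label(valence, arousal))  # -> "fear/anger"
```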

As empathic AI becomes more widely deployed into all the digital systems with which we interact in our daily lives, we expect to see this kind of multi-level processing being core to high-quality user experiences.

Machine Emotions

The essential function of empathic technology is to mimic the emotion-modelling process that our brains and bodies execute constantly.

You receive signals from the world around you via your natural senses and experience emotional responses to them¹. The human sensor array (eyes, ears, chemical receptors, etc.) collects data signals such as light, sound and taste, and sends them to the brain for interpretation. Not only do we then ‘feel’ an emotional response, but our physiology also changes accordingly. These changes can materialise in many forms, from facial expressions to a racing pulse, to the release of chemicals into the bloodstream.

Empathic technology functions in an analogous way. Digital sensors (cameras, microphones, wearables, etc.) collect electronic data signals such as video, heart rate and temperature, tracking the human’s physiological changes, and feed this data to an emotion-processing algorithm.

Just as our brain tries to make sense of the information it receives, companies like ours design software to filter and sort sensor data and feed it to algorithms that interpret it for emotional signals.

The scientists and engineers building empathic technology use emotion models like those described above to identify meaningful emotional signals in the incoming data streams. A highly empathic machine would therefore be able to wrangle multiple information sources simultaneously and know which ones to pay attention to, and when. To achieve this it must apply statistical algorithms in a series of steps (sketched in code after the list):

  1. Filter the unwanted noise that is typical of data collected from human physiology and behaviour, especially in the wild, in real-life scenarios.
  2. Weight the available data streams for their relevance and quality at each specific moment and for the current context. One type of data might be far more useful than another depending on when and how each is collected.
  3. Classify these nice, clean data signals using a discrete (happy, sad), dimensional (high/low valence and arousal) or other theoretical framework of emotion.
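Compressed into code, those three steps might look like this; the filter width, stream weights and classification threshold are all made up for illustration:

```python
import statistics

# Step 1, filter: smooth a noisy stream with a simple median filter.
def smooth(samples, k=5):
    half = k // 2
    return [
        statistics.median(samples[max(0, i - half): i + half + 1])
        for i in range(len(samples))
    ]

# Step 2, weight: fuse streams by their assumed reliability right now.
def fuse(scores, weights):
    total = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total

# Step 3, classify: map the fused score onto a simple dimensional framework.
def classify(arousal):
    return "agitated" if arousal > 0.6 else "calm"

hr_arousal = smooth([0.2, 0.9, 0.3, 0.8, 0.7, 0.75, 0.8])[-1]
scores = {"heart_rate": hr_arousal, "face": 0.5}
weights = {"heart_rate": 0.7, "face": 0.3}  # say the camera is partly occluded
print(classify(fuse(scores, weights)))  # -> "agitated"
```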

Filter, weight, classify. These steps require algorithms that employ statistics to infer probabilistic scores from the available signals. Like I said, it’s not perfect. But the human brain is an imperfect statistical engine too — it also makes probabilistic inferences based on limited and contradictory signals. And this generally works just fine, without us even noticing it is happening.

Our brains still make mistakes: they can be tricked by confusing data such as visual illusions, or by poorly adapted processing tools such as bad memories. Or they can make perfectly valid inferences that just don’t make sense in the modern world, like making us feel hungry when we see a chocolate bar.

For better or worse, this statistical approach to understanding human emotions is taking us to some interesting places in the current science of emotions. Some of the field’s leading scientists, such as Lisa Feldman Barrett, argue that the brain constructs an incident of emotion based on its probabilistic predictions about the information it is receiving. On this view, emotion is predictive rather than reactive, contradicting the common idea that an emotion is a natural response to a situation (see her Theory of Constructed Emotion).

There are many ongoing controversies within the academic community studying emotion. As a prime example, the function of facial expressions and their relationship to emotion has been challenged for many years; a recent statement of this argument is Crivelli & Fridlund (2018).

Our Empathic Machine Future

Nevertheless, this is an exciting time for the science of emotion. The field is accelerating forward, not least due to the increasing availability of affordable data sources. Tens of millions of wearable devices are sold each year now, putting biometric sensors on consumers all over the world. Video cameras and microphones have become ubiquitous, largely because they are bundled with the mobile tech in our pockets. Our words and actions are tracked by websites and software. The internet, in both its physical and digital forms, can now provide deep insights into the nuances of human emotion.

Devices such as vehicles and smartphones are increasingly being adorned with more and better sensors, and new devices are being connected to the internet every day. By algorithmically interpreting the exploding pool of human data these systems produce, we will see empathy arising in all kinds of human-machine interactions.

What we choose to do with the opportunities provided by empathic technology is already overdue for wide, public discussion. Artificial empathy could prove to be any of the following:

  • Annoying. Yes, your next fridge might ask you if you’re sure it’s a good idea to take another pork pie from its shelves after you’ve had two already and you’re really not in the mood to hear it right now. Yes, your car might suggest you listen to Spotify’s 100 Top Feel-Good Pop Hits if it infers that you are feeling angry that morning and are likely to drive too close to the guy in front.
  • Invasive. Recent incidents like the Cambridge Analytica / Facebook scandal have shown us how a person’s interactions on a social network alone can provide enough data for digital algorithms to manipulate them towards a third party’s desired behaviour. Now we are beginning to share even more intimate, physical data too: a continuous feed of our bodies’ current states. Ethics and protection are fundamental to ensuring that this rich dataset is not exploited to control or harm us.
  • Benevolent. Ultimately, the positive aim of most of the people working in the empathic tech space is to use digital technology to help us humans understand ourselves better. With machines that are able to measure our emotional state and provide an appropriate response at the right time and in the right way, we can achieve a new level of emotional intelligence. Applications could range from saving lives (eg. identifying driver fatigue or predicting suicidal behaviour) to simply making us more content in our daily lives.

Hopefully in the immediate future we will see some or all of the following:

  • Universal emotion modelling — with a single processing engine that is able to infer accurate and meaningful emotional signals from whatever data sources are available at the time (which is the main aim of our products at Sensum).
  • Universal continuity — of measurement and service. If your vehicle uses one empathic system that is siloed away from the systems used by your smartphone or fitness band, you could miss out on the richer benefits that would come from combining these data sources.
  • Exponential machine learning — applied to the data sets that our daily activities create. Given the right instructions, algorithms could uncover new insights about human emotions that we haven’t yet imagined.
  • Explicit and mutually agreed ethical practices — not just to protect users from abuse of their personal data, but also to inform and empower them in their dealings with the organisations they permit to benefit from its use.

While both the science and technology of emotion are still in relatively youthful states, we have a journey of learning and discovery ahead of us. Just as a dear friend can teach you things you never knew about yourself, the emergence of emotionally intelligent machines can shed new light on the mysteries of what it is to be a human.

Footnotes

  1. Of course our emotions don’t just correlate with external signals, but internal ones too. You only have to think about something scary to feel afraid. Your body can tell you you’re hungry without anything external going on.

