Facial expressions
“Colors, like features, follow the changes of the emotions.” Pablo Picasso
Facial expressions are perhaps the most direct and prominent way human beings regulate their interactions with each other. From a very young age humans learn to understand and categorise these expressions; researchers in this field, however, require more objective ways to describe them. The Facial Action Coding System (FACS) is the best known and most commonly used sign-judgement approach for human observers to describe facial actions.
This post covers methods and best practices in facial expression measurement, and shares insights on how the dynamics of facial expressions, and the combinations in which they occur, might advance this science in the near future.
6 basic emotions
For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman’s most influential work revolved around the finding that certain emotions appeared to be universally recognised, even in preliterate cultures that could not have learned associations for facial expressions through media. Another classic study found that when participants contorted their facial muscles into distinct facial expressions (e.g. disgust), they reported subjective and physiological experiences that matched those expressions. His research findings led him to classify six emotions as basic: fear, anger, disgust, happiness, sadness, and surprise.
21 emotions… and more

In a recent issue of the Proceedings of the National Academy of Sciences, scientists report that they were able to more than triple the number of documented facial expressions that researchers can now use for cognitive analysis.
“We’ve gone beyond facial expressions for simple emotions like ‘happy’ or ‘sad.’ We found a strong consistency in how people move their facial muscles to express 21 categories of emotions,” said Aleix Martinez, a cognitive scientist and associate professor of electrical and computer engineering at Ohio State. “That is simply stunning. That tells us that these 21 emotions are expressed in the same way by nearly everyone, at least in our culture.”
Okay, so what is next and why does it matter?
So you are probably wondering: does it matter whether there are 6 shared facial expressions or 21? Well, yes. The greater the degree of commonality between people, the greater the insight that can be gathered by building a model with one group of people and applying it elsewhere. As humans we have an intuitive appreciation that there should be some commonality: we are ourselves able to recognise when people are happy, enthusiastic, or tired simply by watching them. If this were not the case, we wouldn’t have actors winning awards for portraying characters whose emotions we agree upon. And thanks to recent advances in facial analysis technology, we should now be able to detect things we could not before.
CrowdEmotion’s unique detectors
1. Facial appearance, not points
Past: Up until now, the majority of companies have focussed on points on the face (eyes, corners of the mouth, etc.) and on how these move when people make facial expressions. This method necessarily throws away a lot of information and risks losing many subtle facial changes.
Future: We don’t place points on your face; we consider appearance features, with the aim of capturing changes in face texture, such as those created by wrinkles and bulges, as well as changes caused by facial motion.
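To make the distinction concrete, here is a minimal sketch of appearance-based description, assuming a grayscale face crop from a face detector. Local Binary Patterns (LBP) via scikit-image are used as a stand-in texture descriptor; this is an illustrative choice, not CrowdEmotion’s actual pipeline.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def texture_features(face, n_points=8, radius=1, grid=(4, 4)):
    """Describe a face crop by LBP histograms over a grid of cells,
    so local texture changes (wrinkles, bulges) are captured
    everywhere, not just at a few landmark points."""
    lbp = local_binary_pattern(face, n_points, radius, method="uniform")
    n_bins = n_points + 2  # "uniform" LBP produces n_points + 2 codes
    features = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=n_bins,
                                   range=(0, n_bins), density=True)
            features.append(hist)
    return np.concatenate(features)  # one fixed-length vector per face
```

A landmark-based method would reduce the same crop to a few dozen (x, y) coordinates, discarding exactly the texture information this vector retains.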
2. Non-typical head poses, positions, and expressions
Past: The origins of emotion recognition lie in the laboratory, where researchers would manually annotate records of the expressions they observed. When the work described above was conducted to examine the facial movements associated with emotions, these could be seen from certain video angles in a controlled environment. In real-world situations, however, the environment and the angles of observation often differ hugely.
Future: We are fortunate to work with a technology that, as described above, looks at facial appearance, and specifically at the prominent edges of a face, rather than at facial points. This means that even when individual points are obscured or appear different, our system is able to learn the characteristics of a given emotion in that different environment, making it applicable in the real world.
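One hedged sketch of what edge-based description can look like: histograms of oriented gradients (HOG) summarise the prominent edges in a face crop, and training on views of the same expression under several rotations lets a classifier learn its characteristics across poses. HOG and the in-plane rotations below are illustrative assumptions, not a description of CrowdEmotion’s system.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import rotate

def edge_features(face):
    """Summarise a face crop by its edge orientations (HOG)."""
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def multi_pose_features(face, angles=(-20, -10, 0, 10, 20)):
    """Feature vectors for several in-plane rotations of one face,
    so a downstream classifier sees the same expression from
    different viewpoints during training."""
    return np.stack([edge_features(rotate(face, a, preserve_range=True))
                     for a in angles])
```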
3. Dynamics of facial expressions
Besides the configuration of facial expressions, their dynamics play an important role in the interpretation of human facial behaviour.
Past: Typically, companies have not focussed on the dynamics of expressions, instead recognising only the points in time at which an expression is seen.
Future: An increasing body of research warns against ignoring these dynamics. Psychologists have found differences in duration and smoothness between spontaneous and deliberate expressions, e.g. between polite and amused smiles, and facial expression dynamics are seen as essential for categorising complex mental states such as pain and moods.
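A minimal sketch of what “dynamics” means in practice, assuming we already have a per-frame smile-intensity signal in [0, 1] from some detector. Duration and smoothness are the cues the research above points to; the signals, frame rate, and onset threshold here are illustrative assumptions, not calibrated values.

```python
import numpy as np

def smile_dynamics(intensity, fps=25.0, onset=0.5):
    """Duration (seconds above threshold) and jerkiness (mean squared
    second difference; lower means smoother) of a smile episode."""
    intensity = np.asarray(intensity, dtype=float)
    duration = (intensity > onset).sum() / fps
    jerkiness = float(np.mean(np.diff(intensity, n=2) ** 2))
    return duration, jerkiness

# Toy signals, 3 seconds at 25 fps: a spontaneous (amused) smile tends
# to be longer and smoother than a brief, abrupt polite smile.
t = np.linspace(0, np.pi, 75)
amused = np.sin(t)                      # slow, smooth arc
polite = np.clip(np.sin(4 * t), 0, 1)   # short, abrupt bursts
print(smile_dynamics(amused))   # longer duration, lower jerkiness
print(smile_dynamics(polite))   # shorter duration, higher jerkiness
```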
4. Combinations of facial expressions
Past: Previously, this field has focussed on the accurate detection of discrete emotions such as ‘happiness’ and ‘sadness’. Yet humans do not experience emotions in isolation; often what is important is the emotional journey, the combination and sequence of emotions that defines an emotional state.
Future: We analyse these combinations of emotions and the sequence in which they occur to understand more subtle emotional states.
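As a sketch, suppose a detector outputs a vector of emotion probabilities per video frame. The helpers below collapse consecutive frames into a “journey” of co-active emotion sets; the label set and the 0.3 activation threshold are illustrative assumptions.

```python
from collections import Counter

LABELS = ["fear", "anger", "disgust", "happiness", "sadness", "surprise"]

def active_mix(frame, threshold=0.3):
    """The set of emotions co-active in one frame of probabilities."""
    return frozenset(l for l, p in zip(LABELS, frame) if p > threshold)

def emotion_journey(prob_frames, threshold=0.3):
    """Sequence of distinct emotion mixes over time, collapsing runs
    of consecutive frames that share the same mix."""
    journey = []
    for frame in prob_frames:
        mix = active_mix(frame, threshold)
        if not journey or journey[-1] != mix:
            journey.append(mix)
    return journey

def common_mixes(prob_frames, threshold=0.3):
    """How often each combination of emotions occurs across frames."""
    return Counter(active_mix(f, threshold) for f in prob_frames)
```

A compound state such as “happily surprised” then appears naturally as the mix {happiness, surprise}, which a single-label detector would have to flatten into one or the other.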
Emotion applied
With these powerful detectors, we can harness our computing technologies to develop innovative and, in some ways, revolutionary applications.
Taking into account the importance of understanding implicit communication and emotion, we can develop machine learning algorithms that learn the emotional mix associated with a particular outcome, for example advertising effectiveness, attention, brand awareness, or pricing.
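As a hedged sketch of that last step, a simple supervised model can relate aggregated emotion intensities to an outcome. Everything below, scikit-learn’s logistic regression, the per-viewer mean intensities, and the synthetic brand-recall labels, is an illustrative assumption rather than a description of a production system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["fear", "anger", "disgust", "happiness", "sadness", "surprise"]

rng = np.random.default_rng(0)
# X: one row per viewer -- mean intensity of each emotion while
# watching an advert. y: did the viewer later recall the brand?
# (Synthetic data standing in for real measurements.)
X = rng.random((200, len(LABELS)))
y = (X[:, 3] + 0.5 * X[:, 5]                  # happiness + surprise
     + 0.2 * rng.standard_normal(200) > 0.9).astype(int)

model = LogisticRegression().fit(X, y)

# The coefficients indicate which parts of the emotional mix are
# associated with the outcome (here, brand recall).
for label, coef in zip(LABELS, model.coef_[0]):
    print(f"{label}: {coef:+.2f}")
```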
The applications are endless.