Accuracy of software for facial emotion expression detection

Martina Baránková
Published in EmotionID
Jun 18, 2017

Emotion ID software works on the principles of machine learning and face recognition. When the software recognizes a face, it uses appearance-based and geometric approaches to compute probability scores for facial expressions of the basic emotions (anger, disgust, fear, happiness, sadness, surprise) and a neutral state, on a scale from 0 to 1.

To reach high accuracy in emotion detection, we have to go through the following steps:

  1. Collect frames with facial expressions of emotions — to teach the software what a unique emotional expression looks like, we need hundreds of frames with these expressions. The facial expressions are elicited by emotional stimuli that trigger a targeted emotion.
  2. Label these frames with the corresponding emotions — for this we need human coders. These hundreds of frames contain emotional expressions but also some noise, which is why human coders are needed. First, they select the frames that contain an emotional expression. Second, they label the expression on each frame with an emotion. Agreement between labelers on the emotional expression is the most important part of this process. One coder is not enough because, despite training, he or she may carry a set of biases. The higher the agreement among coders required to create a label, the more accurate the label. Of course, it is important to weigh the time required for labeling, and hence the number of labelers, because it is a very time-consuming activity.
  3. “Teach” the software what concrete emotions look like — once we have enough labels (frames on which, for example, 3 out of 4 coders agreed on the emotional expression) in each category (basic emotion), we can train the software to recognize these facial states. The software calculates geometric and appearance features on each frame and creates an N-dimensional feature space for each category.
  4. Validate the software on a research database of human facial expressions of basic emotions — nowadays there are numerous publicly available databases of emotional facial expressions for research purposes that are broadly accepted by the research community. These databases are usually validated on lay participants who agreed that particular expressions are signs of a concrete basic emotion. Examples include KDEF, RaFD, CK/CK+, and MMI. We tested the Emotion ID software on such a validated dataset of emotional facial expressions in order to validate its functionality and compare it with other available solutions.
  5. We have the accuracy of the software's detection of basic emotions! — by comparing the results of the Emotion ID software with the validated labels, we obtain a matching score: the accuracy of the software. The accuracy of detection is derived from the ROC (receiver operating characteristic), the curve that plots the true positive rate against the false positive rate. The ROC results of Emotion ID are as follows: anger 99.5%; disgust 99.9%; fear 98.3%; happiness 100%; sadness 97.7%; surprise 99.8%; neutral 97.5%.
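The coder-agreement rule from step 2 (keep a label only when, say, 3 out of 4 coders agree) can be sketched roughly as follows; the frames and coder votes here are made up for illustration:

```python
from collections import Counter

def majority_label(codes, min_agreement=3):
    """Return the emotion label if at least `min_agreement` coders
    assigned it to the frame, otherwise None (the frame is discarded)."""
    label, votes = Counter(codes).most_common(1)[0]
    return label if votes >= min_agreement else None

# Four hypothetical coders labeling two frames
frame_a = ["anger", "anger", "anger", "disgust"]    # 3 of 4 agree
frame_b = ["fear", "surprise", "fear", "surprise"]  # no 3-of-4 majority

print(majority_label(frame_a))  # anger
print(majority_label(frame_b))  # None
```

Raising `min_agreement` makes the labels more reliable but discards more frames, which is exactly the trade-off between label accuracy and labeling time described above.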
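To give an intuition for step 3, one simple way to classify a frame in an N-dimensional feature space is to assign it to the category whose training centroid is nearest. This is only a toy sketch with invented 2-D features, not the actual Emotion ID model:

```python
import math

def centroid(vectors):
    """Mean point of a set of feature vectors (one vector per frame)."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def classify(features, centroids):
    """Assign the emotion whose category centroid is nearest
    in the feature space (Euclidean distance)."""
    return min(centroids, key=lambda emo: math.dist(features, centroids[emo]))

# Made-up 2-D geometric features (e.g. mouth curvature, brow height)
training = {
    "happiness": [[0.9, 0.1], [0.8, 0.2]],
    "sadness":   [[0.1, 0.8], [0.2, 0.9]],
}
centroids = {emo: centroid(frames) for emo, frames in training.items()}
print(classify([0.85, 0.15], centroids))  # happiness
```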

The steps described above are a simplified view of the validation process of the Emotion ID software for facial emotion detection and of how we reach the accuracy numbers. The whole process is much more detailed and highly time-consuming. For more information, contact support@emotionid.ai.
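As a rough illustration of how the ROC in step 5 is computed, the area under the ROC curve can be read as the probability that a randomly chosen positive frame receives a higher score than a randomly chosen negative one (the rank-sum formulation). The validation labels and scores below are made up:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve for binary labels (1 = target emotion
    validated as present) and the model's probability scores, computed
    via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation data for one emotion category
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
print(roc_auc(labels, scores))  # 1.0 — perfect separation
```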


Martina Baránková
Chief Science Officer at Emotion ID, PhD candidate in Applied Psychology at Comenius University in Bratislava