Datafying Your Emotions: How Would You Feel About That?

Imagine vividly expressing your feelings on an Instagram post or vlogging about an emotion-heavy topic on YouTube. After making these posts, the platforms begin to give you curated advertisements based on the words you used or the facial expressions in your posts. How would you feel about that?

Emotion recognition technologies, although helpful to some extent, are often, if not always, invasive to technology users, and they are just as often implemented into systems without users' awareness. Without proper regulation, these technologies do more harm than good.

The history of emotion recognition technologies can be traced back to Paul Ekman and Wallace V. Friesen, who in 1975 identified six “basic” and “universal” emotions: anger, disgust, fear, joy, sadness, and surprise. As our understanding of emotion and the technology itself have advanced, systems have begun to use these categories to improve their performance and tailor the content they serve to users. But is this actually helpful for the people using these technologies?

Emotion recognition technologies can come with a few benefits. In “Under the hood: Suicide prevention tools powered by AI,” written by Dan Muriello, Lizzy Donahue, Danny Ben-David, Umut Ozertem, and Reshef Shilon in 2018, the authors discuss how AI tools can detect signals of potential suicide and self-injury in the content social media users post. Similar algorithms have also been used to help children with autism better identify emotions and facial expressions in others, as discussed in a study published in npj Digital Medicine in 2018.

Although emotion recognition technologies can help humans identify these complex psychological states and responses, the tools are more harmful than good. It is no surprise that machine learning algorithms, including emotion recognition algorithms, contain bias. According to Chakraborty, Majumder, and Menzies in a 2021 article on bias in machine learning, the root causes of bias are the prior decisions that control what data is collected and the labels assigned to that data. AI systems would not exist without the people who design and build them, and so bias is implemented directly by their creators from the very first step.
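
To make that point concrete, here is a minimal, purely illustrative sketch in Python. The groups and numbers are invented, not drawn from the cited study; the idea is simply that if annotators label one group's expressions as “angry” more often, any model fit to those labels inherits that skew.

```python
from collections import Counter

# Hypothetical annotated training data: (group, annotator_label).
# The faces are assumed identical across groups; only the labeling
# decisions differ, which is where the bias enters.
training_labels = (
    [("A", "neutral")] * 90 + [("A", "angry")] * 10 +
    [("B", "neutral")] * 60 + [("B", "angry")] * 40
)

# Any model fit to these labels learns a prior that mirrors the
# annotation decisions: P(angry | group B) comes out 4x higher.
for group in ("A", "B"):
    labels = [label for g, label in training_labels if g == group]
    counts = Counter(labels)
    print(group, {label: round(n / len(labels), 2) for label, n in counts.items()})
```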

One of the many forms of bias that emotion recognition technologies possess is age bias. A 2021 research article on age bias found that age is a factor emotion recognition technologies often overlook. Older adults were not considered when these technologies were initially built, and although this demographic is becoming increasingly involved in technology use, they continue to be disregarded by recognition systems. Considering a range of demographic groups is essential for building systems that are inclusive and equitable to people from different backgrounds, yet AI continues to overlook these subgroups.
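
One way researchers surface gaps like this is disaggregated evaluation: computing accuracy separately for each age group instead of reporting one overall number. The sketch below is hypothetical (the records and labels are made up, and this is not the cited study's method), but it shows how an aggregate score can hide a subgroup the system consistently gets wrong.

```python
from collections import defaultdict

# Hypothetical evaluation records: (age_group, true_emotion, predicted_emotion).
records = [
    ("young", "joy", "joy"), ("young", "sadness", "sadness"), ("young", "anger", "anger"),
    ("older", "joy", "neutral"), ("older", "sadness", "sadness"), ("older", "anger", "neutral"),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    hits[group] += int(truth == prediction)

# The overall number looks passable while one group fares far worse.
for group in totals:
    print(f"{group}: accuracy = {hits[group] / totals[group]:.2f}")
print(f"overall: accuracy = {sum(hits.values()) / sum(totals.values()):.2f}")
```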

What many (myself included) find to be the most striking criticism of these technologies is the inherent dehumanization involved in person-to-AI interactions. People are emotional creatures, and most would agree that the intricacies of higher-level experiences such as empathy, joy, and shame cannot be found within a computer program, irrespective of its complexity. What's more, technology cannot accurately identify emotions even from the facial expressions that supposedly indicate them, as described in a study led by Lisa Feldman Barrett.

To further illustrate this point, imagine a hypothetical scenario and try to discern how the average person might react in a world where emotion recognition technologies dictate our lives. Given the upheaval of the traditional office environment over the past two years in favor of a mostly, if not entirely, virtual one, imagine working from home as a website developer. Your company, concerned with worker efficiency away from the office, has installed a few “productivity” programs on your work computer that analyze your facial expressions for signs of distraction or fatigue (say, a nap), log your actions, and penalize you for infractions. You don't know quite the degree of monitoring your company has subjected you to; you are vaguely aware that you are being remotely “supervised,” but you believe that completing your work is enough to keep your company from scrutinizing how else you spend your time during work hours.

Upon implementation of these emotion recognition systems, your normal work ethos is upended. The program cannot differentiate between leisure and necessity, so something as benign as leaving your workspace to take care of, let's say, your dog is marked as skipping work, and taking a few minutes away from programming to scroll through social media brands you as a delinquent. You may feel paralyzed. You become petrified of running afoul of the algorithm: there's no courting it, there's no arguing with it, and you certainly cannot change its mind, as the system does not openly allow appeals of its judgments. It's nothing like the pre-pandemic office, where you could go up to your supervisor and mention something not strictly work-related (say, that you needed to come in a little later tomorrow because you'd had a difficult day, or anything else that comes inherently with being in a workplace). That was, for lack of a better word, “simpler” because of the person-to-person interaction that some argue is far more effective than human-to-computer interaction; you're able to engage with, and be met with empathy by, another person. Under the guise of augmenting productivity, your every move during the workday is now your company's property, and there's nothing you can do about it.
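
For illustration only, the kind of rule such a “productivity” program might apply could be as crude as the hypothetical sketch below (the thresholds, field names, and labels are all invented). The point is that the rule has no notion of context and no appeal path, which is exactly why there's no arguing with it.

```python
from dataclasses import dataclass

@dataclass
class Event:
    minutes_away: int   # time away from the keyboard
    expression: str     # label emitted by the emotion model, e.g. "fatigued"

def judge(event: Event) -> str:
    # Walking the dog, a medical break, and genuine slacking all collapse
    # into the same number; the rule cannot ask why you stepped away.
    if event.minutes_away > 5 or event.expression == "fatigued":
        return "infraction logged"  # and there is no way to appeal it
    return "ok"

print(judge(Event(minutes_away=12, expression="neutral")))   # dog emergency -> infraction logged
print(judge(Event(minutes_away=2, expression="fatigued")))   # one yawn -> infraction logged
```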

Dystopian? Possibly. Unrealistic? Maybe. Impossible? Absolutely not. There is nothing preventing an implementation like the one detailed above from being instituted in workplaces or schools around the world; in fact, one inspiration behind the scenario was the virtual anti-cheating measures already widely used in academic contexts. To the algorithm, human emotions are nothing more than data, and something about that is just wrong. People need to interact with other people. In an article discussing the effects of technology on human interaction, the author describes how technology has caused the end of intimacy and can be destructive to growing children. We need human interaction because, as people, we want to be able to discuss and reason with each other, something that is not possible with an inflexible algorithm.

Another concern, not yet discussed, is true of AI broadly but of emotion recognition AI in particular: false objectivity. Computers carry a connotation of certainty, in the sense that if a result comes out of a program, most people are inclined to accept it. In Kate Crawford's “Artificial Intelligence is Misreading Human Emotion,” Crawford describes how tech companies want users to believe their findings regarding human emotion are accurate. Take a calculator, for a simple example. When was the last time you double-checked that it produced the correct result? Never, because you trust that its creator was knowledgeable and built a device that works, one that doesn't need your supervision to function. A calculator is far removed from the topics discussed here, but the analogy extends. People already rely on technology for areas as sensitive as their own personal health.

With this in mind, revisit the hypothetical business from above, this time from the employer's perspective. The employer is not going to second-guess the expensive software they purchased precisely because it is software, and the predisposition toward trusting the AI over the person is stronger than ever. This echoes the earlier point that you cannot argue with such a program, so whatever comes out of its innumerable lines of code must be treated as gospel. Such programs will lead to a loss of personal autonomy: the people subjected to the AI know there is no discourse to be had to verify its results, and those who deploy it are just as manipulated, conditioned to accept whatever decisions it reaches as unconditionally correct.

From these hypothetical, yet not far-off, examples, we must consider the ways in which humans should be wary of these technologies. Did you know that even Spotify has tried datafying your emotions? More often than not, these systems are embedded in the many social media platforms and technologies we use day to day. Rather than blindly using these devices, take a moment to read literature like this to inform yourself of the harm that emotion recognition algorithms, and algorithms in general, can do to you as an individual.

So, how do you feel about all this?

REFERENCES

Barrett, Lisa Feldman, et al. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest 20.1: 1–68.

Chakraborty, Joymallya, Suvodeep Majumder, and Tim Menzies. (2021). Bias in machine learning software: why? how? what to do? Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

Crawford, Kate (2021). Artificial Intelligence is Misreading Human Emotion. The Atlantic.

Daniels, Jena, et al. (2018). Exploratory study examining the at-home feasibility of a wearable tool for social-affective learning in children with autism. npj Digital Medicine 1.1: 1–10.

Ekman, P. and Friesen, W. V. (1975). Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Prentice Hall.

Kim, Eugenia, et al. (2021). Age bias in emotion detection: an analysis of facial emotion recognition performance on young, middle-aged, and older adults. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.

Muriello, Dan, et al. (2018). Under the hood: Suicide prevention tools powered by AI. Engineering at Meta.

Tanner, Adam (2021). Can Technology Read Your Emotions? Consumer Reports.

Tian, Shuo, et al. (2019). Smart healthcare: making medical care more intelligent. Global Health Journal 3.3: 62–65.

Wardynski, DJ. (2019). What Are the Effects of Technology on Human Interaction? Brainspire.

--

Shanley Corvite
SI 410: Ethics and Information Technology

PhD student at the University of Michigan School of Information studying the role of social media in career development👩🏻‍💻