Ethics of Emotion AI — Part 1

Ross Harper
Jun 19, 2019 · 7 min read

Ethics. AI. These aren’t just fodder for late-night musings. Machine learning is fuelling a fourth industrial revolution. Put that fuel in an engine, and we can power great change. Leave it to spread unchecked, and all it takes is one spark for everything to go up in flames. Fuel needs direction; AI needs ethics.

This blog series was inspired by a panel on which I sat at the CogX conference. There, we discussed the ethical implications of a rapidly-growing subcategory of AI: ‘Emotion AI’ or ‘Affective Computing’. This involves using machine learning for the automatic prediction of human emotion. Sound potentially unethical? Good. Let’s discuss.


This post will cover:

- Where is Emotion AI Today?
- Unethical Applications of Emotion AI
- Ethical Applications of Emotion AI
- Protecting the User

Where is Emotion AI Today?

Pretty far along, actually. It’s been going for around 25 years in its modern form, kick-started by a seminal paper from Rosalind Picard.

Facial Analysis

Computer vision is probably the most common method of emotion recognition. Deep neural networks (especially convolutional ones) are particularly good at tracking facial landmarks, allowing them to distinguish between a smile and a frown. Some companies claim their algorithms can recognise a suspiciously wide array of different emotions, but the usual suspects tend to be anger, contempt, disgust, fear, joy, sadness and surprise.
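
To make the pipeline concrete, here is a minimal sketch of the kind of model involved: a tiny convolutional network, written in PyTorch purely for illustration, that maps a cropped face image to scores over the seven ‘usual suspect’ emotions. The architecture, input size and class list are assumptions made for this example, not any vendor’s actual system.

```python
# A minimal, illustrative CNN (not any vendor's production model): it maps a
# 48x48 grayscale face crop to scores over the seven "usual suspect" emotions.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

class TinyEmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: one detected, cropped face -> a probability over the seven emotions.
model = TinyEmotionCNN()
face = torch.randn(1, 1, 48, 48)  # stand-in for a real face crop
probs = torch.softmax(model(face), dim=1)
print(dict(zip(EMOTIONS, probs[0].tolist())))
```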

Mystery solved. The Mona Lisa is neither happy nor sad. She’s… neutral. (Photo credit: Realeyes).

Speech Analysis

Natural language processing is another popular weapon in the affective computing arsenal. Naturally, the words we use can be predictive of our emotional state. However, tone, rhythm, and intonation all help disentangle true emotion from more ambiguous phrasing, such as sarcasm.
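
As a toy illustration of why prosody matters, the sketch below combines a crude keyword-based sentiment score with a few acoustic features (pitch variability, loudness); positive words delivered in a flat voice get flagged as possible sarcasm. The features, thresholds and helper names are assumptions invented for the sketch, not a real speech-emotion system.

```python
# A toy illustration of why prosody matters. The sentiment keywords, acoustic
# features and thresholds below are all assumptions made up for this sketch.
from dataclasses import dataclass

@dataclass
class Prosody:
    mean_pitch_hz: float  # average fundamental frequency
    pitch_std_hz: float   # intonation variability
    energy: float         # loudness, normalised to 0..1

def text_sentiment(text: str) -> float:
    """Stand-in for a real NLP model: a crude keyword score in [-1, 1]."""
    positive = {"great", "love", "fantastic", "wonderful"}
    negative = {"terrible", "hate", "awful", "worst"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / 3))

def interpret(text: str, prosody: Prosody) -> str:
    sentiment = text_sentiment(text)
    flat_delivery = prosody.pitch_std_hz < 15 and prosody.energy < 0.3
    if sentiment > 0.3 and flat_delivery:
        return "possible sarcasm"  # positive words, deadpan delivery
    return "positive" if sentiment > 0 else "negative or neutral"

print(interpret("oh great, another meeting", Prosody(110.0, 8.0, 0.2)))  # possible sarcasm
```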

Physiological Analysis

Comparatively little research has focussed on the prediction of emotion from physiological signals. This is perhaps unsurprising — humans, too, rely on their vision and hearing to recognise the emotions of other people. Artificial systems, however, need not be similarly constrained.

A number of labs have shown it is possible to predict emotional states from a combination of physiological signals (heartbeat, galvanic skin response, skin temperature, respiration, electroencephalogram, etc.). My company, Limbic, recently went one step further by predicting emotion using only the optical heart sensor in a consumer fitness tracker.
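
To give a rough feel for the physiological route, the sketch below summarises a window of inter-beat (RR) intervals into standard heart-rate-variability features and feeds them to an off-the-shelf classifier trained on synthetic calm/stressed data. Everything here (features, labels, data) is illustrative only and is not a description of Limbic’s actual method.

```python
# An illustrative sketch of the physiological route, not Limbic's method:
# summarise a window of inter-beat (RR) intervals into heart-rate-variability
# features and train an off-the-shelf classifier on synthetic calm/stressed data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """Mean heart rate, SDNN and RMSSD from RR intervals in milliseconds."""
    mean_hr = 60_000.0 / rr_ms.mean()
    sdnn = rr_ms.std()
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
    return np.array([mean_hr, sdnn, rmssd])

rng = np.random.default_rng(0)
calm = [hrv_features(rng.normal(900, 60, 120)) for _ in range(50)]      # slower, more variable beats
stressed = [hrv_features(rng.normal(650, 25, 120)) for _ in range(50)]  # faster, less variable beats
X = np.vstack(calm + stressed)
y = np.array([0] * 50 + [1] * 50)  # 0 = calm, 1 = stressed

clf = LogisticRegression().fit(X, y)
new_window = hrv_features(rng.normal(700, 30, 120))
print("p(stressed) =", clf.predict_proba(new_window.reshape(1, -1))[0, 1])
```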

Similarities between NLP- and physiology-based emotion recognition.

Unethical Applications of Emotion AI

This being a blog post about the ethics of emotion AI, let’s start with some of the obvious misuses of this technology.

Advertising

If you think emotion AI and advertising are a natural fit, you’re not alone. Affectiva — a company founded by Rosalind Picard and Rana el Kaliouby — recently gained unicorn status (valued at over $1B) using emotion AI for market analytics. Inevitably, other companies are now following suit. It’s a bit strong to brand these start-ups ‘unethical’; however, they are definitely profiting from our emotions. And the advertising industry isn’t known for its strong moral compass. It already manipulates consumer emotions; just not yet in a closed loop. The mind shudders at the prospect of emotionally intelligent ads aimed at vulnerable buyers. “Didn’t get that promotion? Why not purchase some overpriced vodka?”

The big tech companies are also getting in on the action. In a recent patent (US20180167677), Facebook describe an invention that performs emotion recognition while users watch adverts on their mobile. Amazon have patented emotion AI for Alexa (US20180182380). Samsung, Huawei, Microsoft, Google and Apple are all working on similar ideas. Pandora’s box, it would appear, is open.

Cambridge Analytica 2.0

If you can sell consumer goods, why not whole ideologies? Unethical application of emotion AI is a threat to our own human agency. Real-time feedback on public opinion/feeling/attitude could usher in a new form of dangerous political propaganda.

Police States

How might authoritarian governments leverage emotion AI? Saudi Arabia, for example, already uses websites like Twitter to quash civil liberties. The recent case of Jamal Khashoggi illustrates how that particular regime responds to verbal criticism. In a country that widely implements capital punishment, one can only imagine what ghoulish fate awaits those who feel the wrong way. China too, with over 170 million CCTV cameras, already has the infrastructure for extreme violations of privacy. For a country that implements facial recognition and a social credit score system, emotion AI might not feel like such a culture shock.

Emotional Discrimination

We are the sum total of our experiences. Our emotions therefore define key aspects of who we are. What happens when we are judged by this new standard?

With a firm nod to the debate around genetic sequencing, emotion AI presents an opportunity for discrimination based on our emotional makeup. Health insurers could adjust premiums according to our propensity for depression. A number of insurers already offer financial incentives to customers who are physically fit — might we one day be penalised for lowered emotional fitness? Or refused a job on similar grounds? It doesn’t seem fair, but if the data exists, we must acknowledge the potential for it to be used in this way. Perhaps we can take inspiration from the genetics industry on how to avoid discrimination moving forward.

How else might emotion AI be used unethically? Please share your thoughts in the comments below.

Ethical Applications of Emotion AI

It’s not all doom and gloom. Like any technology, AI is a tool. It’s neither good nor bad. Humans still have the monopoly on such things. Ethics, then, applies only to how humans choose to use emotion AI. And there are some fundamentally good ways of doing this.

Emotional Prosthetics

Emotion is a critical component of human behaviour, communication and decision-making. It’s therefore no surprise that emotional intelligence is correlated with success in social settings. Unfortunately, not everyone finds emotion easy to interpret. People living with autism spectrum disorder (ASD) can struggle to recognise the feelings of others, leading to serious challenges in everyday life.

Emotion AI can act as an ‘emotional hearing aid’, helping people with ASD operate in tough social situations (e.g. school or the workplace). The technology was recently packaged into smart spectacles to help children with ASD improve their social skills. Moreover, Professor Maja Pantic — my co-panelist at the conference that inspired this post — is using emotionally intelligent robots to help children with ASD better understand emotions. Heartfelt stories of the families whose lives have improved should go some way towards tempering the critics of emotion AI.

Emotion AI-powered robot teachers and smart glasses. Photo credit: Bruce Adams and Stanford University.

Treating Mental Illness

Depression and anxiety cost the global economy $1 trillion annually. 1 in 4 people will experience a mental health issue this year. And according to the World Health Organisation, close to 800,000 people take their own lives every year. This is a massive problem that needs solving.

However, unlike other areas of medicine, mental health relies heavily on qualitative data. Psychology doesn’t really have the equivalent of an X-ray machine. This can lead to inefficiencies in therapy and make it hard to evaluate treatment response. Emotion AI has the potential to bring quantifiable metrics to the space, leading to better patient outcomes.

I should disclose that I am a little biased. Bringing better data to mental health is the mission of my company. We’re using optical heart data from consumer fitness trackers to measure emotion in patients undergoing therapy. We then smart-prompt patients to journal their mood during relevant moments of their day, making it easier for them to communicate valuable information to their therapist. We plan to extend our system to diagnostics and crisis prediction. Watch this space.

How else might emotion AI be used ethically? Please share your thoughts in the comments below.

Protecting the User

Given the risks around emotion AI, there are strong arguments against further research in this field. However, the technology exists, and is a natural extension of machine learning more generally. Even if we were to criminalise it, computer science is notoriously hard to police. Furthermore, being ‘bad’ isn’t so different from obstructing ‘good’. To deny the potential benefits of emotion AI would itself be unethical. Instead, we must build safeguards to protect the end user.

Edge Computing

Both the inputs and outputs of emotion AI involve sensitive data. How can we give people the benefits of this technology while keeping their data secure? One promising way is through edge computing. Advances in computer hardware make it possible to deploy machine learning models directly on smartphones and PCs (a.k.a. the ‘edge’). This contrasts with the typical model of a centralised cloud server crunching the numbers. Edge computing means that sensitive data need never leave the user’s device, making it harder to exploit for malicious purposes. Services like Apple’s Face ID authentication use this concept to ensure the privacy of facial data.
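
A minimal sketch of the idea: the raw signal is processed by a model running on the device itself, and at most a coarse, derived label ever leaves it. The tiny in-process model below stands in for a real model shipped with an app; it is an illustration of the pattern, not Apple’s or anyone else’s actual implementation.

```python
# A minimal sketch of on-device ("edge") inference. The tiny in-process model
# stands in for a real model shipped with an app; nothing here is a real
# product's implementation.
import torch
import torch.nn as nn

on_device_model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 7))  # stand-in model
on_device_model.eval()

raw_face = torch.randn(1, 1, 48, 48)  # sensitive input: stays on the device

with torch.no_grad():
    label = int(on_device_model(raw_face).argmax())

# A cloud pipeline would upload `raw_face`; the edge version shares at most this:
payload = {"emotion_label": label}
print(payload)
```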

Regulation

Barring explicit criminal activity, regulations are usually a good way to keep companies in check. Ethics come into play here. What do we as a community find acceptable? What is unacceptable? It’s then up to policymakers to incorporate public opinion into law.

However, legislation is only as good as the legislators. The emotion AI community has a responsibility to educate government on technical developments in the field. Likewise, government has a responsibility to source expert opinion and act on it.


The risks are certainly great. However, it feels wrong to stand in the way of progress. As ever, an active dialogue is key. What risks are worth taking? Where medicine says ‘yes’ but history warns ‘no’, which is the right way to jump? With public engagement, and responsible stewardship by the expert community, I believe emotion AI could be a force for good in the world. Let’s stay on top of it.

Photo credit: Stanford University.
