Algorithms Can’t Accurately Interpret Human Emotion and That’s Probably a Good Thing
Imagine living in a world where machines can read our minds and emotions; where your thoughts and your feelings are connected, shared, stored and controlled; a world where you become as transparent as air.
If that doesn’t alarm you, you’re not thinking hard enough about the consequences: say goodbye to lying to your mother-in-law, no more flirting with that cute bartender, no kissing up to get that promotion, and no more poker face. For those of us who value privacy, the only hope will be to imitate the blank page.
The bad news first: you already live in a world where the likes of Amazon and Facebook are frantically trying to decode your thoughts and feelings, whilst Walmart studies your facial expressions to match products to your mood. Are we sad today? The wine’s in aisle nine.
This year, the world’s largest social network bolstered its efforts to push a technology that lets users think the words they want to type. That’s great news for people who are unable to communicate or who can’t use their hands to type, but it presents uncharted territory in terms of data abuse. “To me, the brain is the one safe place for freedom of thought, of fantasies, and for dissent,” said Nita Farahany, a professor of law and philosophy at Duke University. “We’re getting close to crossing the final frontier of privacy in the absence of any protection whatsoever.”
Amazon is less interested in reading your thoughts than it is in studying your face for signs of happiness, sadness, anger, disgust, calmness, confusion, surprise, and fear. Its Rekognition technology lets advertisers categorize faces by their emotional expression and demographics. But it has come under fire for falsely matching 20% of California legislators to mugshots and for giving marketers the option to discriminate against certain demographics.
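To make concrete what “categorizing faces by emotional expression” looks like in practice, here is a minimal sketch of how a caller might rank the emotion labels that a face-analysis service returns. The dictionary below is fabricated for illustration; its shape is modeled on the documented response of Rekognition’s `DetectFaces` call, where each detected face carries a list of emotion labels with confidence scores.

```python
def top_emotion(face_detail):
    """Return the highest-confidence emotion label from one face record."""
    emotions = face_detail["Emotions"]
    best = max(emotions, key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

# Fabricated sample in the documented response shape (not real API output).
sample_response = {
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 71.4},
                {"Type": "CALM", "Confidence": 20.1},
                {"Type": "CONFUSED", "Confidence": 8.5},
            ]
        }
    ]
}

for face in sample_response["FaceDetails"]:
    label, confidence = top_emotion(face)
    print(label, confidence)  # the label a marketer would act on
```

Note that the service reports a confidence for *every* label; picking the top one discards exactly the ambiguity that the researchers quoted below say matters most.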
Tesla’s Elon Musk has been vocal about the urgency of regulating artificial intelligence (AI), calling it “a fundamental risk to the existence of human civilization”. But regulatory concepts are far from finalized, because exactly what AI entails is not well defined. The potential for misuse of such technologies is huge, from manipulation to discrimination. When our faces become data, perhaps we should be worried.
But here’s some good news: machines aren’t great at reading our emotions — at least not yet. According to new research by the Institute for Creative Technologies at the University of Southern California, algorithms designed to predict intentions from facial expressions aren’t doing a good job. “Both people and so-called ‘emotion reading’ algorithms rely on a folk wisdom that our emotions are written on our face,” says Jonathan Gratch, professor of computer science at the USC Viterbi School of Engineering. “This is far from the truth. People smile when they are angry or upset, they mask their true feelings, and many expressions have nothing to do with inner feelings but reflect conversational or cultural conventions.” Your white lies may be safe for now.
Government agencies are particularly interested in reading facial expressions to detect threats. The technologies are also of interest to employers to study candidates during job interviews. Unilever has been using HireVue’s technology to screen body language and mood in candidates to gauge confidence and find out if applicants lied to get a job. But linking our expressions to our true feelings is much harder than it seems. “We’re using naïve assumptions about these techniques because there’s no association between expressions and what people are really feeling based on these tests,” adds Gratch.
Take recent developments at the University of Colorado and Duke University where a team developed a neural network that can classify selfies according to 11 emotions. Dubbed EmoNet, the model has been tested on more than 24,000 images and 400 videos. The scientists found that emotional categories such as craving, entrancement, sexual desire, and horror were most accurately classified. But EmoNet wasn’t quite as adept at deciphering awe, confusion or surprise. If the best we can do is to get broad emotional categories right, is it fair that companies employ such premature technologies?
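The final step of a classifier like EmoNet can be pictured as a linear layer followed by a softmax, turning image features into a probability distribution over emotion categories. The sketch below is purely illustrative: the features and weights are random placeholders, and the category list is a subset of EmoNet’s published labels, not the actual model.

```python
import math
import random

# Subset of EmoNet's emotion categories, for illustration only.
EMOTIONS = ["craving", "entrancement", "sexual desire", "horror",
            "awe", "confusion", "surprise"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
# Stand-in for the feature vector a CNN would extract from a selfie.
features = [random.gauss(0, 1) for _ in range(64)]
# Placeholder weights; a real model learns these from labeled images.
weights = [[random.gauss(0, 0.1) for _ in range(64)] for _ in EMOTIONS]

logits = [sum(w * f for w, f in zip(row, features)) for row in weights]
probs = softmax(logits)

best = max(range(len(EMOTIONS)), key=lambda i: probs[i])
print(EMOTIONS[best], round(probs[best], 3))
```

The model’s “answer” is just the largest of several probabilities, which helps explain the study’s finding: categories with distinctive visual signatures come out on top reliably, while subtler states like awe or surprise sit close together in the distribution and get confused.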
For Lisa Feldman Barrett at Northeastern University, the human emotional landscape is too diverse to accurately squeeze into an algorithm. “People scowl when angry, on average, approximately 25 percent of the time, but they move their faces in other meaningful ways when angry,” she explains. “They might cry, or smile, or widen their eyes and gasp. And they also scowl when not angry, such as when they are concentrating or when they have a stomach ache. Similarly, most smiles don’t imply that a person is happy, and most of the time people who are happy do something other than smile.”
And inaccurate matching of emotions to facial expressions has serious consequences. “This information could be used in ways that stop people from getting jobs or shape how they are treated and assessed at school, and if the analysis isn’t extremely accurate, that’s a concrete material harm,” Barrett adds.
Outcomes from such studies aren’t necessarily deterring investment. Emotion detection has become a $20 billion industry, and it is widely believed that by capturing the facial expressions of billions of people, algorithms can be trained to accurately detect feelings. But the existing models are rudimentary at best — often focusing on core emotions such as anger, happiness, sadness or disgust. And it’s not just our facial expressions that capture our mood. There’s body language, vocal expression, sweating, and a changing heart rate. Emotions are a “product of human agreement”, Barrett argues. It takes humans to decipher human emotion accurately.
That’s not to say algorithms can’t be improved. Feed machines enough data and their predictions may well sharpen. But are the classifications we’re training the algorithms on too narrow? Are we too focused on stereotypes rather than the subtle nuances of human emotion? It may just be a matter of time before facial-expression algorithms see enough data to predict our feelings accurately. Even then, we’ll be able to escape detection; there’s always pretending. Ring in the evolution of our emotions.
Ultimately, it is not the technologies themselves that pose the biggest threat, but the people who use them.
Illustration by Anne Freier.