Susan Etlinger, speaking at TOA Berlin 2016

How computers can see emotions, brands — and the flaws that make us human

TOA.life Editorial
Published in TOA.life
5 min read · Jan 20, 2017


  • Leading tech industry analyst Susan Etlinger explains that computer vision AI does more than scan images — it uncovers our deepest prejudices.
  • A computer that “sees” an image of Kim Kardashian will automatically compare her to images of an hourglass, and label her “erotic.”
  • AI lets us examine images at a scale we never could before — but it also replicates human biases, and implements them far more efficiently.

Susan Etlinger is an analyst for Altimeter who studies emerging technologies. She’s been focussing on computer vision and recently published Image Intelligence — a report drawn from interviews with startups, brands, industry experts, and academics.

In an age when we share 3.25 billion photos a day on Snapchat and Facebook’s platforms alone, Susan sought to answer difficult questions about images, their uses and how they are analysed in a machine-first era.

Susan’s research is a fascinating and surprising journey, laying bare the power of the image. She discusses how we are now using images in novel ways, how machines can read our emotions, and how this is making us reconsider our social behaviour and our business practice.

Here are her most fascinating take-aways — and her whole wide-ranging talk can be viewed below.

We make and share more images now than ever before in history — but images have always had incredible power.

“Single images have changed markets. What’s notable about this photograph of John F. Kennedy at his inauguration is not what’s in it, but what’s not in it: he’s not wearing a hat.

“The complete and utter fall of the hat market in the early 60s is generally attributed to the fact that he wasn’t wearing a hat. Men stopped wearing hats.”

The fastest growing language in the United Kingdom is emoji.

Images are so ingrained in our lives that they have become true languages in their own right — and you’re already using one.

According to a 2015 study, the fastest growing language in the United Kingdom is… emoji. Susan thinks that this is inevitable in a world full of images:

“Emoji are a very economical way of expressing emotion. In this study researchers found that teenagers actually prefer to use emoji to convey deep feelings.

“They’re more comfortable using emoji than they are using ‘language’, and intuitively, that makes sense.”

So what does this novel interpretation of imagery mean for businesses and their all-important branding?

There is a problem for marketers when brand logos are everywhere: everyone sees them, but no one can measure how people actually react to them.

Susan said: “In the conversations that I had with people who work in marketing or branding, they said that there are lots of images in media and online that include their brand — but at scale it’s really impossible to understand what that means.”

“So, the brand is evident in a photograph, but it’s not actually being mentioned by name. For instance: you wouldn’t say ‘Oh look, here’s Ana Ivanovic at the U.S. Open, standing in front of an Olympus logo!’ That’s not really the way that people communicate.”

This is where computer vision comes in: the technology can read photographs and identify where a brand’s logo appears — in the media or on your social feed.

According to Susan, work is already being done to apply this technology: “All of the companies that I spoke with for this research are trying to figure out some sort of monetary value of the work that they’re doing with images.

“They can take this data and create image intelligence — to develop predictive models and act on emerging trends.”
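Susan’s report doesn’t name any specific tooling, so what follows is only a minimal sketch, in Python, of what a brand-focused “image intelligence” pipeline could look like: run a logo classifier over a feed of photos and count how often each brand shows up. The checkpoint path, label list and confidence threshold are all hypothetical.

```python
# Hypothetical sketch: tally brand-logo appearances across a folder of photos.
# "logo_classifier.pt" stands in for a model fine-tuned on logo images; no
# such model or path is named in Susan's report.
from collections import Counter
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

LOGO_LABELS = ["olympus", "adidas", "coca_cola", "other"]  # illustrative classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.load("logo_classifier.pt")  # assumed: a fine-tuned classifier saved earlier
model.eval()

counts = Counter()
with torch.no_grad():
    for path in Path("social_feed_photos").glob("*.jpg"):
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        probs = torch.softmax(model(image), dim=1).squeeze(0)
        label = LOGO_LABELS[int(probs.argmax())]
        if label != "other" and float(probs.max()) > 0.8:  # only count confident detections
            counts[label] += 1

print(counts)  # how often each brand appeared across the feed
```

Tallies like this are the raw material for the “image intelligence” Susan describes: once brand appearances can be counted at scale, they can feed predictive models and trend analysis.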

There are limits to what computers can see, and they need to be trained by us — for now.

“Artificial Intelligence — and the neural networks in these computers, which are basically computer simulations of how the brain works — learns like a child.

“We know innately that a backpack is a backpack: it can be any colour or print but it’s still a backpack. You can show a computer an image full of backpacks, but you’d have to train it to understand this subtlety.”
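The talk doesn’t spell out how that training works in practice, but what Susan describes is ordinary supervised learning: show the computer many labelled examples until it generalises. Here is a generic sketch, assuming a folder of labelled images and a pretrained torchvision model; the folder layout and hyperparameters are illustrative, not anything from the report.

```python
# Generic supervised-learning sketch: teach a pretrained network a new concept
# (e.g. "backpack") from labelled example images. All names are illustrative.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a layout like labelled_images/backpack/... and labelled_images/not_backpack/...
dataset = datasets.ImageFolder("labelled_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and replace its output layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes over the labelled examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is the dependency Susan highlights: the network only ever “knows” what the labelled examples, chosen by people, have shown it.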

When AI compares Kim Kardashian to other images it’s seen, there’s a dramatic lack of something relevant — i.e., another living human woman — in its interpretation.

Our in-built biases are taught to the computer too. Show a computer a photo of Kim Kardashian, and it’ll label her “erotic” and compare her to an hourglass — because that’s what we do.

Susan: “I thought it would be really interesting to analyse the famous ‘Break The Internet’ image of Kim Kardashian. When AI compares her to other images it has seen, there’s a dramatic lack of something relevant — i.e., another living human woman.”

Much like men have done for millennia, it drew comparisons between a woman and curvy objects: ornate lamps, a hookah pipe, a harp — and an hourglass.

And revealingly, the AI knew enough about how humans interpret images to automatically tag Kim with the words “glamour,” “fashion,” “exotic,” “love,” “erotic,” and “romance.”
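Susan doesn’t say which tagging service produced those labels, but the mechanism is the same as any image classifier: the model returns whichever of the labels it learned during training the photo most resembles. A rough sketch with an off-the-shelf ImageNet model (its label set is different, but the principle carries over):

```python
# Rough sketch of automatic image tagging with a stock ImageNet classifier.
# "photo.jpg" is a placeholder; the service Susan tested is not named.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1).squeeze(0)

# Print the five labels the model considers most likely, with confidences.
top5 = torch.topk(probs, k=5)
for prob, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {prob.item():.2f}")
```

Whatever labels come out, they can only reflect the associations baked into the training data, which is exactly Susan’s point about inherited bias.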

Finally — there’s good news and bad news when we have computers that reveal our biases.

Susan paints a picture of a future where we need to learn some hard truths: “Here’s the bad news: artificial intelligence replicates human bias. So if we have biased data and biased answers, not only does AI replicate them — it makes them much more efficient.”

“There’s something in the U.S. called ‘predictive policing’ — where you use algorithms to determine the probability that somebody will commit a crime. If you have a lot of data, much of it is demographic data — and you could very easily be targeting people in a way that furthers discrimination and bias.”

“However, that’s also the good news. This technology actually shows us our biases much more efficiently than other types of technologies.”

“So now we can look at our data and identify that a certain group of people are missing, or that another group of people aren’t represented. It’s exciting — and you’ll be hearing a lot more about it.”
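Neither the talk nor the report prescribes a method for this kind of audit, but once the training data is in one place, spotting missing or under-represented groups can be a simple aggregation. A minimal sketch, assuming a CSV with one row per labelled example and a hypothetical demographic_group column:

```python
# Minimal bias-audit sketch: measure how each group is represented in a
# labelled dataset. File name, column name and threshold are all hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")  # one row per labelled example (assumed)

representation = (
    df.groupby("demographic_group")
      .size()
      .div(len(df))
      .sort_values()
)
print(representation)  # share of the dataset per group

# Flag any group that makes up less than 5% of the data.
under_represented = representation[representation < 0.05]
if not under_represented.empty:
    print("Under-represented groups:", list(under_represented.index))
```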


This talk was edited for clarity and length.

