You Were Made As Well As We Could Make You: The Traps of AI in Marketing to Women

Kasia Krn
Published in Synerise · Apr 17, 2019

“Women Power Day” is an annual conference about marketing and advertising aimed at women. For this year’s edition, I was invited to give a talk about AI and hyper-personalization in marketing. The original title of the presentation started with a question: “Is AI female?”. The truth is that I didn’t know if I could answer this question in any coherent way.

In the process of research, my initial idea for a presentation filled with examples of applying AI in marketing gravitated towards a different topic. Does AI know as much about women as it knows about men? Are its assumptions correct, and if not, why? Let me share a few thoughts I gathered while exploring this topic.

What Garry Kasparov, Motoko Kusanagi and Siri have in common

Let’s take a short trip to the past to realize how much has changed. I was born in 1987, a decade before probably the most famous chess match of all time, in which Garry Kasparov was defeated by an IBM computer named Deep Blue in 1997.

The movie “Ghost in the Shell” was released in 1995, based on a manga first published at the end of the 1980s. Back when the manga came out, I was running around the house in diapers and absorbing my first English words from Disney videotapes sent by my mom from the US.

You could say that this was the time when artificial intelligence as we know it reached a level that stirred the imagination not only of scientists, but also of ordinary people. Thirty years later, AI is so ubiquitous that we hardly think about it anymore.

AI in everyday life

Source: Gallup/Northeastern University

We tell machines a lot about ourselves, and they return this knowledge in the way they affect our lives. We let Spotify choose our everyday soundtrack. We let Netflix shape our taste in film. We let Uber quote each of us a different fare for the same distance. There is a price for making our lives easier. Sometimes it gets close to the boundary between comfort and creepiness, but comfort seems to win every time. As many as 83% of customers say they are ready to share their data to enable a personalized experience.

“If the man realizes that technology is within reach, he achieves it. Like it’s damn near instinctive.” (Ghost in the Shell)

AI in marketing

Artificial intelligence has seeped not only into our lives, but also into business and industry, including marketing and sales. Since customers and users now expect hyper-personalized communication from machines, how can we humans fall behind?

After some initial wariness and suspicion, AI has become the Holy Grail of marketers. Pre-AI customer communication suffered from three main drawbacks: it was too slow, too static and too anonymous. With large amounts of data processed in real time, algorithms can help us react immediately to customer behavior by sending an adequate, personalized response across different touchpoints.
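To make that less abstract, here is a minimal, hypothetical sketch of the reactive part: a customer event comes in, a channel is chosen, and a personalized message goes out. The event fields, channel rules and helper names are my own illustration, not any particular vendor’s API.

```python
# Hypothetical sketch of real-time, event-driven personalization.
# Event fields, channels and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class CustomerEvent:
    customer_id: str
    event_type: str   # e.g. "cart_abandoned", "product_view"

def choose_touchpoint(event: CustomerEvent) -> str:
    # In practice a model would score the channels; a simple rule stands in here.
    return "push" if event.event_type == "cart_abandoned" else "email"

def respond(event: CustomerEvent) -> None:
    channel = choose_touchpoint(event)
    message = f"Hi {event.customer_id}, the items you looked at are waiting for you."
    print(f"[{channel}] -> {event.customer_id}: {message}")

respond(CustomerEvent("customer-1024", "cart_abandoned"))
```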

After taking a closer look at what AI can do for business growth in marketing, you’ll notice a certain paradox: using artificial intelligence helps us act and communicate with customers in a more customer-centered, more human manner.

“You were made as well as we could make you”

Artificial intelligence is a wonderful thing, but it’s too early for fireworks. As a technology created by humans, it is not protected from copying human errors. Unfortunately, many severe cognitive biases concern women and the way they are depicted in culture and society. AI copies, or even intensifies, gender stereotypes. This shows up in various algorithms and the content they process.

Rachael, the android heroine of “Blade Runner” (quoted in the title and above)

Word2vec: a king and a lioness make a great couple

Before I tell you about the first unnerving example of gender bias in an AI algorithm, let me give a very simple introduction to what this system does (in this particular context, of course). Word2vec is a neural-network model that processes text, transforming words into vectors and grouping the vectors of similar words together. It detects similarities mathematically and can establish a word’s associations with other words. For example:

“Rio de Janeiro” is to “Brazil” as “Paris” is to “France”.

“Man” is to “king” as “woman” is to “queen”.
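If you want to poke at this yourself, here is a minimal sketch of the analogy trick, assuming the gensim library and its pre-trained Google News word2vec vectors (a large download). The query style shows how such analogies are typically run; it is not the setup from the original research.

```python
# A minimal sketch of word2vec analogies using gensim's pre-trained vectors.
# Assumes gensim (and gensim-data) is installed; the model download is large.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# "man" is to "king" as "woman" is to ...?
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" usually tops this list; the biased analogies described below surface
# in exactly the same way, just with less flattering word pairs.
```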

It looks pretty exciting and ingenious until you dig deeper. Research from 2016 showed a dangerous tendency: the algorithm amplified gender bias and produced some rather peculiar analogies:

“Man” is to “king” as “woman” is to “lioness”. (I’m hearing Elton John in my head as I’m writing this.)

“Man” is to “doctor” as “woman” is to “nurse”. (Elton John stopped abruptly.)

“Man” is to “computer programmer” as “woman” is to “homemaker”.

Considering that these biased similarities come from real-life data analyzed by the algorithm, a question arises: should we teach systems to perform in an aspirational way by changing the reality they “see”, or let them reflect the statistics of the world? Let’s leave the answer to data scientists and move on to another example.
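For what it’s worth, one idea explored in that 2016 line of research is to edit the vector space rather than the world: remove each word vector’s component along an estimated “gender direction”. Below is a toy sketch of that projection step only; the direction estimate and vectors are illustrative, not the paper’s full method.

```python
# Toy sketch: remove a word vector's component along an assumed "gender
# direction" (e.g. approximated as vec("he") - vec("she")). Illustrative only.
import numpy as np

def remove_direction(word_vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Subtract the projection of word_vec onto the (normalized) direction."""
    d = direction / np.linalg.norm(direction)
    return word_vec - np.dot(word_vec, d) * d
```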

Image recognition: the dinner’s ready

If you’re wondering how AI systems that recognize objects and people in pictures are trained, here’s a somewhat simplified answer: they process large image collections and “learn” the connections and dependencies they find there. The not-so-funny thing is that the algorithms treat the image content as objective truth, while reality is, well, different.

Vicente Ordóñez, a professor at the University of Virginia, discovered that the algorithm he was working on frequently mistagged the gender of people pictured in particular interiors. It turned out that men in kitchens were labeled as women, because the algorithm had learned to associate kitchens and cooking with women.

Source: Ordóñez et al.

I wonder if the same thing would happen with the pictures of women in car workshops, but that wasn’t covered by the research, unfortunately.
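Here is a toy illustration of the mechanism behind this kind of mistake, with made-up numbers: if “kitchen” co-occurs with “woman” far more often than with “man” in the training labels, a naive predictor will call anyone standing in a kitchen a woman.

```python
# Toy sketch of co-occurrence bias; the counts are invented for illustration.
from collections import Counter

cooccurrence = Counter({("kitchen", "woman"): 330, ("kitchen", "man"): 70})

def predict_gender(scene: str) -> str:
    # Pick whichever gender label appeared with this scene most often.
    by_gender = {g: n for (s, g), n in cooccurrence.items() if s == scene}
    return max(by_gender, key=by_gender.get)

print(predict_gender("kitchen"))  # "woman", regardless of who is pictured
```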

Image tagging: what you see is (not) what you get

Recognizing (or mistaking) objects in pictures is one thing; assigning meaning to them is another. Meaning involves opinions, and algorithms having opinions sounds like “too much, too soon”.

The tweet below was shared by Bonnie Kamona, an entrepreneur, MBA graduate and former Miss Botswana. It shows the rather unusual idea Google’s algorithms had picked up about what a female professional at work looks like:

I hope Bonnie laughed long and hard when she saw this. Even though it’s not funny at all.

Final thoughts

In my presentation, I asked:

“Is AI female?”

After all the research and thinking I’ve done, it started to seem like the wrong question to ask. I believe this one is more fitting:

“Should AI have gender?”

Considering that algorithms should offer an unbiased, individual approach to every user or customer, I believe the answer is “no”. I wish AI could teach us intersectionality one day, but many, many changes must occur before that happens.

If you would like to see my presentation from Women Power Day, check it out on Synerise Slideshare.
