There’s Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and Nuance’s Nina. Sure, Facebook has “M”, Google has “Google Now”, and Siri’s voice isn’t always that of a woman. But it does feel worth noting that (typically male-dominated) engineering groups routinely give women’s names to the things you issue commands to. Is artificial intelligence work about Adams making Eves?
The response to this critique is usually about the voices people trust and find easy to understand. Adrienne LaFrance over at The Atlantic does a good job discussing those points, so go read her article. I’m going to shift to talk about other representations beyond the Big Six.
First off, the major players clearly gender their assistants. But what about other chatbots? I gathered ten lists of chatbots/digital assistants, which gave me 223 unique chatbots to evaluate. Many of the names are obviously gendered, like Santa Bot and Ella. When they weren’t, I looked at the images and pronouns their creators chose.
The largest group of these are gendered as female: 79 total. There are 66 that are male, so that’s closer than we might have expected. Importantly, there are 78, like Cleverbot and SimSimi, that don’t have any discernible gender. So while gendering AI is a strong trend, there’s also a strong counter-theme among these (disembodied) agents: staying genderless.
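As a sanity check on those proportions, here’s a minimal sketch of the tally. The label list is a hypothetical stand-in (the underlying dataset isn’t published); it’s constructed only to mirror the totals reported above.

```python
from collections import Counter

# Hypothetical per-bot labels mirroring the article's totals:
# 79 female, 66 male, 78 with no discernible gender (223 bots total).
labels = ["female"] * 79 + ["male"] * 66 + ["none"] * 78

counts = Counter(labels)
total = len(labels)

print(counts)                                  # tallies by label
print(f"{counts['female'] / total:.0%} female")  # 79/223 ≈ 35%
print(f"{counts['none'] / total:.0%} ungendered")  # 78/223 ≈ 35%
```

So roughly a third of the sample is female-gendered and another third is genderless, which is the split the paragraph above describes.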
That said, even when creators refer to their chatbots as “it”, people can still assume a gender, as they do with Cleverbot. This can come from more subtle cues in names and images, but it can also come from the training data used to create the bot. There are plenty of platforms and topics that could be chosen for a chatbot that will make it seem gendered.
A very different trend in cinematic AI
Let’s look at another place of imagination: artificial intelligence in movies, where AI much more regularly ends up embodied (e.g., as robots). I grabbed a few lists to get 77 different major AI characters across 62 films. The movies range from 1927 (Metropolis) to 2015 (seven films, including Ex Machina).
Unlike chatbots, there are almost no ungendered AI characters in films: basically just V’Ger in Star Trek: The Motion Picture and BB-8 in Star Wars: The Force Awakens. Sico in Rocky IV is an interesting case of a robot that is first voiced by a man and then reprogrammed by a human in the film to have a female voice. And note that even the people responsible for BB-8 are interested in giving the cheerful stacked spheres a gender, but aren’t sure which one.
Aside from these three examples, the rest of the on-screen AI characters have gender. And a big majority are male (57 versus 17).
If you expected there to be more female AIs, check out Jessica Nordell’s piece on Siri, Viv and TV/movie representations of women. Meanwhile, you may have been thinking of Samantha from Her and Ava from Ex Machina. And female AI characters may be increasing over time. The median date for when the female characters were created is 2003. For the male characters it’s 1987. Or to put that another way, while 50% of the male AIs were created before 1987, only 29% of the female AIs were.
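The median comparison above can be sketched in a few lines. The year lists below are hypothetical stand-ins (the full character dataset isn’t published), chosen only to reproduce the reported medians (2003 for female AI characters, 1987 for male) and the 29% figure.

```python
import statistics

# Hypothetical creation years, constructed to match the medians in the text.
female_years = [1968, 1984, 1999, 2003, 2004, 2013, 2015]  # median: 2003
male_years = [1951, 1977, 1987, 1999, 2014]                # median: 1987

print(statistics.median(female_years))  # 2003
print(statistics.median(male_years))    # 1987

# By definition, half the male AIs predate their own median year (1987).
# The interesting comparison: what share of *female* AIs predate 1987?
share = sum(y < 1987 for y in female_years) / len(female_years)
print(f"{share:.0%}")  # 29% in this illustrative sample
```

The point of the comparison is that the female characters skew much more recent than the male ones, even though both groups span decades.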
Most artificial intelligence characters are either good/sympathetic like David in A.I. or bad/lethal like the Terminator T-X. There are a handful of mixed or neutral ones, but AI characters are usually a pal or an enemy. The gender breakdown by goodness/badness is about equal — 8 clearly good female AIs and 8 clearly bad female AIs; 29 clearly good male AIs and 24 clearly bad male AIs. So the male AIs may be slightly more likely to be good, but the counts are pretty low, so I’d rather not jump to any strong conclusions about that.
At this point, maybe you are tl;dr’ing this as “no gender problem in AI”. That’s a mistake. We’ve been looking at the representation of gender in non-humans, but what about the representation of women in artificial intelligence? How do the percentages for AI digital assistants, chatbots, and movie characters compare to the people with the skills to build artificial intelligence?
What percentage of AI researchers are women?
I have heard from companies that have received thousands of applications for artificial intelligence/data science roles and report that only 0.1% of applicants have been women. These are men who want opportunities for their young daughters but see the situation simply: we’ll hire the best, but we’re not going to go out and seek diversity. It’s a mantra of excellence and a perception that diversity is orthogonal to excellence. Here’s poet Claudia Rankine in a similar situation:
You are in the dark, in the car, watching the black-tarred street being swallowed by speed; he tells you his dean is making him hire a person of color when there are so many great writers out there.
As usual you drive straight through the moment with the expected backing off of what was previously said. It is not only that confrontation is headache-producing; it is also that you have a destination that doesn’t include acting like this moment isn’t inhabitable, hasn’t happened before, and the before isn’t part of the now as the night darkens and the time shortens between where we are and where we are going.
You aren’t hiring the best if you have no diversity. Thinking probabilistically, something is really weird about a number like 0.1%, since women make up about 50% of the people in the world. Obviously not everyone is qualified to work in artificial intelligence or data science, so let’s go back to the data to see how far off 0.1% is.
If you go to LinkedIn and just take a look at people who list “artificial intelligence” in their profiles and are in the US, you’ll get 81,921 people. This search misses a lot of people who basically do AI but don’t mention it explicitly, and it lets in too many people who may be attached to AI systems but aren’t developers/data scientists.
Still, as a rough number it’s worth reporting: scrolling through over 500 people who are three or more connections away from me, the overall percentage of women is probably around 15% (unless there’s something tricky about how LinkedIn returns results among people I’m not connected to). In other words, if 0.1% of your applicants are women, then there’s something deeply wrong with your recruiting, since the number should be about 150 times higher in the US. For more thoughts along these lines and ways to fix things, check out Kieran Snyder’s post about gendered language in job descriptions.
The Taulbee Survey is another way of seeing whether application and hiring rates reflect the pool of people with relevant skills. The survey shows that last year, women earned 24.9% of master’s degrees and 19% of PhDs in computer science. The National Science Foundation reports even higher numbers in its 1993, 2002 and 2012 data. Maria Klawe, the president of Harvey Mudd College, contextualizes these and other numbers in her piece on closing the gender gap.
Why does representation matter?
Gendering digital assistants is problematic in itself (cf. Cortana’s almost-naked tube-sock outfit, which comes from her origin in the Halo video games), but there are problems that go beyond names and movies. Kate Crawford in The New York Times highlights that artificial intelligence systems built by white guys can easily enshrine biases: for example, increasing surveillance of minority communities or making it harder to get loans.
It’s not that systems are built out of malevolence in most cases. But there’s almost always a problem when a homogeneous group builds a system that is applied to people not represented by the builders. This comes from not adequately interrogating project goals, as well as what’s being used as training data to build the automated systems. For example, if you get data about everyone the police stop, you could build a model to automatically identify suspicious people. But it’s clear that a statistical model built from that data is going to reiterate and exacerbate the biased policing that communities of color know very well.
Artificial intelligence systems are built by humans using training data that one way or another comes from humans, optimized for goals that humans set. The classifications and actions AI systems take then affect humans. So there’s a human-in-the-loop at every level. As we build machines to make our lives better and easier, it’s always important to ask whose lives they’re making better, and in which ways. AI systems are not free from ideology and ethics. Most optimistically, working on these systems can teach us more about all of the different ways to be human. Representations and models can feel like they simply reflect the world. But the world is a place of many perspectives. Representations and models do not simply reflect the world. They maintain and create it.
Tyler Schnoebelen is the former Founder and Chief Analyst at Idibon, a company specializing in cloud-based natural language processing. Tyler has ten years of experience in UX design and research in Silicon Valley and holds a Ph.D. from Stanford, where he studied endangered languages and emoticons. He’s been featured in The New York Times Magazine, The Boston Globe, The Atlantic, and NPR.