“What Can I Help You With?”: The Feminisation of AI.

Sara Da Silva
tobiasandtobias
Published in
4 min read · May 15, 2018

Artificial intelligence has been seamlessly integrated into modern everyday life. Yet despite this technological progress, one fundamental question lingers: why on Earth do they all sound like women?

AI, in itself, is genderless. “Animals and French nouns have genders. I do not” responds Siri to the question “are you male or female?”

Despite this, Amazon’s Alexa, Google’s nameless Assistant and Microsoft’s Cortana are each programmed with female voices. Why?

Should we be offended?

Countless consumers have accused tech firms that program their AI with a female voice of being sexist. One major argument centres on the idea that virtual assistants perform functions that are historically "female".

Global society has long dictated that administrative and secretarial positions are generally assigned to women. Supporting this theory, according to a United States Department of Labor report, women held more than 94% of the 3 million administrative and secretarial positions in the US. Thus, when picturing an "assistant", you tend to imagine a female voice. Michelle Habell-Pallan, an associate professor of Gender, Women and Sexuality Studies at the University of Washington, underlines that this "has to do with the way that labor is gendered and stratified".

There are countless other examples that reiterate this argument. For instance, United Airlines and Alaska Airlines’ “assistants” are both female, maintaining the stereotype that flight attendants are predominantly female.

Overall, this argument concludes that consumers rely on gender stereotypes in order to make unfamiliar technologies seem familiar, whether consciously or subconsciously.

Many individuals claim that this may be enforcing a harmful “old fashioned” culture.

In Germany in the late 1990s, BMW were forced to recall a female-voiced navigation system on its 5 Series model after a surge of male drivers refused to take directions from a woman. Research conducted by Clifford Nass, a professor at Stanford University, argued that iPhone users trust answers given by male-voiced Siri more than answers from its female counterpart, claiming that “female voices are seen, on average, as less intelligent than male voices.”

Whether or not the use of female voices within AI truly does enforce a harmful culture, many believe it is unfair to accuse individual companies of sexism, as many of these decisions are the result of extensive market research.

Do the answers lie with science?

The opposing argument, supported by multiple case studies, suggests that the use of female voices within AI is driven by user comfort.

One study, carried out by Karl MacDorman, a professor at Indiana University, used a sample group of 485 participants, 151 male and 334 female, to determine whether female or male synthesized voices were more "pleasing". Both groups reported that female voices sounded "warmer". It was further concluded that women had an overall "stronger implicit preference for the female voice", while men had no implicit preference either way.

An experiment conducted at Stanford University took a different approach, trying to determine what content consumers prefer to receive from male or female computerised voices. It found that individuals preferred male voices when receiving answers to questions (in this study, about computers), but preferred female voices when receiving love and relationship advice.

Overall, a correlation can be drawn between both studies: a female voice sounds overwhelmingly more "understanding".

Professor Nass has attempted to understand the psychology behind these results, claiming that it is a “well-established phenomenon that the human brain is developed to like female voices”, linking this to studies in which foetuses were found to react to their mother’s voices, but had no distinct reaction to their father’s voice.

In addition to the overall research conducted into this topic, firms that offer virtual assistants carry out individual market research before settling on a voice and name. Following extensive Amazon testing with internal beta groups, the voice of "Alexa" was chosen, and named in homage to the Library of Alexandria. Similarly, the female voice of the London Tube was chosen by a focus group after being exposed to three male and three female samples.

So, after all this research, surely the obvious choice is female?

A rock and a hard place?

Designers now face an ethical dilemma: in the decision-making process, should market research take precedence over the risk of reinforcing problematic stereotypes and an out-of-date culture? Should they be challenging these stereotypes? Or should they perhaps maintain a sense of neutrality?

Have Apple figured it out? By offering the choice between a female and male voice for its virtual assistant, Siri, are they circumventing the issue?

We’d love to hear your opinion, get in touch!
