Our interactions with AI teach and train it, but we are also shaped by these experiences.
Ask your phone, Echo, or computer something. Or call your bank and talk to the automated menu. I’ll wait.
Whatever you asked, a synthesized version of a woman likely answered you, polite and deferential, pleasant no matter the tone or topic.
That’s because Siri, Alexa, Cortana, and their foremothers have been doing this work for years, ready to answer serious inquiries and deflect ridiculous ones. Though they lack bodies, they embody what we think of when we picture a personal assistant: a competent, efficient, and reliable woman. She gets you to meetings on time with reminders and directions, serves up reading material for the commute, and delivers relevant information on the way, like weather and traffic. Nevertheless, she is not in charge.
When performed by humans, these tasks carry sociological and psychological consequences. One might think, then, that handing them to an emotionless AI would erase concerns about outdated gender stereotypes. Yet companies have repeatedly launched these products with female voices and, in some cases, female names. When we can only picture a woman, even an artificial one, in that position, we reinforce a harmful culture.
Still, consumers expect a friendly, helpful female voice in this scenario, and that is what companies give them.
“We tested many voices with our internal beta program and customers before launching and this voice tested best,” an Amazon spokesperson told PCMag.
A Microsoft spokesperson said Cortana can technically be genderless, but the company did immerse itself in gender research when choosing a voice and weighed the benefits of a male and female voice. “However, for our objectives — building a helpful, supportive, trustworthy assistant — a female voice was the stronger choice,” according to Redmond.
Consider that IBM’s Watson, an AI of a higher order, speaks with a male voice as it works alongside physicians on cancer treatment, and as it handily won Jeopardy! When choosing Watson’s voice for the show, IBM went with one that sounded self-assured and had it use short, definitive phrases. Both are typical of male speech, and people prefer to hear a masculine-sounding voice from a leader, according to research, so Watson got a male voice.
Women, meanwhile, use more pronouns and tentative words than men, according to psychologist James W. Pennebaker. Pronoun use, particularly of the word “I,” is indicative of lower social status. AI assistants lean heavily on “I,” particularly when taking responsibility for mistakes. Ask Siri a question she can’t process and she says, “I’m not sure I understand.”
It’s critical that we challenge stereotypical gender roles in our personal assistants. Our interactions with AI teach and train it, but we are also shaped by these experiences. It’s why parents are concerned about unintentionally raising rude children when Alexa does not require a “please” or “thank you” to carry out a task.
As our relationship with technology enters a new stage of intimacy, it’s worrying to think of what will happen when some people’s primary sexual experiences are with a sexually acquiescent robot. Sexually harassing Siri for a YouTube video might be amusing to some, but it’s unsettling how closely that language mirrors what women hear from street harassers. There is the same societal expectation that both just accept it.
Humans aim for linguistic style matching in their social interactions, meaning they try to match the language patterns of the person, and now the AI, with which they are speaking. But as AI enters our physical realm, there are serious personal and social consequences to treating it in a degrading manner. The companies behind AI are cashing in on bias, and that is not the way to a utopia, tech or otherwise.
Originally published at www.pcmag.com.