Should you design for anthropomorphism in AI?
When we conduct research on voice assistants and chatbots, some of the most frequently asked questions from clients and conference attendees focus on anthropomorphism: namely, how should we handle it? For example:
- Should we give the digital assistant a stilted, computer-generated voice or a human-sounding one? Does it even matter? (Have a listen to example sounds here.)
- Should the digital assistant take on an avatar, having basic visual suggestions of eyes, nose, and mouth?
- So tell me, just how do you enculturate a bot? (This is my favorite question from a skeptical UX designer. Read to the end for a short answer.)
The first step in determining your digital assistant strategy and approach to anthropomorphism is understanding what it is. Here, we surface evidence from psychology and HCI, mixed with everyday observations, to offer an understanding of what anthropomorphism means. While discussions of anthropomorphism often feel like stepping in quicksand, descending into highly variable individual differences, we provide a list of recommendations and pitfalls that apply to a broad audience, no matter what product you are working on in the AI assistant domain.
The Ubiquity of Anthropomorphism
Anthropomorphism — the word doesn’t roll off the tongue, but we all do it. We anthropomorphize in two ways:
- We ascribe human form or personality traits to nonhuman things. All humans anthropomorphize, though the things we anthropomorphize vary culturally. We see faces in the clouds and treat pets as people with motivations and personalities. People anthropomorphize their digital assistants — especially voice assistants in smart speakers like the Amazon Echo and Google Home.
- The other way that humans anthropomorphize is by projecting our own stories, feelings, and thoughts onto things that are not us. We do so with nonhuman things and with other people, including AI. There is nothing odd about this. Technology, like many things, is an extension of its user; it gives expression to a user's pre-existing experiences. On the flip side, the makers of AI technology have pre-existing values, stereotypes, and expectations about gender, race, and capital, just as users do, and they inevitably embed these values in the digital assistants they build. For these reasons, the feelings we derive from our interactions with digital assistants vary widely with each observer's personal stories, feelings, and thoughts.
In fact, by being human, users will inevitably anthropomorphize one way or another even with no signs of human features in the digital assistant. There’s no point in fighting it. The psychologist Louise Barrett shared this personal anecdote in her book:
“A vegetarian friend of mine has a rule for what she can eat, which she expresses as ‘nothing with a face,’ which led to her forever receiving carrots from people that look like Arnold Schwarzenegger or potatoes that look as if they’re smiling, in an attempt to expose hypocrisy.”
Anthropomorphism by users isn't something you can control, so designers and product owners shouldn't fight it either.
In the history of social robots research, designers and engineers have explicitly utilized humans’ anthropomorphizing tendency to facilitate social interaction between robots and people, e.g., by adding features like eyes or a mouth to a robot head to invite connection. Conversely, many brands intentionally avoid designing for human-like features. Are there any best practices, then?
There are clear shoulds and shouldn’ts when it comes to anthropomorphism:
- When designing interactions that are inherently human, such as conversation, technology should behave as humanly as possible. This does not mean mimicking the human form, but rather mimicking how humans naturally interact with each other. Consider the rich linguistic, social, and physical contexts in which humans interact with their environment.
- We should pay attention to how anthropomorphism affects users' expectations of a digital assistant's capabilities, as well as its emotional impact. Consider, for example, the potentially negative emotional impact of gendering an assistant. Whether to give a digital assistant a female or male voice has been a popular topic this year. Given our ingrained need to anthropomorphize, the choice of gender for a digital assistant invites questions and concerns. In the top search result for 'Why is the digital assistant's voice always female,' the author urges us to challenge the stereotypical gender roles represented by the predominantly female-voiced digital assistants. Indeed, there is a long history of domestic and clerical labor, such as personal-assistant work, being relegated to women, and female-voiced digital assistants trigger that association.
In our study, we also saw that giving digital assistants human-like features increased users' expectations of the system's overall competence, which in turn led to more frequent user frustration than when the assistant was perceived as less human-like. Such features can be as simple as the choice of wake word for a voice assistant. In the quote below, the participant reported that calling Alexa 'Alexa' rather than the more neutral 'Computer' raised his expectations of its abilities, and with them his chances of disappointment.
Morteza: “[The wake word] Alexa makes it sound like a person. So when it makes a mistake, I feel mad and I yell at it, I feel like I’m rude to a child. When I changed its name from ‘Alexa’ to ‘Computer,’ I’m not mad at it when it makes mistakes.”
- We should never imply that the technology is human. Deception is not only unethical but also a key eroder of trust. We saw this in the strong negative reactions when Google Duplex did not disclose that it wasn't a person, and in Google's subsequent decision to clarify that a bot is a bot. Trust is the connective tissue that ties users to the product. When stakes are high, build trust by setting expectations and communicating values and limitations. It can be as simple as the AI assistant disclosing that it's a bot.
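The disclosure-and-expectation-setting pattern above can be sketched in a few lines. This is a hypothetical illustration, not from the study: the `build_greeting` helper, the assistant name, and the capability list are all assumptions. The point is simply that the bot's very first message identifies it as automated and states its limits.

```python
def build_greeting(assistant_name: str, capabilities: list[str]) -> str:
    """Compose an opening message that discloses bot status and sets scope.

    The greeting does three things the guideline calls for:
    1. discloses that the assistant is a bot, not a person;
    2. states what it can help with (setting expectations);
    3. offers a path to a human for everything else.
    """
    skills = ", ".join(capabilities)
    return (
        f"Hi, I'm {assistant_name}, an automated assistant, not a person. "
        f"I can help with: {skills}. "
        "For anything else, I'll connect you to a human."
    )


# Hypothetical usage: a retail chatbot's first turn in a conversation.
greeting = build_greeting("HelpBot", ["order status", "returns", "store hours"])
print(greeting)
```

The exact wording will differ by product and brand voice; what matters is that the disclosure happens up front, before the user has invested in the conversation.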
So, to the earlier question 'Just how do you enculturate a bot?' the answer is that it is impossible not to enculturate a bot. We are already enculturating a bot even by doing nothing; anthropomorphism is unavoidable, and that's OK.
This article contains selected content from the white paper ‘Elements of a Successful Digital Assistant and How to Apply Them’ published recently by AnswerLab. We had users nationwide try 9 voice apps and 12 chatbots over two weeks. The digital assistants represented industries spanning financial services, healthcare, and retail. Download the full findings here.
Connect with me on LinkedIn!
Last but not least, thank you Real Weird Art, for all the inspirational discussions on the topic and permission to use your work!