Voice Assistants & the Uncanny Valley: The More Lifelike, the Less “Real”

How real is too real?

Gaby Gayles
Voice Tech Podcast
3 min read · Jan 31, 2019

Voice assistants like Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana are already widely used, and they are amazingly good at mimicking human behavior. In fact, market research firm Ovum has predicted that these digital helpers will outnumber people by 2021.

The more lifelike, the better, right?

Not always. As voice assistants attempt to simulate humanity by cracking jokes and teasing users, problems arise. Voice assistants that try to mimic human behavior too closely (but fail to do so due to underdeveloped technology) end up entering the realm of the uncanny valley. Their speech seems almost human, but when pushed too far the strange and artificial nature of their behavior unsettles users and generates mistrust by creating cognitive dissonance.

The movie The Polar Express is a perfect example of the uncanny valley phenomenon. Its computer-animated characters (like the one pictured below) look eerily realistic, but not quite realistic enough to pass as real humans.

Hyper-realistic animated character from the Polar Express

CNN movie reviewer Paul Clinton was particularly creeped out: “Those human characters in the film come across as downright… well, creepy. So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying.” Viewers couldn’t empathize with The Polar Express characters the way they could with less realistically rendered cartoon characters like SpongeBob. The Polar Express had entered the uncanny valley as its characters trod the space between “barely human” and “fully human”.

How Voice Assistants Can Avoid the Uncanny Valley

The same principle applies to modern voice assistants. Because of their realism (many, like Alexa, seem to have personalities), these chatbots are expected to behave as we would. When they fail to exactly replicate human behavior, users grow wary: we view the chatbots as abnormal humans instead of high-functioning machines. The uncanny valley concept suggests there is a point where making a voice assistant more “human” will make it seem “less real” if it still lacks certain capabilities; we cannot empathize with a chatbot that seems almost human, but not quite. This creates a worse experience for consumers.

How can we prevent AI from entering the realm of the uncanny valley? One way is to ensure that voice assistants own up to their technological limitations instead of continuing the charade of agency. For example, instead of responding with “Sorry, I wasn’t listening. What did you say?” when asked a question it doesn’t understand, a chatbot should say something like “Sorry, I’m not very smart! I can only answer simple questions.” Nikhil Mane, an AI engineer at Autodesk, argues that the popular workplace communication app Slack does a great job of staying away from the uncanny valley by acknowledging its fallibility: Slackbot reminds users, “I’m only a bot, but I’ll do my best to answer!” and adds, “If I don’t understand, I’ll search the Help Center.” This prevents users from experiencing dissonance when the seemingly “human” bot cannot answer certain questions.
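To make this concrete, here is a minimal sketch, assuming a toy intent-matching setup, of what such an honest fallback might look like. Every name in it (the KNOWN_INTENTS table, the handle_utterance function, the reply strings) is illustrative, not drawn from Slack’s or any real assistant’s API.

```python
# Minimal sketch of the "own up to your limits" fallback pattern.
# All names here are illustrative assumptions, not a real assistant's API.

KNOWN_INTENTS = {
    "weather": "It looks sunny today.",
    "set_timer": "Okay, timer set for ten minutes.",
}

def handle_utterance(intent):
    """Answer recognized intents; otherwise admit the bot's limits."""
    if intent in KNOWN_INTENTS:
        return KNOWN_INTENTS[intent]
    # Uncanny-valley fallback: feigns human distraction, e.g.
    #   "Sorry, I wasn't listening. What did you say?"
    # Honest fallback: frames the assistant as a limited machine instead.
    return ("Sorry, I'm only a bot and I can only answer simple questions. "
            "Try asking about the weather or setting a timer.")

print(handle_utterance("weather"))     # recognized intent
print(handle_utterance("philosophy"))  # honest fallback, no charade
```

The interesting part is the copy, not the logic: the honest fallback resets the user’s mental model from “inattentive human” to “limited but helpful machine”, which is exactly what keeps the bot out of the valley.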

A voice assistant’s name may also have uncanny implications. Take the name Alexa versus the name Cortana. Alexa sounds much more human — it could be the name of a friend or relative — while Cortana feels rather clinical and robotic. Human-sounding names like Alexa may augment the eeriness of voice assistants as we expect them to act more like us. Conversely, names like Siri or Cortana could lower expectations and reduce the uncanny valley phenomenon and associated cognitive dissonance. Until technology improves enough to make AI essentially indistinguishable from real humans, choosing a “robotic” name could keep voice assistants out of the uncanny valley.
