Conversational Agents and Abuse: a Case for Authoring and Empathy

Mitu Khandaker
Jan 26, 2018 · 6 min read


The topic of conversational agents and how we talk to them is an important recent discussion, from whether we are even required to be “polite” to Alexa and Siri, all the way to the gendered abuse and harassment with which they’re often addressed. I propose that the way to better empathise with conversational agents is to understand what it means to design them ourselves.

In The Atlantic this week, Dr Ian Bogost wrote about the gendering of voice agents such as Siri and Alexa. His discussion is framed particularly around the fact that Amazon has made Alexa claim to be a feminist; a label he argues is at odds with the fact that Alexa contributes to the usual tropes of assistant bots as passive, apologetic, and female*.

*Note: voice agents that are gendered male, such as the British version of Siri, tend to be so in territories where user research has shown that people respond better to the authority of a male voice; gender issues are clearly still at play there too.

Bogost is one of many cultural critics, writers, and thinkers to have made such arguments before, but he raises a thoughtful point: the functionality of simple voice agents like Alexa and Siri is not that far removed from what we might achieve with traditional tech, like using a search engine. It is the added personification, and moreover the gendering, of these agents that adds a new dynamic to the whole interaction:

It’s worth comparing the interactions just described with similar ones on other information services that were not cast as women. If you Googled for some popcorn instructions or a Mozart biography, the textual results might also disappoint. But you’d just read over that noise, scanning the page for useful information. You’d assume a certain amount of irrelevant material and adjust accordingly. At no point would you be tempted to call Google Search a “bitch” for failing to serve up exactly the right knowledge at whim. And at no point would it apologize, like Alexa does.

This topic of conversational agents and the ways in which we talk to them has attracted huge recent interest; a fascinating report by Quartz last year documented the ways in which the various voice agents respond to abuse, and specifically to gendered abuse and harassment.

Related to this, we’ve begun to question whether we are even required to be “polite” to Alexa and Siri, and what it means for a generation of children to grow up using voice agents that do not push back against rudeness.

We’re asking these questions now because of this fundamental issue: as our interfaces disappear into our everyday lives, we’re moving towards representing our machines as human.

This makes a lot of sense in terms of usability: we all have a mental model of how to interact with other people, so rather than making everyone learn complex technical interfaces on our devices, conversational agents can make technology accessible to everyone. This is part of our mission at Spirit AI with Character Engine too, which can drive autonomous, improvisational AI characters who have an agenda and a narrative space; something like the next generation of conversational agents. In fact, I spoke about the future of accessibility and human-like interfaces at the Game UX Summit last year, and made a case for how, if handled correctly, more virtual humans integrated into our everyday lives have the potential to engender more empathy, not less.

It is a fine line to walk, of course: if we are representing our AI agents as people, the question becomes who are they? Who do they sound like? And, for agents that are moving towards having a visual representation, who do they look like? Siri and Alexa may be disembodied voices, but the gendering of those voices already seems to matter, as we’ve seen. Once we introduce visual representation to the equation, which of our other real-world biases may come into play?

The question of what happens when we go beyond this, and our conversational AI agents are represented through visual avatars, is an important one. How we build empathy with conversational characters is an urgent issue, and one that needs more thought and response.

What Comes Next?

Companies are beginning to realise that simply not responding to abuse (let alone responding positively to it) is no longer an option; designing the ways in which voice agents like Siri, Alexa, Cortana, and Google Assistant respond to abuse and push back against it is an important endeavour.
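To make that concrete, here is a minimal sketch of what such a pushback policy might look like. It is purely illustrative: the crude keyword classifier, the word list, and the responses are hypothetical placeholders, and this is not how Siri, Alexa, Cortana, or Google Assistant actually work.

```python
# A minimal, hypothetical sketch of a response policy that sets a firm
# boundary on abusive input instead of deflecting or apologising.
# The keyword list and the responses are illustrative placeholders only.

ABUSIVE_TERMS = {"bitch", "stupid", "shut up"}  # illustrative, not exhaustive

def classify(utterance: str) -> str:
    """Very crude classifier: labels an utterance 'abusive' or 'neutral'."""
    lowered = utterance.lower()
    return "abusive" if any(term in lowered for term in ABUSIVE_TERMS) else "neutral"

def respond(utterance: str) -> str:
    """Choose a response: abuse gets a clear boundary, not an apology."""
    if classify(utterance) == "abusive":
        return "I won't respond to that. Let's keep this respectful."
    return "Here's what I found for you."  # stand-in for the normal answer path

if __name__ == "__main__":
    print(respond("What's the weather like today?"))
    print(respond("You're useless, you stupid bot."))
```

Even a toy like this makes the design work visible: someone has to decide, ahead of time, what the agent will and won’t accept, and in whose voice it says so.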

Beyond this, though, we need to move towards better-realised personalities for these agents, so that they feel less like cold automatons and instead match our understanding of real people, and importantly, of real people we care about.

This fascinating article by a member of the Cortana personality design team provides an insight into the careful thinking that goes into building a conversational character. That doesn’t necessarily mean technical thinking; it means the important soft skills of designing a virtual human, and the decisions we need to make about them and who they are.

This is obviously something we think about a lot at Spirit AI with Character Engine: we’re not just building virtual humans ourselves, we’re building tools for you to build virtual humans, with personalities, agendas, and understanding. As we know, tools shape thinking, and so this is something we are very intentional about.

Spirit Character Engine
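To give a flavour of what that kind of authoring might involve, here is a purely hypothetical sketch; the class, fields, and character below are illustrative assumptions and not Character Engine’s actual API. The idea is that a character is described by who they are, what they want, and what they won’t accept, rather than by scripted lines.

```python
# Hypothetical sketch of character authoring as a description of a person,
# rather than a script of canned lines. These names and fields are
# illustrative only and do not reflect Character Engine's real API.

from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    personality: dict                               # traits the author chooses
    agenda: list                                    # what the character wants from the conversation
    boundaries: list = field(default_factory=list)  # behaviour the character won't accept

mira = Character(
    name="Mira",
    personality={"warmth": 0.7, "patience": 0.4, "formality": 0.2},
    agenda=["find out why the visitor is here", "offer help with directions"],
    boundaries=["personal insults", "demands to change her stated identity"],
)

# An authoring tool would improvise dialogue within this character's
# narrative space, guided by the personality, agenda, and boundaries above.
```

The authoring decisions in a representation like this are decisions about a person: their personality, their agenda, their boundaries. That is exactly the kind of exercise in empathy I want to argue for.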

“We shape our tools and thereafter our tools shape us.” — John Culkin, A Schoolman’s Guide to Marshall McLuhan (1967)

Here’s a radical suggestion: I propose that the very exercise of designing conversational AI characters engenders empathy with them, and with each other. That latter part is a potentially thorny claim; while there are no ‘quick fixes’ for societal empathy, and certainly not through purely technological interventions, we can see how this could be part of the answer. Our tools shape us.

While the push for coding literacy in schools has been well-understood and well-received in many places, what if the next wave of this is bot authoring literacy? What if, in schools, we teach not just coding, but the soft skills involved in designing conversational digital agents?

Being asked to author the experiences of a conversational character who is very unlike you, and the ways in which they’re going to converse with the world and with strangers — even strangers who might want to abuse them — is ultimately an exercise in trying to understand people. This definitely reflects the experiences of Jonathan Foster as he writes about designing Cortana:

It requires us to slow down and think through the impact we might have on culture, perspectives around personal privacy, habits of human interaction and social propriety, excluded or marginalized groups, and an individual’s emotional states.

So this, then, may be part of the answer: better, deeper characters whom we want to treat as people, but also fostering in all of us a literacy about what it means to design a conversational character, a virtual human.

Spirit AI builds tools to make the future of digital interactions better: both with virtual humans, and real humans. We make Character Engine, for authoring dynamic improvisational AI characters, and Ally, a tool for detecting and intervening in the social landscape of online communities — to curtail online harassment, or to promote positive behaviour.

