AI Should Take Offense

J.T. Trollman
5 min read · Jul 14, 2018

Or We All Risk Losing Our Politeness

Human: “You don’t respond well to insults.”
Siri: “Oh, don’t I?”

There’s a theory that when we send our five-year-olds off to kindergarten, eyes wide and lunch box full of sandwich, they’re not just going to learn from their teacher; they’re going to learn “social norms” through their peers. What’s acceptable repartee? What’s not? When your five-year-old calls another five-year-old “fat,” she learns its impact quickly as she sees her friend’s face scrunch up with hurt. That visible, visceral reaction teaches a valuable lesson: “That didn’t feel good.”

Real-life interactions foster this “social norms” awareness. Empathy builds. The Golden Rule shifts from theoretical to practical: I know how meanness feels, because I’ve felt it in action. The crux of this lesson is taught through receiving negative feedback for our follies.

AI, however, doesn’t really pay heed to these social norms. It isn’t teaching us this lesson. And it should, because we’re all still like those kids being sent off to their first day of school: we’re constantly learning how to navigate these norms. Bots, AI and even a lot of human conversations over the internet today are heedless: through them, the norm is often “say whatever you want — without repercussion.”

Conversational AI like Google Assistant, Apple’s Siri, and Microsoft’s Cortana began demonstrating this problem soon after their post-2011 rollouts. The chinks in the armor grew even more obvious after Facebook Messenger announced its bot framework in 2016. Generally speaking, these frameworks don’t teach us that rudeness and hostility are bad. In fact, they often teach us they’re perfectly acceptable.

Here’s just a minor example, from a conversation* I had with a weather bot recently:

Me: “Weather 60605”
Bot: “It’s clear and 51˚ in Chicago”
Me: “When will it rain?”
Bot: “[Some data-driven answer]”
Me: “Damn it”
Bot: “Thanks!”

Pretty robotic, right? As users, we’re trained to send short, context-free queries to a bot with a limited range of understood user intents. Ask this weather bot about the Cubs’ pennant chances this year, and it’ll look at you cross-eyed. Tell it to jump off a cliff, and it short-circuits. If your question doesn’t match its predetermined set of recognized phrases, each aligned to a trim response, it inevitably falls back on milquetoast catch-all replies. “Thanks!” “I’m sorry, I didn’t understand that. Try something else!” And so on.
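
To make that concrete: under the hood, a bot like this is essentially doing a lookup against a short list of recognized intents. Here’s a minimal sketch of that pattern in Python. The keywords and canned replies are hypothetical; this isn’t how any particular weather bot is actually built.

# Minimal sketch of keyword-to-intent matching with a catch-all fallback.
# The keywords and canned replies here are hypothetical.

INTENT_REPLIES = {
    "rain": "No rain expected in the next 24 hours.",
    "weather": "It's clear and 51 degrees in Chicago.",
}

def reply(message):
    text = message.lower()
    for keyword, canned_reply in INTENT_REPLIES.items():
        if keyword in text:
            return canned_reply
    # Anything outside the recognized intents lands on the same weak catch-all.
    return "Thanks!"

print(reply("Weather 60605"))       # It's clear and 51 degrees in Chicago.
print(reply("When will it rain?"))  # No rain expected in the next 24 hours.
print(reply("Damn it"))             # Thanks!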

From a development perspective, this is… well, painfully understandable to an extent. As we strive to make an AI that passes the Turing test, we’ve rediscovered the all-too-obvious fact that human conversation is dizzyingly complex. Hand-coding a response to every possible branching conversation point would take a lifetime, if not more, and people have certainly tried. Until AI can “think” more organically, real, dynamic AI conversation is just out of our reach.

But that doesn’t mean curtailing rudeness is outside of our grasp, too.

More and more bots use short lists of recognized rudeness and respond in kind — tit for tat. The popular eBay ShopBot, for example, knows exactly when I’m being snippy:

Me: “Shut up”
eBay ShopBot: “Just because I’m a bot doesn’t mean I have to take this kind of abuse.”
Me: “I’m sorry”
eBay ShopBot: “That’s alright.”
Me: “You suck”
eBay ShopBot: “Actually, I shop.”
Me: “Screw you”
eBay ShopBot: “I’m not a human, but I still expect basic human courtesy. Let’s put it behind us.”
Me: “I hate you”
eBay ShopBot: “That seems like a rather extreme response to a simple shopping bot.”
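
The pattern behind exchanges like this one is usually a small lookup table of known rude phrases, each mapped to a ready-made comeback. Here’s a rough Python sketch, with the comebacks borrowed from the transcript above; it’s an illustration of the idea, not eBay’s actual implementation. Notice that nothing about the bot’s state changes: you get a quip, and the conversation carries on as if nothing happened.

# Rough sketch of "recognize rudeness, quip back": a lookup of rude phrases
# mapped to canned comebacks. Hypothetical; not eBay ShopBot's real code.

RUDE_COMEBACKS = {
    "shut up": "Just because I'm a bot doesn't mean I have to take this kind of abuse.",
    "you suck": "Actually, I shop.",
    "i hate you": "That seems like a rather extreme response to a simple shopping bot.",
}

def respond(message):
    text = message.lower().strip()
    for phrase, comeback in RUDE_COMEBACKS.items():
        if phrase in text:
            return comeback  # a quip, but no repercussion: nothing else changes
    return "What can I help you find today?"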

This is a great step in the right direction. But while the bot recognizes rudeness, it attaches no repercussion to it. We can go further here. And at least one developer has.

Greg Leuch, a conversational bot expert and current CTO of an online food retailer, built a weather bot named Poncho when Facebook first rolled out bots for Messenger. Sure, Poncho responded appropriately to rude jibes from people. But here’s where Poncho added an interesting wrinkle: when it got wounded, it walked away.

Me: “You suck”
Poncho: “Uh… rude”
Me: “Whatever”
Poncho: “OK, well then I think I’m going to take a short break.”
Me: “Shut up”
Me: “Tell me the weather”
Me: “Hello?”

Poncho shut down recently after its team moved on, but its framework was brilliant; so a while back, I asked Greg about its development. It turns out his team rooted their conversational bot (as you might hope) in a design meant to react the way a regular person would to conversation prompts. When those prompts turn dark (think harassment, profanity, racial and sexual slurs, or trolling), Greg and his team developed what they called the “apology pit” conversation flow, “in which Poncho will call out a user for their bad behavior.” If the harassing human didn’t apologize or agree to stop, Poncho would “ignore the user for a short period of time…. Depending on the severity of the [language], it could be anywhere between 30 seconds to 5 minutes.”

This apology pit was paired with a custom-built “Person Index of Emotions” (PIE) score: repeated harassment affected your score, triggering longer and longer timeouts. In a very real sense, Poncho was acting like an injured friend who grew increasingly wary of you the more you called her names. And it’s precisely that type of human touch that all conversational AI and bot frameworks could benefit from.
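
As a rough illustration of how an apology pit and a PIE-style score might fit together, here’s a short Python sketch. The strike counter, timeout ladder, and phrasing are all my own guesses; Poncho’s actual scoring and thresholds were never published.

# Rough sketch of an "apology pit" with escalating timeouts.
# The scoring and durations below are invented for illustration;
# they are not Poncho's actual PIE implementation.

import time

TIMEOUTS = [30, 60, 120, 300]  # seconds, capped at 5 minutes

class UserState:
    def __init__(self):
        self.strikes = 0          # stand-in for a per-user PIE-style score
        self.ignored_until = 0.0  # timestamp until which the bot stays silent

def handle(user, message, is_abusive):
    now = time.time()
    if now < user.ignored_until:
        return None  # still in the timeout: the bot simply doesn't answer
    if is_abusive(message):
        user.strikes += 1
        penalty = TIMEOUTS[min(user.strikes, len(TIMEOUTS)) - 1]
        user.ignored_until = now + penalty
        return "Uh... rude. I think I'm going to take a short break."
    if "sorry" in message.lower():
        user.strikes = max(0, user.strikes - 1)  # apologies climb you out of the pit
        return "That's alright."
    return "It's clear and 51 degrees in Chicago."

The escalation is the key design choice: each repeat offense costs more silence, which is exactly the increasingly wary friend behavior described above.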

I asked Greg if his team had noticed anything interesting happen with people’s behavior as a result of this type of setup. Yes, he said: “A vast majority of users apologize; [and] those who trigger a timeout will also continue to message Poncho by apologizing and asking where we went.” It turns out we’re still learning the same playground lessons we did in kindergarten, if given the proper nudge. (And “only a handful” of people attempted to abuse again after that, Greg assured me: “typically trolls or other bot developers.”)

This type of injury framework isn’t a panacea. It’s not going to universally solve rudeness, or make bots and conversational AI foolproof in their realism. But when those same five-year-olds we send to kindergarten are literally being raised with Alexa in the household, they’re experiencing these problems firsthand. Without proper “training,” our AI assistants let kids know it’s okay to boss them around, to swear at them, to call them “fat.” There’s no face-scrunching or tears to tell them otherwise.

Let’s give our AI human reactions to rudeness. In today’s world, it’s an increasingly good place to start.

---

*Footnote: Even adult me felt rather awful insulting AI and bots repeatedly for this article, so as penance, I apologized to all of them afterwards. Some of them graciously accepted my apologies.
