Mornings in my kitchen run more smoothly with Siri around. Like a well-trained butler, the persona of my iPhone is always on hand to deliver the day’s weather forecast, play my favorite podcasts, and send vaguely accurate voice-dictated messages when I’m busy making breakfast.

After Siri has listed my appointments for the day, I sometimes find myself saying, “Okay, thanks.” But while even the starchiest of butlers would acknowledge gratitude — if not with a breezy American “you’re welcome,” then with a chilly bow at the very least — Siri does nothing.

Of course, Siri isn’t really a butler. “She” is a robot. (For reasons about which one may darkly speculate, Apple, Amazon, Google, and Microsoft have all chosen female personas for their obliging artificial assistants. I have chosen a British male voice for my Siri as a minor act of protest.)

Obviously, I’m aware there isn’t actually a helpful little man living in my phone. But I still feel a bit put out when Siri fails to acknowledge my instinctive “thanks” at the end of an interaction. To get a response — which encompasses the banal (“I aim to please”), the confusing (“Why, thanks, Jenny”), and the downright sinister (“I live to serve”) — I have to say, “Hey, Siri. Thanks.” And that feels uncomfortably aggressive.

Assistants weren’t always female by default. French Cardinal Georges d’Armagnac and his secretary, by Titian. Image: Wikimedia Commons

There are lots of questions we could ask about this. Should companies program Siri and other personal assistants to respond naturally to gratitude? What should the response be? Should we attempt to curry favor with our future robot overlords? These sorts of questions are all downstream from a more fundamental one: Why do we sometimes feel moved to thank our artificial assistants in the first place?

It can’t be just because they’re handy around the house. For as long as there have been machines, they’ve been doing all sorts of useful things for us, ranging from the merely tedious to the humanly impossible. But the Romans didn’t say “gratias tibi” to their abacuses. And when you mumble a groggy “thanks” over your shoulder as you shuffle toward the air bridge, it’s not the plane you’re talking to.

When does a machine start to seem like an appropriate target for gratitude? And could a machine ever genuinely merit thanks?

What Makes You Want to Thank a Robot?

Encounters come in roughly two varieties. When you have an instrumental encounter, you manipulate something in service of your goals. Sometimes the entity in question is inanimate and doesn’t have goals of its own, and so you manipulate it without thinking too much about it.

Before you put the bread into the toaster, there’s no need to ask yourself, “But does the toaster really want to make toast today?” Sometimes that entity does have goals, but you ignore that fact for the purposes of the encounter — think of a drill sergeant barking at recruits.

In a noninstrumental encounter, on the other hand, you don’t view the thing you’re interacting with solely as a means to an end. You recognize that it has goals of its own, and that you don’t have an automatic right to expect it to pursue yours. When you want that entity to do something for you, you don’t poke it or twiddle its knobs or shout commands at it. You ask.

Catherine of Aragon pleading her cause before King Henry VIII, by W. Ward after R. Westall. Image: Wellcome Library no. 674067i

And that’s what’s interesting about artificial assistants. They’re machines — the kinds of things we usually interact with instrumentally. But we find ourselves saying that we asked Siri what the weather is like or requested that Alexa turn on the lights. We sometimes interact with AIs noninstrumentally — treating them, in other words, as though they were free agents, entities with goals deserving of respect. But why?

When Do We See Machines as Free Agents?

Getting gadgets to respond successfully to linguistic input is enormously complicated. The machine must recognize what you mean to convey before formulating a response that is not only coherent but also appropriate given the context. If the machine has a voice, it must be intelligible: no drastic pitch discontinuities, no off-putting DOES-NOT-COM-PUTE spacing of syllables. And the encounter itself must somehow flow: prompt timing of responses, recovery from misinterpretations, no awkward failures to respond.

Get all that right and you’ve made a sophisticated machine. But you haven’t yet made something that seems like a free agent. For that, the device needs to give the impression of being something more than a passive hunk of silicon. It must seem like something that can choose its actions.

There’s a deliberate drive afoot in Silicon Valley to make bots respond to us in ways that seem surprising. Theater graduates are being hired to write quirky rejoinders to things like “Siri, can you beatbox?,” “Alexa, what is the meaning of life?,” or “I love you, Slackbot.” It’s all just a bit of fun, of course. But it’s entertaining because when the bot delivers its lines, there’s a brief instant when it seems like a spontaneous agent.

When an interaction with an artificial assistant veers in a direction you didn’t anticipate due to the creative input of a clever scriptwriter, you have the fleeting impression that the bot itself has gleefully steered you off-piste.
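
The mechanics behind such lines can be quite simple. The sketch below is purely illustrative (it is not Apple’s or Amazon’s actual code, and the triggers and replies are invented): a small lookup table of scripted rejoinders that a hypothetical assistant consults before falling back on its general response pipeline.

```python
# Purely illustrative sketch: canned "Easter egg" replies behind a voice assistant.
# Triggers and lines are invented; real assistants are far more sophisticated.
import random

SCRIPTED_REPLIES = {
    "can you beatbox": ["Boots and cats and boots and cats..."],
    "what is the meaning of life": ["42, give or take."],
    "i love you": ["That's very kind of you to say."],
}

def scripted_reply(utterance):
    """Return a canned line if the utterance contains a known trigger, else None."""
    text = utterance.lower().rstrip(" ?!.")
    for trigger, lines in SCRIPTED_REPLIES.items():
        if trigger in text:
            return random.choice(lines)
    return None  # no Easter egg; hand off to the general response pipeline

print(scripted_reply("Alexa, what is the meaning of life?"))  # prints a canned quip
```

To the user, the reply feels spontaneous; under the hood, it is a scriptwriter’s line waiting for its cue.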

Within those little moments might lie the reason that being rude to Alexa feels different from hurling expletives at a TV. When a bot seems funny or thoughtful, we can’t help feeling that it’s an active conversation partner with its own ideas about us.

Future interaction with an Alexa descendant? Knoop: Rokoko-Kavaliere im angeregten Gespräch (Rococo cavaliers in animated conversation). Image: Wikimedia Commons

Does Alexa Really Deserve Thanks?

But it’s still silly to thank Alexa, you might suspect, no matter how many whimsical witticisms a bunch of liberal arts majors write for her. Everything she says and does is still fully determined by the decisions her designers made. And surely an entity has to be capable of acting freely in order to deserve gratitude for something it does. It needs to have chosen to help you when it could have done otherwise.

But there’s a wrinkle. Although nobody disputes the legitimacy of gratitude expressions among human beings, many philosophers have argued that we may not, in fact, be genuinely free agents. We might be more like Alexa than we think. And if we can be legitimate targets for expressions of thanks despite merely seeming to be free agents, then maybe machines like Alexa can be as well.

There is much theoretical ground-clearing to be done, and philosophers aren’t exactly renowned for settling disputes quickly (or at all). In the meantime, it might be prudent to be kind to your assistants, be they human or artificial.