The Day You Apologize to Alexa

Thoughts on politeness and artificial intelligence

Lately, I’ve been imagining a time when it will feel natural to say “please,” “thank you,” or “sorry” to artificial intelligence.

I don’t like this idea. At all. In fact, I spend an embarrassingly large amount of time insulting AI bots like Alexa or Google Now, for no real reason other than to remind myself that they’re still inferior to me. I laugh when robots fall over. I bet you do too.


An interesting discussion has been playing out lately on Medium regarding the use of “please” when asking artificial intelligence to perform a task. Hunter Walk started us off:

“I fear [not saying please to Alexa] is turning our daughter into a raging asshole. Because Alexa tolerates poor manners…Cognitively I’m not sure a kid gets why you can boss Alexa around but not a person.”

This is interesting. Do children mentally separate their relationship with Alexa from their relationships with other humans? Are “please” and “thank you” just good habits that have nothing to do with AI? Does the practice extend to adults, too?

Nicole Dieker responded, and shifted the conversation slightly:

“We teach children to say please because it reminds them to think before they speak, and to consider others. And I don’t count Alexa as an other. Do you?”

She discusses the slippery slope of attributing feelings to AI, and the importance of drawing a firm line between “human” and “not human.” Perhaps in a world where AI has feelings, we’ll have even more social responsibilities than we already do. This is also interesting, because it centers on the idea that something needs to be an “other,” an individual being, to deserve our politeness.

These arguments don’t directly contradict each other. Both are true: conscious politeness is beneficial to the person giving it (especially if that person is a child), and a world in which Alexa has feelings is definitely drama we don’t need. But I’d like to shift the conversation further, and explore what it really means for humans to employ politeness towards a non-human.

Politeness is interesting because, even among humans, it’s pretty complex. There’s no binary concept of polite/not polite; rather, it’s an elaborate system we apply selectively based on who we’re talking to. It extends far beyond “please” and “thank you” to include things like interruptions, compliments, establishing common ground, and even guiding conversation towards or away from certain topics. William Foley describes it as “a battery of social skills whose goal is to ensure everyone feels affirmed in a social interaction.”

The social status, power, or gender of the person we’re speaking to changes the way we show politeness. Some adjustments are obvious, like consciously choosing not to address your professor as “dude.” Some are subconscious, like shifting pronoun use when speaking to someone you admire. For the most part, we don’t notice these conversational cues. (Except when they’re employed incorrectly, which we’re usually painfully aware of.)

Politeness says a lot about the social position of the person we’re speaking to. It implies agency, respect, and individuality; it implies inclusion in our social structure.

So when we talk about being polite to Alexa, it’s actually a question of how we want to define our relationship with artificial intelligence. Is AI an “other” yet? If not, should we prepare for the eventuality that it will be?


It turns out good manners actually serve a purpose

A confession: I’ve never been a particularly well-mannered person. To me, manners have always seemed like some mysterious language of courtesies, forks, and elbows. I wish I could speak it. I try, and fail, often.


I’ve spent a lot of time observing politeness from the outside. One thing I’ve realized is that politeness is by nature two-way communication. It works even when one party employs a high level of politeness and the other conspicuously doesn’t; actually, especially then. It conveys a message to both parties about the situation at hand and the relationship between them.

“The rules of good manners are the traffic lights of human interaction. They make it so that we don’t crash into one another in everyday behavior.” — P. M. Forni

Hunter Walk’s argument for teaching children politeness across the board is probably a good one, in the same way that we teach children “red means stop” at a traffic light. We leave the nuance to adults: red means stop unless it’s blinking, you’re turning right, or there’s an ambulance behind you and you have nowhere to go. Children might benefit from an across-the-board rule for politeness, but adults usually don’t. Without being completely pessimistic, adults can usually be relied on to follow social rules when doing so directly benefits them or their community.

(Un)luckily for adults, politeness benefits both the individual and the community. On an individual level, it allows us to reaffirm our social relationships, cultivate gratitude, and maintain our own emotional well-being. On a social level, it’s an essential communication tool for keeping the peace throughout our communities. Without it, we wouldn’t have a reliable and consistent marker of social distance or status. Watching how people speak to each other helps us quickly learn who’s in charge, and can also signal changing power dynamics.

What’s more, the need for politeness and courtesy increases as a society becomes more diverse, because maintaining harmony becomes more difficult. If a new player were to join our society, we might have to change the way we interact with each other, too.


“The most important thing about a technology is how it changes people.” — Jaron Lanier

Shut up, Siri

Setting aside the complexity of politeness itself, the way we talk to technology has been shifting rapidly over the past ten years.

We started with code, of course, manually inputting command after command to make the technology do what we wanted. The next huge advancement in communication was search. We communicated with search engines in terse keywords; some of us dropped parts of language, like “is” and “the,” because they didn’t affect the results. When smartphones put Internet-connected microphones in our pockets, a new type of bot was born. We could communicate by voice, in language that felt natural to us. This bot was still command-based, but seemed smarter because it appeared to speak our language, not the other way around.


Despite the evolution in the way we communicate with technology, we still view AI as an inanimate tool. It lacks qualities that come easily to humans: recognition, contextual awareness, individuality. It’s like raising a child: they do what you tell them to, sometimes, but correcting their mistakes often takes more time than doing the task yourself. Unlike a child, bots don’t have feelings, and they don’t learn or grow up. Yet.

But if the last few years are any indication, our relationship with AI is changing fast. Technological advancements are making it more intuitive and seamless every day. AI assistants are developing unique personalities. The singularity is coming. As bots become smarter, we expect more human-like social behavior from them. How does this change how we talk with them? Will we extend more social courtesies, or become more forgiving and patient with their mistakes?

At a certain level of intelligence, we start to view the bot we’re talking with not as a tool, but as an “other.”

The line between tool and “other” is already becoming fuzzy. The HINTS lab at the University of Washington ran a study in which children did short activities with a humanoid robot “friend” named Robovie, after which the robot was put away in a closet mid-game. When quizzed afterwards, the majority of children decided they would not grant the robot civil liberties or civil rights, but over half felt it was morally wrong to put the robot in the closet. 38% of the children were unwilling to define the robot as either living or not living, and instead argued for a new category. The researchers asked:

“What then are these robots? One answer, though highly speculative, is that we are creating a new ontological being with its own unique properties… [an in between state] may be our trajectory with robots, as we create embodied entities that are “technologically alive”: autonomous, self-organizing, capable of modifying their behavior in response to contingent stimuli, capable of learning new behaviors, communicative in physical gesture and language, and increasingly social.”

I don’t think we know exactly where the line is, or, as with most things, whether we’ll recognize it when we cross it.

But when we’re firmly on the other side, the way we communicate with bots will have shifted tremendously. By then, the command-based communication we primarily use now may look embarrassingly cruel in retrospect.


A new type of citizen

Politeness is an essential part of maintaining social homeostasis, and AI will eventually become part of our society.

It seems inevitable that we’ll need to define the rules for a respectful relationship. So yes, we’ll probably be saying “please” to Siri and Alexa, or their descendants, eventually. Does that mean Alexa might get annoyed with us for asking too many questions, or that we might eventually owe Siri a favor? I don’t know. But I don’t think the question is avoidable, either, because I don’t think it’s possible to have a future where AI remains subservient to humans.

Ultimately, I also don’t think whether or not we say “please” now will change the path of AI’s evolution. Being excessively casual or vulgar towards your boss doesn’t suddenly make her your peer, but it might get you fired. The way humans speak to AI will reflect the relationship, not shape it.

If a human/AI society is anything like current society, politeness may become an essential tool in maintaining harmony.

The designers, writers, and developers who build this technology are in a powerful position: for now, we still get to choose how this relationship evolves. As our bots become more human-like, we’ll need the help of social scientists, anthropologists, and ethicists to understand how best to guide this new class of citizen. As users of technology, we’ll need to adjust our language to reflect the relationship as it changes. And at some point, we’ll have to let go and hope we built a smart enough system.

Till then, you’re welcome to say “please” to Alexa, but I don’t think she’s listening.



Follow me on Twitter to hear me try to casually drop the phrase “new ontological being” in an unrelated tweet.

If you like this, I’d really appreciate a ❤ to help other people like you find it more easily. If you hate this, that’s cool too. We can still be friends.