Why Do You Say ‘Please’ to Siri?

Kara Hanson
Published in The Startup
6 min read · Feb 10, 2020
Image by silviarita via Pixabay

My friend picked up her iPhone: “Hey Siri, please….”

“Why did you say ‘please’?” I asked her.

She thought a few seconds before she responded, “Well, it’s polite. I was taught to say please when asking for something.”

But Siri is not a person, I argued. It’s a computer program embedded into a device of plastics, metals, and silicon. She shrugged.

Saying please and thank you to digital assistants like Siri, Alexa, and Google Assistant is a common practice, but few of us have stopped to think about why we do it. These are machines, not fellow human beings. Why do we really feel the need to be polite to them?

Image by Andi Graf from Pixabay

Manners?

Parents, psychologists, and others have debated whether users, especially children, should say please and thank you when making requests. Many parents teach their kids to say please and thank you to their Amazon Echo Dot (kids' edition) because, they say, it helps reinforce good manners. The makers of digital devices must think so, too. You can set Alexa not to respond to your kids unless they say please, and Google redesigned its Assistant to thank and praise you if you preface commands with please. Those who oppose the practice argue that it's important for kids to learn the difference between speaking to human beings and speaking to devices.

But none of those opinions questions why this is even an issue. To understand that, we need to closely examine our relationship to our electronic devices.

Social Scripts

It’s possible we don’t even notice when we say please or thank you to our digital devices. My friend was right. It’s just what we were taught. Our reflex to be polite is so deeply ingrained in us that we “mindlessly apply social rules and expectations to computers,” according to the late Clifford Nass.

The social rules Nass is referring to are called social scripts. These are small, culturally specific behaviors that help us conform to the norms and expectations of our communities. All children, beginning in infancy, learn social scripts that we generally take for granted as common sense or good manners, such as the proper way to greet friends vs. strangers, how to eat politely, and yes — saying please and thank you. These social scripts become so internalized that we perform them without stopping to think about it. They are mindless behaviors.

“People were polite to a computer!” — Clifford Nass, researcher

To test whether people extend the social script of politeness to computers when interacting with them, Nass and his colleagues conducted an experiment. They asked people to take a programmed lesson from a computer and then evaluate the computer’s performance. All participants took the same tutorial, but afterwards they were divided into three groups for the evaluation survey. The first group took the survey on the same computer they used for the lesson. A second group took the survey on a different computer, and a third group filled out the same questionnaire on paper.

The results? The first group — the one that completed the evaluation on the same computer that taught them — rated their computer's performance significantly higher than either of the other two groups (Nass et al., 1999). The researchers believed the learners were reluctant to directly criticize their computer-teacher. "In other words," Nass et al. concluded, "People were polite to a computer!" (Nass & Moon, 2000).

Yet Nass and his associates recognized that their explanation wasn't complete. They wrote (my emphasis), "To elicit mindless social responses in this context, individuals must be presented with an object that has enough cues to lead the person to categorize it as worthy of social responses while also permitting individuals who are sensitive to the entire situation to note that social behaviors were clearly not appropriate" (Nass & Moon, 2000).

So computer users apply social scripts when interacting with computers. Why? And so what?

That brings us to anthropomorphism.

Anthropomorphism

You may remember the term anthropomorphism from studying literature. It refers to the practice of attributing human characteristics to non-human entities or inanimate objects.

Most of us do this. We treat our pets like children, dress them in clothing, and swear that they smile at us. We give hurricanes and blizzards human names and regard them as opposing armies that are attacking us. We’re even more likely to anthropomorphize things that move or have autonomous features, such as vehicles. People name their cars and refer to them by gendered pronouns.

Anthropomorphism has grown along with the digital age. It’s no coincidence that the homicidal computer in the movie 2001: A Space Odyssey has an acronym that sounds like a human name, HAL. Similarly, it’s not uncommon for people to name their computers or treat their mobile phones as friends. You probably find yourself calling your car’s GPS “she,” as in “She took me to the middle of nowhere.” (Why GPS and other digital assistants are almost always programmed as female is also a fascinating topic.)

Nass had a name for this kind of anthropomorphism: computers as social actors (known as CASA). Nass and his colleagues believed that the role of computers and other digital devices is "fundamentally social" (Nass et al., 1994). They also touched on a possible reason why. They concluded, "Traditionally, when interface agents have been created, they have been endowed with faces, personalities, and a rich human representation" (77).

And now we get to the heart of the matter. We say please and thank you to computers because they are designed to work in our lives as fellow human beings.

Image by Andi Graf from Pixabay

The Quasi-Other

Nobody’s saying that people mistake their digital devices for real people. Still, it’s a fact that these devices are purposely programmed to display human characteristics. Take Siri as an example. We say something, and Siri responds to us, speaks in a female-sounding voice, and uses personal pronouns (I, me) to refer to itself. Ask Siri if it’s real, and it may answer, “I’m a virtual assistant, not an actual person. But you can still talk to me.” Siri is also programmed to have a sense of humor. (Ask “how much wood could a woodchuck chuck?”)

Apps and features can be added to enhance the human-like interaction of these devices. The Amazon Echo can tell your kids bedtime stories and sing them lullabies. Engineers are working to make devices even more interactive by creating robot bodies for them.

It’s no wonder, then, that we say please and thank you and otherwise tend to respond with the same courtesy that we give other people in our lives. (Or rudeness; have you found yourself cursing at Alexa or Siri?) These digital assistants are what philosopher Don Ihde calls quasi-others: they are almost like other people, but not quite. He explains: “Technological otherness is a quasi-otherness, stronger than mere objectness but weaker than the otherness found within the animal kingdom or the human one” (Ihde, 1990, 100).

Ihde gives the example of a video game. You’re playing against the computer, but it feels like your opponent is another person. He states, “In competition there is a kind of dialogue or exchange. It is the quasi-animation, the quasi-otherness of the technology that fascinates and challenges. I must beat the machine or it will beat me” (100–101).

Ihde developed his theory in the 1990s, well before virtual digital assistants existed. These days, Siri, Alexa, and other assistants have further blurred the line between electronic devices and humans. Do we still consider them quasi-others? Or have we dropped the quasi part?

We're not sure. It's like when you're dining with a group of friends and you accidentally kick something under the table. It could have been your friend's foot, or it could have been the table leg. Just to be safe, you apologize.

So we say please and thank you to Siri and Alexa. Just to be safe.

References

Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.

Nass, C. & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.

Nass, C., Moon, Y., & Carney, P. (1999). Are respondents polite to computers? Social desirability and direct responses to computers. Journal of Applied Social Psychology, 29(5), 1093–1110.

Nass, C., Steuer, J., & Tauber, E. R. (1994, April). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 72–78).
