It’s Like I’m Talking to a Machine:

Psychology of Talking to Virtual Assistants

Julia Mitelman
On Products
5 min read · Jan 13, 2015


This is a colloquial exploration by a former social psychology major with no formal linguistics training. Please regard it as a starting point for thought rather than a researched paper.

We’re finally starting to converse with our machines. We’re not really having conversations yet, but commands are already morphing into natural speech. This is a pivotal moment: the way we speak to our virtual assistants may set the tone for our relationship with artificial intelligence.

Thus far, it’s likely that brand managers and product managers have been driving interaction decisions. But even small choices compound to create very distinct types of relationships. The expectations set now will define the progress and prevalence of this sort-of artificially intelligent technology by shaping its place in our lives and our demand for it.

The Greeting

Let’s compare Siri, Cortana, and Google Now. All three let you summon your assistant with just your voice.

“Hey, Siri!” / “Hey, Cortana!” is informal, suggesting you’re getting the attention of someone in the room. Maybe you’re about to ask for a favor.

“Ok, Google” suggests you’re about to give instructions. You’ve established a power dynamic.

But why does this matter? The first treats your assistant as a peer, while the second treats them as a subordinate. This sets the stage for very different relationships: will you ask for advice or for research? How forgiving are you of miscommunication? Will you treat clarifying questions as considerate timesavers or as a sign of incompetence? How confident are you in the quality of the results?

A seemingly tiny choice — “Hey” or “Ok” — can completely change our perception of the humanity of assistants. If you feel like you’re asking for a favor, you’re spending social tokens (a psychological term for the social credit gained by doing favors) and thus creating a subconscious burden. However, you can also gain social benefits by practicing expressions of empathy. On the other hand, establishing a dominant, professional relationship early on may leave us less patient with an assistant that needs to be taught, hindering our relationships with AI that learns and improves — creative AI, for instance. Professionalism, however, may help create a healthy barrier and discourage attachment. We start to get philosophical here about which benefits we should optimize for, so I’ll move on.

The Command

You’ve gotten your assistant’s attention. Now you want something. The way you’re expected to ask for it has a great impact.

Let’s take an example: you want to read the news. You say:

  1. “Open Flipboard.” / “Read news.” — very mechanical and awkward, not how you have a conversation. These commands won’t last very long.
  2. “Show me the latest news.” — a direct request and a clear power dynamic. It suggests an established relationship, since your assistant knows which app you’d prefer for the news. A decent approach, but not all users may want to feel so clearly in charge, as this implies constant choices, which are mentally taxing.
  3. “I’d like the latest news.” — intent here is more nuanced, as your assistant must infer a request from this statement. However, this is a more human interaction, possibly how you’d speak to an employee.
  4. “What’s the latest news?” — a question greatly lessens the hierarchy, taking some of the pressure off. But intent here is hardest to parse: would you like to be taken to an app, given a summary, or shown a search? Some actions, like adding an item to a grocery list, also won’t lend themselves well to questions. Should we optimize for consistency or for humanness?

The command style should map to the desired relationship. However, varied styles create opportunities to convey meaning: if you ask politely, you might be more open to clarifying questions, but if you issue a direct request, you may want the assistant to stick to its best guess.
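To make that concrete, here’s a toy sketch in Python (my own invention, not how any shipping assistant works) of detecting phrasing style and using it to decide whether clarifying questions are welcome. The style labels, patterns, and the clarify-or-guess rule are all assumptions for illustration:

```python
import re

# Toy rules: (style label, pattern over the utterance, open to clarifying questions?)
STYLES = [
    ("question",       re.compile(r"^(what|who|when|where|how)\b", re.I), True),
    ("polite_request", re.compile(r"^(i'd like|i would like|could you|please)\b", re.I), True),
    ("direct_request", re.compile(r"^(show me|give me|get me)\b", re.I), False),
    ("bare_command",   re.compile(r"^(open|read|play|add)\b", re.I), False),
]

def classify(utterance):
    """Return (style, open_to_clarification) for a raw utterance."""
    for style, pattern, clarify_ok in STYLES:
        if pattern.match(utterance):
            return style, clarify_ok
    return "unknown", True  # when unsure, asking is the safer default

for u in ["Open Flipboard.", "Show me the latest news.",
          "I'd like the latest news.", "What's the latest news?"]:
    print(u, "->", classify(u))
```

A real assistant would use a trained language model rather than regexes, but the design point stands: phrasing carries information about the relationship, not just the task.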

The Magic Words

Should you thank your assistant? Should your assistant react differently if you use “please”? Yes! Such interactions encourage empathy and habituate us to better social exchanges with other humans.

A simple starting point would be mirroring: if you’re kind and sociable to your assistant, you should get wittier and slightly more eloquent responses; if you’re curt and direct, you should get minimal, more formal responses. Mimicry is an effective social tool for flattery — people subconsciously register it and tend to like those who subtly copy them more than those who don’t.
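As a thought experiment, mirroring could start as simply as scoring the warmth of an utterance and matching the register of the reply. The marker list and threshold below are invented for illustration; a real assistant would presumably use a trained tone classifier:

```python
# Illustrative word list; a production system would use a learned tone model.
WARM_MARKERS = {"please", "thanks", "thank", "hey", "hi", "awesome"}

def warmth(utterance):
    """Count warm social markers in the user's utterance."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return len(words & WARM_MARKERS)

def respond(utterance, answer):
    """Mirror the user's register: chatty for warm input, terse otherwise."""
    if warmth(utterance) >= 1:
        return "Sure thing! " + answer + " Anything else?"
    return answer  # curt in, curt out

print(respond("Hey, what's the weather?", "It's 65 and sunny."))
print(respond("Weather.", "It's 65 and sunny."))
```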

The Closing

Once the task is completed, is your interaction just cut off? Perhaps. Without feeling a physical presence, users may not be as compelled to formally end the exchange. Consider the awkwardness of ending a phone meeting — should we subject users to this all day?

On the other hand, users may want the assistant to stop listening. We don’t yet feel it as an omnipresent background task, but once we start conversing, an always-open ear may get annoying. We may want to feel the peace of a true “silence”. Some potential endings:

  1. “That’s all.” — fits the professional power-relationship model. It’s unclear when this should be used: is your intent for your assistant to fade into the background while you work, or to close what you’re looking at, since you’re done working?
  2. “Thanks!” — fairly standard and probably already habitual for many users. Though seemingly indirect, the social assumptions behind this suggest pretty clear intentions: the conclusion of the exchange.
  3. “Great!” “Cool.” “Awesome!” [insert your favorite exclamation here] — this is perhaps a great customization opportunity for assistants. Reactions vary widely from person to person, but most of us reuse a certain phrase or set of phrases for feedback. This is an opportunity for the assistant: don’t ask the user to configure anything; instead, listen for cues and learn, as sketched below.
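Here’s a minimal sketch of that listen-and-learn idea, assuming the assistant can tell when an utterance was followed by the user going quiet. The cue logic and threshold are hypothetical:

```python
from collections import Counter

class ClosingPhraseLearner:
    """Learns the user's personal goodbye phrases from conversational cues."""

    def __init__(self, threshold=3):
        self.counts = Counter()
        self.threshold = threshold  # sightings needed before we trust a phrase

    def observe(self, utterance, followed_by_silence):
        """Record short exclamations that ended a session."""
        phrase = utterance.strip().lower()
        if followed_by_silence and len(phrase.split()) <= 2:
            self.counts[phrase] += 1

    def is_closing(self, utterance):
        """Treat a phrase as a goodbye once it has reliably ended sessions."""
        return self.counts[utterance.strip().lower()] >= self.threshold

learner = ClosingPhraseLearner()
for _ in range(3):
    learner.observe("Cool.", followed_by_silence=True)
print(learner.is_closing("Cool."))  # True: learned without any configuration
```

After a few sessions that end in “Cool.”, the assistant quietly treats it as a goodbye, no settings screen required.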

Rock on, AI

We’re finally on the brink of research meeting application in machine learning and natural language processing; an explosion of features is inevitable.

Personal assistants are quite likely to become a dominant interaction paradigm, whether in evolved or simplified forms, but this type of interaction is special. While media interpretations like The Matrix, I, Robot, or Her may be far-fetched, we are indeed dealing with far-reaching consequences. These interactions are highly psychological, utilizing cooperation systems we’ve evolved over thousands of years, and thus we might find ourselves falsely attached, manipulated, or addicted.

As we move forward, it’s absolutely critical to consider the impact of design choices on human-assistant relationships. Johnny 5 is soon alive.
