Affective Experiences With Conversational Agents

Xi Yang
Published in ACM CHI · 6 min read · Apr 24, 2019

This article summarises a paper authored by Xi Yang, Marco Aurisicchio and Weston Baxter. The paper will be presented at CHI 2019, a conference on Human-Computer Interaction, on Tuesday 7th May 2019 at 16:00 in the session Conversational UIs.

Photo by Thomas Kolnowski on Unsplash

Emotion is essential to conversational UX

Emotions are at the heart of human experience. This is especially relevant for conversational UX (in this article, the experience a user has with a conversational agent such as Google Assistant, Alexa, Siri or Cortana). Conversational agents are becoming ubiquitous in our daily lives, and they can easily affect our day in a positive or negative way.

Like starting the day with a cup of coffee, many of us start the day with a “good morning” to a conversational agent. We listen to the news, check the weather, or simply play our favourite morning music. A good conversational experience can boost our day, making us feel energised and positive. However, a poorly designed conversation may have just the opposite effect.

So, how do we design positive experiences with conversational agents? Two questions need to be answered: 1) what affective experiences do users have, and 2) what factors influence their affective responses?

To answer these questions, we conducted a survey study in which we collected 171 stories of users’ interactions with conversational agents and the emotions they experienced (for details, please read our paper). In this article, I share five key insights into users’ affective experiences and how to design positive ones.

How to design positive experiences with conversational agents

1. Consider the diversity of scenarios and contexts

The thematic analysis revealed four main use scenarios: 1) requesting basic information, for example, asking for the weather or the definition of a term; 2) searching for answers, which involves more complex information than the previous scenario; 3) getting recommendations, for example, on places to eat; and 4) accessing external services, for example, controlling smart home devices. These scenarios differ in how users asked questions, what they expected from the response, and whether there were follow-up interactions.

The diversity of contexts (e.g. location of use, social context) is increasing too. Although home was shown to be the dominant location of use, a considerable number of user experiences happened outside the home (in a car, at a workplace, or in a public place such as a bar or café). Moreover, our study showed that about half of the uses happened when the user was alone, and the other half with other people around (family, friends or colleagues).

To account for this diversity, we could:

  • Situate design in the scenarios of use. These scenarios are useful in helping to define user needs and expectations and to understand user behaviours in different situations.
  • Consider the surrounding environment. The environment is shaped by both the location of use (e.g. at home or in a car) and the social context (e.g. whether the user is alone or with their family); a small sketch of this idea follows the list.
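As an illustration of the second point, here is a minimal sketch, in Python, of adapting an answer to the location of use and the social context. The UsageContext fields, the adaptation rules and the example text are assumptions made for the sketch, not something prescribed by the study.

```python
from dataclasses import dataclass

@dataclass
class UsageContext:
    """Hypothetical context model; both fields are illustrative only."""
    location: str          # e.g. "home", "car", "work", "public"
    others_present: bool   # social context: is anyone else around?

def adapt_answer(answer: str, ctx: UsageContext) -> str:
    """Adjust a spoken answer to the surrounding environment."""
    if ctx.location == "car":
        # Keep in-car responses brief to limit driver distraction
        # (naive first-sentence cut, good enough for this sketch).
        answer = answer.split(". ")[0] + "."
    if ctx.others_present:
        # Offer to defer potentially private details rather than read them aloud.
        answer += " I can send the full details to your phone if you prefer."
    return answer

if __name__ == "__main__":
    ctx = UsageContext(location="car", others_present=True)
    print(adapt_answer("Your first meeting is at 9am with the design team. "
                       "It covers the quarterly review.", ctx))
```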

2. Introduce complexity and playfulness

Our data showed that interest was the most salient emotion experienced by participants, followed by joy and activation. In particular, in the searching for answers scenario, participants experienced a significantly higher level of interest than in the other scenarios, and they experienced a significantly higher level of joy when accessing external services.

This finding may be explained by emotional design theory: interest is often stimulated by the appraisal of novelty-complexity together with the user’s coping potential, while joy can be elicited through playful interactions. So the searching for answers scenario may have elicited more interest because its tasks were more complex than those of the other scenarios, and the accessing external services scenario may have elicited more joy because it created an interactive experience that felt playful to the user.

To stimulate positive emotions such as interest and joy, we could:

  • Increase the complexity of tasks and services while keeping the coping potential high, for example by introducing more challenging tasks.
  • Introduce playful interactions between the product and the user, for example by creating interactive experiences with external services or devices.

3. Deliver helpful as well as proactive responses

Inaccurate or irrelevant answers were found to cause negative experiences. At a minimum, we should avoid such answers and provide responses that are accurate and useful for solving users’ problems and getting things done.

What’s more, participants enjoyed proactive responses, that is, information delivered to the user in addition to what was requested. For example, a participant who used the agent to “look up information about the recent solar eclipse” said: “the interaction was very positive. It was suggesting what I may be interested in and this made my study fun”. Proactive responses could often help users proceed with their activities.

To generate proactive responses, we could do the following (a short sketch follows the list):

  • Anticipate needs based on contextual information. For example, in a good morning routine, the agent responds to the “good morning” request with a wide range of services, and these services are all relevant to the specific “morning” context.
  • Suggest relevant topics based on the search query. As in the above solar eclipse example, the participant was offered suggestions relevant to his subject of interest, and this was what made his study fun.
  • Provide actionable links based on the search result. For example, when giving recommendations, the agent could provide actionable links such as “make a reservation” or “order online” along with the search result.
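As a concrete illustration of the last two points, here is a minimal sketch, in Python, of assembling a proactive response from a direct answer, related-topic suggestions and actionable follow-ups. The catalogues, function name and example strings are hypothetical, invented for the sketch rather than drawn from the study.

```python
from typing import List, Optional

# Hypothetical catalogues mapping query keywords to related topics and
# result categories to follow-up actions; the entries are illustrative.
RELATED_TOPICS = {
    "solar eclipse": ["when the next eclipse will be visible near you",
                      "how to view an eclipse safely"],
}
FOLLOW_UP_ACTIONS = {
    "restaurant": ["make a reservation", "order online"],
}

def build_proactive_response(query: str, answer: str,
                             category: Optional[str] = None) -> str:
    """Return the direct answer plus proactive suggestions and actions."""
    parts: List[str] = [answer]
    # Suggest relevant topics based on the search query.
    for key, topics in RELATED_TOPICS.items():
        if key in query.lower():
            parts.append("You might also want to know " + " or ".join(topics) + ".")
    # Provide actionable follow-ups when the kind of result supports them.
    if category in FOLLOW_UP_ACTIONS:
        parts.append("I can also help you " + " or ".join(FOLLOW_UP_ACTIONS[category]) + ".")
    return " ".join(parts)

if __name__ == "__main__":
    print(build_proactive_response(
        "tell me about the recent solar eclipse",
        "Here is a short summary of the recent solar eclipse."))
    print(build_proactive_response(
        "find an Italian restaurant nearby",
        "Trattoria Roma is a well-rated Italian restaurant 500 m away.",
        category="restaurant"))
```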

4. Provide fluid, seamless and responsive interaction

This means an interaction that maintains a continuous voice conversation (fluid), integrates smoothly with external services (seamless), and responds quickly to user requests (responsive).

A negative experience often occurred when fluid interaction broke down; for example, most of the frustration happened when the agent could not understand the user. Seamless interaction was usually appreciated by participants: “[the agent is] helpful for things like adding events to the calendar and even ask for an Uber car”. Lastly, our data showed that delays in the response could cause negative affect; for example, a participant reported getting annoyed when the system was “slow in answering”.

To achieve this goal, we could:

  • Focus on enabling robust voice interaction as this can help reduce frustration due to weak comprehension.
  • Anticipate users’ preferences for external services and launch them accordingly.
  • Address any delay, for example by explaining to users why an answer cannot be delivered immediately (see the sketch below).
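Here is a minimal sketch, in Python, of the last point: if a backend call does not return within a short latency budget, the agent acknowledges the delay instead of staying silent. The budget, the fake backend and the wording are assumptions made for the sketch.

```python
import concurrent.futures
import time

# How long the agent waits silently before acknowledging a delay (assumed value).
LATENCY_BUDGET_S = 1.0

def slow_backend_lookup(query: str) -> str:
    """Stand-in for a slow external service call."""
    time.sleep(2.0)
    return "Here is the answer to '{}'.".format(query)

def answer_with_delay_notice(query: str):
    """Return the utterances the agent would speak, in order."""
    utterances = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(slow_backend_lookup, query)
        try:
            result = future.result(timeout=LATENCY_BUDGET_S)
        except concurrent.futures.TimeoutError:
            # Explain why the answer is not immediate, then keep waiting.
            utterances.append("This is taking a little longer than usual; "
                              "I'm still checking with the service.")
            result = future.result()
    utterances.append(result)
    return utterances

if __name__ == "__main__":
    for line in answer_with_delay_notice("train times to Glasgow"):
        print(line)
```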

5. Consider hedonic (task-unrelated) factors

These factors are: 1) comfort in human-machine conversation; 2) pride of using cutting-edge technology; 3) fun during interaction; 4) perception of having a human-like assistant; 5) concern about privacy; and 6) fear of causing distraction. These factors usually do not directly affect the task at hand. However, they were shown to influence how a user felt about the whole experience.

Lack of comfort was caused by the differences between a conversation with a machine and one with a real person, for example, “having to speak very clearly and loud” or “talking to no one feels like talking to self”. It could also arise when the user was not familiar with the agent, for example, “I was used to talking to Siri so it [Google Assistant] was very different. I was uncomfortable because I wasn’t used to it”.

Participants also expressed their connectedness to the agent: “I felt well taken care of”, and “I feel someone is there to help me all the time. I never feel alone”.

In addition, some participants raised concerns about privacy. Others worried that interacting with the agent could distract them from the task at hand, especially when multitasking.

To address these factors, we could:

  • Educate users about talking to the agent to help them set the right expectations and reduce discomfort.
  • Design the agent with a caring personality to convey the traits of a human assistant.
  • Build trust with the user to reduce concerns for privacy and distraction.

For more details, please read our paper, or contact me directly:-)

Xi Yang, Marco Aurisicchio, and Weston Baxter. 2019. Understanding Affective Experiences With Conversational Agents. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 12 pages.
