Best practices: designing a persona for your assistant

Lessons in the theory and practice of designing a virtual assistant from IBM Data and AI Customer Care

James Walsh
IBM watsonx Assistant
7 min read · Feb 3, 2023

--

All images copyright IBM Corp 2022

Every day, the customer care experts in IBM Data and AI work with some of the world’s largest enterprises on designing and implementing virtual assistants to handle the never-ending flow of customer service requests. During initial workshops, clients and IBM customer care experts spend two full work days engaging in enterprise design thinking exercises geared toward producing a robust plan for what the assistant will do and how it will do it.

The keystone of the planning process is defining a persona or, as the customer care experts like to call it, a character for the assistant. Here, we outline some of the theories and design principles that define IBM’s conversation and voice user interface (VUI) best practices.

Character first

Before writing a single line of dialog, experienced conversation designers will define a character for their digital assistant. This might seem counterintuitive; best practices teach us not to present virtual assistants as if they’re real people, so why bother to design a character in the first place?

There are two overarching reasons:

  1. Users will unconsciously and automatically infer a character or personality when interacting with your assistant.
  2. Character definition guides conversation designers to write consistent, coherent dialog and sidesteps the risk of jarring shifts in the assistant’s tone and character, which conversation analysts refer to as the “split personality” effect.

Research consistently demonstrates that users anthropomorphize technology, including and especially conversational interfaces. Failing to imbue your assistant with character means that users will project one onto it.

Not accounting for character up front in the design process also leads to inconsistencies in the assistant’s output, resulting in the aforementioned “split personality” effect. Users find it jarring and unpleasant when the voice of the assistant shifts in illogical, inconsistent ways.

This doesn’t mean that it’s appropriate to create a full-blown character with a name, a backstory, and so on. Conversation designers should always be transparent about what their digital assistant is and what it can and can’t do. That said, users bring the same standards to conversations with assistants as they do to conversations with humans, and assistant builders must design accordingly.

Computers are social actors (CASA)

The CASA research paradigm posits that, although people don’t consciously believe technology deserves polite social treatment, they apply and expect the same basic standards of social interaction with machines as they do with other humans. Studies have shown that users respond favorably to flattery from technology, respond favorably to computers whose personalities match their own, and are generally predisposed to treat computers like humans.

Further, users in research settings have demonstrated a tendency toward politeness and social etiquette regardless of the conversational medium: they display the same social behavior whether the interface takes the form of voice technology or simple text.

As the Stanford researchers Byron Reeves and Clifford Nass put it in their groundbreaking book The Media Equation: “The presence of voices was apparently no big deal, at least as far as creating a social presence. It doesn’t take virtual reality to create the sense that another person is present; people don’t need much of a cue to respond socially”.

The implication for conversation designers is that users expect a coherent, consistent persona when they’re interacting with a chat or voice interface, and the designer needs to provide one.

“Escape through design”

That’s the title of one of the subsections of Masahiro Mori’s seminal essay The Uncanny Valley. Professor Mori hypothesized that people’s response to a technology shifts from empathy to revulsion as it approaches, but fails to attain, a human appearance.

Replica of Fig. 1 from The Uncanny Valley by Masahiro Mori from Energy, 7, 33–35, 1970.

Mori advised designers to “create a safe level of affinity” by pursuing design that does not approach human form too closely. Mori’s focus was prosthetics, but his thesis of “escape through design” has become a fundamental tenet of conversation and VUI design.

With this in mind, the thrust of conversation design becomes, in the words of IBM’s Dr. Lisa Precht, “to make the assistant not appear human, but still create an interaction that is as natural as possible.” A well-designed conversation fosters trust between the user and the assistant through a natural linguistic exchange, without deceiving the user about the nature and capabilities of the assistant.

User-centered design

Dr. Precht writes that experience has taught her that “people interacting with digital assistants are primarily interested in achieving their goals quickly and in having a very transparent experience.” This observation echoes Cohen et al., who in their seminal work Voice User Interface Design articulate the two principles of VUI design as:

  1. Understanding basic human capabilities is key to the design of effective user interfaces
  2. Understanding the user’s needs and goals, in the context of business goals, is key to the design of successful applications

Conversation designers, especially ones building assistants for enterprises, must craft their assistant’s character to be consistent with their brand’s identity, appropriate for the customer service situation, and coherent with the position of the assistant relative to the end user. An assistant that uses humor and empathy can engender a positive user experience, but that trust can be compromised if the assistant has to deliver unwanted news and does so in a way that is incongruous with its previous behavior.

You don’t have to strain your imagination to think of a situation where a user is let down by inadequate design. If a banking assistant begins a loan inquiry by presenting itself as exceedingly friendly, then has to turn around and tell the user, in a flat, purely factual manner, that they don’t qualify for a loan, the user will feel tricked and angry.
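To make the contrast concrete, here is a minimal sketch of how the same warm but honest voice can be kept for both outcomes of that loan decision. The function name, signature, and wording below are illustrative assumptions, not IBM dialog copy.

    # Hypothetical illustration of persona consistency: the rejection is written
    # in the same warm, empathetic voice as the approval, so the assistant's
    # character doesn't shift when the news turns bad.
    def loan_decision_message(approved: bool, first_name: str) -> str:
        if approved:
            return (
                f"Good news, {first_name}: you qualify for this loan. "
                "I can walk you through the next steps whenever you're ready."
            )
        return (
            f"I'm sorry, {first_name}, you don't qualify for this loan right now. "
            "I know that's disappointing. I can show you what would need to change, "
            "or connect you with a loan specialist."
        )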

This is why crafting a character before writing dialog is paramount. The assistant’s persona must be consistent across all user scenarios in order to engender trust in the user and ensure that the user will continue to use the assistant to self-serve in the future.

Designing the assistant’s character

IBM Customer Care brings all of this theory to bear in workshops with clients where they generate a character appropriate to their virtual assistant. They set up a series of continuums with two poles at each end (a simple sketch of how these settings might be recorded follows the figure below):

  • Funny vs. Serious
  • Respectful vs. Disrespectful
  • Natural vs. Artificial
  • Expert vs. Novice
  • Formal vs. Informal
  • Enthusiastic vs. Calm
  • Empathic vs. Distant
  • Warm vs. Cool
Dr. Precht’s visualization tool for the various continuums of character.
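For concreteness, here is a minimal sketch of how a workshop’s dial settings could be captured in code. The PersonaDial structure, the 0.0–1.0 scale, and the example values are illustrative assumptions, not part of watsonx Assistant.

    # A minimal sketch of recording persona dials; names and scale are
    # illustrative, not a watsonx Assistant API.
    from dataclasses import dataclass

    @dataclass
    class PersonaDial:
        left_pole: str   # e.g. "Funny"
        right_pole: str  # e.g. "Serious"
        value: float     # 0.0 = fully left pole, 1.0 = fully right pole

    # Example: a persona placed toward "Serious", "Formal", and "Empathic".
    example_persona = [
        PersonaDial("Funny", "Serious", 0.7),
        PersonaDial("Respectful", "Disrespectful", 0.0),
        PersonaDial("Natural", "Artificial", 0.4),
        PersonaDial("Expert", "Novice", 0.2),
        PersonaDial("Formal", "Informal", 0.3),
        PersonaDial("Enthusiastic", "Calm", 0.6),
        PersonaDial("Empathic", "Distant", 0.2),
        PersonaDial("Warm", "Cool", 0.3),
    ]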

Clients often begin the workshop by placing a dial very far toward one pole, but customer care experts usually guide them toward a more moderate position. Dr. Precht cites “Natural vs. Artificial,” in the context of building an IT help assistant, as an example where the initial instinct may not put effort and benefit in the right proportion.

Obviously, the gut reaction is to want the assistant to behave in a natural manner, but that would mean constantly varying its responses to questions, the same way a human being would. While the assistant’s answers should be formulated in natural language, designing and implementing an assistant that endlessly varies its responses might not be feasible or necessary, so expectations of how “natural” the assistant will seem should be adjusted accordingly.
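As a rough illustration of that tradeoff, an assistant can still feel reasonably natural by rotating through a small, hand-reviewed pool of phrasings rather than improvising new wording every time. The function name and variant text below are hypothetical.

    # Hypothetical example of bounded variation: a small, hand-written pool of
    # phrasings, each reviewed against the same persona, instead of endless
    # human-like variation.
    import random

    PASSWORD_RESET_PROMPTS = [
        "Sure, let's reset your password. What's the email address on your account?",
        "I can help with that. Which email address is your account registered under?",
        "No problem. To reset your password, I just need the email address on your account.",
    ]

    def password_reset_prompt() -> str:
        return random.choice(PASSWORD_RESET_PROMPTS)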

For clients with complex use cases, for example where the assistant is designed as a brand representative, a more natural approach can be appropriate and Customer Care conducts a more comprehensive character development exercise.

All participants in the workshop name several real people and fictional characters and settle on a handful that they believe would work as avatars for their assistant. The participants then add key quotes and generate a list of adjectives for the assistant. A matrix of associations is thus captured, both on a planning mural and in the imaginations of the participants, and a clear picture of the assistant’s character begins to emerge.

Learn more

We’ll be covering designing a conversation flow and how designing a character dovetails with it in forthcoming articles. For more on the pre-build process, Part I of our Getting Started series outlines how to plan your assistant so that you can start conservatively and launch quickly.

Below is some of the research and writing that has influenced IBM Data and AI Customer Care’s practice. All of it is available via Google Scholar.

Cohen, M. H., Giangola, J. P., & Balogh, J. (2004). Voice user interface design. Addison-Wesley Professional.

Lee, E. J. (2010). The more humanlike, the better? How speech type and users’ cognitive style affect social responses to computers. Computers in Human Behavior, 26(4), 665–672.

Mori, M. (1970). Bukimi no tani [the uncanny valley]. Energy, 7, 33–35.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge University Press.

Smestad, T. L. (2018). Personality matters! Improving the user experience of chatbot interfaces: Personality provides a stable pattern to guide the design and behaviour of conversational agents (Master’s thesis, NTNU).
