UXing for chatbots — continuing the conversation
Chatbots, or conversational interfaces, were once heralded as an alternative to apps. “Bots are the new apps,” claimed Satya Nadella, the CEO of Microsoft. “People-to-people conversations… That’s the world you’re going to see in the years to come.”
Skip forward to 2018 and a somewhat different narrative is starting to emerge. “Facebook’s virtual assistant M is dead. So are chatbots” announced a Wired headline. It’s a contentious claim, and deliberately so, but it contains a truth — far from taking over the world, chatbots have struggled to establish themselves.
The purpose of this article is to look at why a once promising development has been declared “dead”, and what it means for the future of bots. In the first part, I delve into the history of M; in the second, I list some of the issues currently affecting the design of bots and suggest ways in which we at Kainos are addressing them.
When the conversation about chatbots first kicked off, I was sceptical. Two of those doing the most talking were Facebook and Microsoft, to whom I’ll return in a moment. They were soon joined by a drumbeat of think pieces that saw conversational interfaces as the condition to which all UIs should aspire. At its height, writers were going off the scale with wild and fanciful ideas of what they thought would happen. This, for example, from a Medium post:
Alongside these voices, there were also those of people like Chris Messina, who coined the term “conversational commerce” — I really admire Chris and enjoy his writing, even if I don’t always agree with his conclusions. And then there were Facebook and Microsoft. Why, I wondered, were they going all-in on a technology that, until then, had been a playground for tech enthusiasts? Did they know something that the rest of us didn’t?
The answer was not long in coming. When the first wave of Messenger bots arrived, the reality of where we actually were hit hard:
- AI technology was not mature enough
- In their haste to get ahead, Facebook failed to bring their users with them by setting clear expectations — or, as Wired recently commented, “they put no bounds on what M could be asked to do”.
It’s the second of these that, for me, is the most striking — there should have been a lot more guidance about what the platform could do. Or, to put it a different way, when there are limits on what you can do, you need to be clear on what those limits are.
As designers, we all know the devil’s in the detail — it’s the implementation that counts. In those early bots there was a big problem with implementation, and much of it lay at the door of free-text conversations, with one report suggesting that they “failed to fulfil” 70 percent of user requests. That’s a big number.
Since then, Facebook have changed their advice to developers. Together with other leading bot platforms, they’ve recognised that natural language processing (NLP) isn’t a silver bullet for creating intelligent chatbots. The simple reason for this is that the technology is still in the proving stages, and will be for some time.
But there’s a second, equally important message that Facebook were keen to get across.
Simple as it sounds, Facebook were telling their developers that the experiment in “free-form typed responses” wasn’t working. Instead, developers should focus their efforts on a different approach, one that makes use “of buttons, quick replies and the persistent menu to structure user input”.
It was a welcome change. My hope is that it proves enduring and that we can now get on with the job at hand — building bots that are genuinely useful to users.
Now for the design principles. First, though, a caveat.
You’ll notice that the title of this piece is UXing for chatbots. As we know, UX is used very broadly — too broadly in my opinion. But the one thing it doesn’t include is content design, which is a separate discipline. While some of my points are relevant to content design, they don’t address the detail of what a content designer would have to say. It’s an omission that needs to be addressed as, arguably, content design is the area that most impacts the design of bots.
1. Design your bot
1.1 Give it a personality
There are at least two people in any conversation. Each one has their own character traits, gestures and motivation, which together make them impactful and meaningful. In short, they each have a personality. The same goes for a bot. It too should have qualities that make it impactful and meaningful.
Personality, however, isn’t just a function of voice and tone. In his book, Designing for Emotion, Aarron Walter says that if a product is to be successful, it needs to be functional, reliable and usable. Without these, perceptions of the product are likely to suffer. Recent research seems to back this up — when asked about the characteristics they rated most highly in a bot, a majority of users said that an “AI needs to be smart and efficient first… witty and personable second”.
1.2 Consider giving it a gender
If we’re going to give something a personality, should we also give it a gender? Not always. A recent survey of chatbots found that gender distribution was evenly split between bots that were female, male or had no specific gender. What seems clear is that all of the bot’s phrasings should be gender neutral and factual; that is, they should avoid gender stereotypes.
2. Writing welcome messages
2.1 Set expectations
A bot that aims to be all things to all people is not helpful and will most likely end in disappointment. A far more helpful approach is to manage your users’ expectations by setting out your stall — that is, highlight the areas of conversation your bot can comfortably manage. This is best done at the beginning of an interaction and could, for example, take the form of a menu that gives the user some choices about the kind of conversations the bot can handle. The menu should be thought of as a way of getting the user used to the bot and what it’s capable of.
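To make that concrete, here is a minimal sketch of such a welcome message. The bot’s name, the featured topics and the payload shape (loosely modelled on the quick-replies convention used by Messenger-style platforms) are all assumptions, so adapt them to whatever platform you’re building on.

```python
# A minimal sketch of a welcome message that sets out the bot's stall.
# The bot's name and the featured topics are made up; the payload shape
# loosely follows the Messenger-style quick-replies convention.

FEATURED_TOPICS = [
    ("Book an appointment", "TOPIC_APPOINTMENT"),
    ("Opening hours", "TOPIC_HOURS"),
    ("Talk to a person", "TOPIC_HUMAN"),
]

def build_welcome_message() -> dict:
    """Return a first message that says what the bot can, and can't, do."""
    return {
        "text": ("Hi, I'm the clinic's booking assistant. I can help with the "
                 "topics below. I can't answer medical questions."),
        "quick_replies": [
            {"content_type": "text", "title": title, "payload": payload}
            for title, payload in FEATURED_TOPICS
        ],
    }
```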
And when things go wrong and the user finds themselves down a rabbit hole, you can return them to the menu (of featured conversations) — it will reiterate your bot’s capabilities and give the user a chance to start over again.
In time, you should also consider the option of making the menu contextual, or intelligent. An example of this would be a user who had previously searched for advice on STDs — the bot would be intelligent enough to know this and only display quick replies that are relevant to that user.
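As a sketch, making the menu contextual can be as simple as re-ordering the same featured topics so that those the user has engaged with before float to the top. The topic list, and the idea that you store the payloads a user has previously chosen, are assumptions about how you track history.

```python
def build_contextual_menu(all_topics: list[tuple[str, str]],
                          previous_payloads: set[str],
                          limit: int = 3) -> list[dict]:
    """Float topics the user has used before to the top of the menu."""
    # sorted() is stable, so previously-used topics keep their relative order.
    ranked = sorted(all_topics, key=lambda topic: topic[1] not in previous_payloads)
    return [
        {"content_type": "text", "title": title, "payload": payload}
        for title, payload in ranked[:limit]
    ]

# Example: a user who has previously asked about sexual health sees that
# topic first, while a new user sees the default order.
topics = [
    ("Book an appointment", "TOPIC_APPOINTMENT"),
    ("Opening hours", "TOPIC_HOURS"),
    ("Sexual health advice", "TOPIC_SEXUAL_HEALTH"),
]
print(build_contextual_menu(topics, previous_payloads={"TOPIC_SEXUAL_HEALTH"}))
```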
2.2 Flag useful keywords
It’s very common for a user to switch the topic of conversation or change their mind about an input they have just entered. Often this will happen in the middle of a conversation, when the bot is expecting a different response.
Depending on what platform you’re using, you’ll find that a change of topic is handled in different ways — either the bot will listen for a change of intent or the user will signal the change by entering a keyword (such as “help”, “cancel” or “start again”).
If we’re relying on the user entering a keyword, we need to ensure that they’re familiar with these terms and how to use them. One way of doing this is to draw attention to the keywords when welcoming a user to the bot, together with some messaging on how to use them.
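In practice, the check for these keywords needs to run before any other routing, so that they work at any point in a conversation. The sketch below assumes a simple session object and uses illustrative handler names; the keyword list itself is not meant to be definitive.

```python
# Global keywords are checked before the current step's expected input, so
# "help", "cancel" and "start again" work mid-conversation.
KEYWORDS = {
    "help": "show_help",
    "cancel": "cancel_current_task",
    "start again": "restart_conversation",
    "start over": "restart_conversation",
}

def route_message(text: str, session: dict) -> str:
    """Return the name of the handler that should deal with this message."""
    normalised = text.strip().lower()
    if normalised in KEYWORDS:
        return KEYWORDS[normalised]                        # keywords always win
    return session.get("expected_handler", "fallback")     # otherwise stay in the flow

print(route_message("Start again", {"expected_handler": "collect_date"}))  # restart_conversation
```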
2.3 Give instructions on how to start over
A user should be able to exit a conversation and start again. The welcome message should include a brief instruction on how to do this.
3. Conversations
3.1 Use structured input, where possible
Structured inputs are conversations that move sequentially down a predetermined path. They begin with a keyword or phrase, which is normally displayed as a button or menu. On selecting a button, the user is guided through a series of steps that allow the bot to collect information or take the user through a task. One of their key strengths is that they save the user time. Another is that by signposting the user, you save them from ending up at a dead end, which is one of the main frustrations of free-text input.
Importantly, free text and structured input don’t cancel each other out. They’re not binary. Instead, they should be used together in what Tomaž Štolfa calls a “blended” solution. When this happens they can enrich a conversation and take it to a new level of engagement — or what Štolfa calls “the best of both worlds”.
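Under the hood, a structured flow can be as simple as a predetermined sequence of steps, each with a prompt and, where it helps, a fixed set of button choices, plus a free-text step where buttons would get in the way. The booking flow and field names below are invented purely for illustration.

```python
# A predetermined flow: each step has a prompt and, optionally, button
# choices. Steps with choices=None accept free text - the "blended" approach.
BOOKING_FLOW = [
    {"field": "service", "prompt": "Which service do you need?",
     "choices": ["GP appointment", "Nurse appointment"]},
    {"field": "day", "prompt": "Which day suits you?",
     "choices": ["Today", "Tomorrow", "Later this week"]},
    {"field": "name", "prompt": "What name should I book it under?",
     "choices": None},
]

def next_step(answers: dict) -> dict | None:
    """Return the first unanswered step, or None when the flow is complete."""
    for step in BOOKING_FLOW:
        if step["field"] not in answers:
            return step
    return None

# Example: after the user has picked a service, the bot asks about the day.
print(next_step({"service": "GP appointment"})["prompt"])  # "Which day suits you?"
```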
3.2 Keep the user informed about what the bot understands
In some conversations, and especially structured ones (or what we’re calling ‘structured input’), it’s the bot that determines the subject.
With unstructured conversations, it tends to be the other way round — it’s the user that decides the subject. When this happens, the bot should communicate back to the user what it understands is the subject of the conversation. For example, if I want to book an appointment, the bot should know that “appointment” is the subject of my request and send an acknowledgement.
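A sketch of what that acknowledgement might look like: echo the detected subject back to the user before acting on it. The detect_intent() function below is a crude keyword-matching stand-in for whatever NLP service you actually use, and the confidence threshold is an assumption.

```python
def detect_intent(text: str) -> tuple[str, float]:
    """Stand-in for a real NLP service: returns an intent name and a confidence."""
    if "appointment" in text.lower():
        return ("book_appointment", 0.9)
    return ("unknown", 0.0)

def acknowledge(text: str) -> str:
    """Tell the user what the bot thinks the subject of the conversation is."""
    intent, confidence = detect_intent(text)
    if intent == "book_appointment" and confidence > 0.7:
        return "Okay - booking an appointment. Which day suits you?"
    return "Sorry, I'm not sure I've understood. Could you rephrase that?"

print(acknowledge("I'd like to book an appointment please"))
```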
3.3 Consider abandoned conversations
If a user leaves or abandons a conversation and doesn’t return for, say, two weeks, should the conversation be resumed or started afresh? There’s no single right answer to this — I recommend testing it to see what works best with your users.
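If you do want to experiment with a cut-off, a sketch might look like this. The two-week window is simply the figure from the question above, and it’s exactly the kind of value to test with your users.

```python
from datetime import datetime, timedelta, timezone

RESUME_WINDOW = timedelta(weeks=2)  # an assumption - test what works for your users

def should_resume(last_message_at: datetime, now: datetime | None = None) -> bool:
    """Resume the old conversation if the user returns within the window."""
    now = now or datetime.now(timezone.utc)
    return (now - last_message_at) <= RESUME_WINDOW

# Example: a user who last wrote ten days ago picks up where they left off.
ten_days_ago = datetime.now(timezone.utc) - timedelta(days=10)
print(should_resume(ten_days_ago))  # True
```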
3.4 Fail gracefully
As with everything, there are limits to what a bot can do. From time to time it will fail, either because of a technical problem or because it doesn’t understand a request. When this happens, always provide a fallback. This could take several forms — either the user is prompted to return to the main or persistent menu or you give them the option of talking to a human (see below).
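As a sketch, a fallback can escalate: apologise the first time, then stop repeating itself and offer a way out. The two-attempt threshold and the payload names are assumptions.

```python
def fallback_response(failed_attempts: int) -> dict:
    """After repeated misunderstandings, offer the menu or a human instead."""
    if failed_attempts < 2:
        return {"text": "Sorry, I didn't understand that. Could you try rephrasing?"}
    return {
        "text": ("I'm still not getting it. Would you like to pick a topic "
                 "from the menu, or talk to a person?"),
        "quick_replies": [
            {"content_type": "text", "title": "Show the menu", "payload": "SHOW_MENU"},
            {"content_type": "text", "title": "Talk to a person", "payload": "HUMAN_HANDOVER"},
        ],
    }
```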
3.5 Make it easy to talk to a human
What if the quick replies don’t quite fit with a user’s question or intent? One option is to allow them to enter their question directly, using a text field. This will often be enough and the user will get the answer they’re looking for. But, inevitably, there are going to be occasions when the user finds themselves stuck with a generic “Sorry, I don’t understand that” response. What do they do? They could exit the conversation and try again. The risk here is that the bot will respond exactly as before, with another generic message.
Alternatively, instead of putting them through the same loop, you could provide an option for sending the question directly to a human, either in the form of a live chat or, if it’s out of hours, as a request that can be picked up later.
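Here is a sketch of that hand-over step, assuming a simple office-hours rule and two hypothetical integrations: one that passes the conversation to live chat and one that queues the question for later.

```python
from datetime import datetime

def transfer_to_live_chat(user_id: str, question: str) -> None:
    """Stub for a live-chat integration - replace with your handover API."""
    print(f"Live chat: passing {user_id}'s question on - {question}")

def queue_for_follow_up(user_id: str, question: str) -> None:
    """Stub for an out-of-hours queue - a ticket, email or CRM entry."""
    print(f"Queued for follow-up from {user_id}: {question}")

def hand_over_to_human(user_id: str, question: str, now: datetime | None = None) -> str:
    """Route the question to a person, live or queued, and tell the user which."""
    now = now or datetime.now()
    in_office_hours = now.weekday() < 5 and 9 <= now.hour < 17  # Mon-Fri, 9-5 (an assumption)
    if in_office_hours:
        transfer_to_live_chat(user_id, question)
        return "I'm passing you to one of the team now - they can see your question."
    queue_for_follow_up(user_id, question)
    return ("The team are offline at the moment. I've passed your question on "
            "and someone will get back to you.")
```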
4. When things go wrong
4.1 Handling technical issues
The risk of things falling apart is ever present, especially when the bot is pulling in data from other web services. When this happens, be honest. Let the user know there are technical problems and what, if anything, is being done to return the service to normal.
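A sketch of what that honesty can look like in practice: catch the failure from the upstream service, tell the user what is happening, and offer a retry rather than a dead end. The booking API and payload names are invented; requests is the only real library used.

```python
import requests

def appointments_reply(api_url: str) -> dict:
    """Fetch available slots; if the service is down, say so and offer a retry."""
    try:
        response = requests.get(api_url, timeout=5)
        response.raise_for_status()
        slots = response.json()
        return {"text": f"I found {len(slots)} available slots. Which one suits you?"}
    except requests.RequestException:
        return {
            "text": ("Sorry - the booking system isn't responding at the moment. "
                     "We're looking into it; please try again in a few minutes."),
            "quick_replies": [
                {"content_type": "text", "title": "Try again", "payload": "RETRY_APPOINTMENTS"},
            ],
        }
```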
Over to you
While it’s not an exhaustive list, and I’m sure there are things I’ve missed, these design principles will hopefully help you get started with the design of your own chatbots.
If you’ve already done some work in this area, or if you’ve put these principles into practice, get in touch and let me know how it’s going.