Conversational UX for chatbots
An overview of essential discourse patterns, part 1
Here at Nu Echo, we’ve been involved in the conversational space for quite some time now. One of the things we’ve learned is that while creating a simple chatbot may take a few days (or even just a few minutes), creating one that is truly conversational requires a lot more time and expertise.
The purpose of this article is to present a list of the most important discourse patterns required to build what we consider a good conversational chatbot. This list is not exhaustive, but even so, it turned out quite long, so we decided to split it into multiple parts. This one will focus primarily on error handling and error messages.
Please note that we will only talk about task-oriented chatbots (also called transactional chatbots), i.e. bots that are designed to accomplish a task or a set of tasks, as opposed to chit-chat bots, whose primary objective is to maintain an organic conversation for as long as possible. That second type of chatbot presents its own set of very interesting challenges, but it will not be the subject of this series of articles. We also won’t talk about implementation, as it can differ greatly depending on the technology used for development.
Contextual and progressive error handling
Have you ever tried to interact with a bot, only to hit a conversational wall?
It’s a frustrating experience, one that can drive users away from your bot. You want to avoid that kind of situation at all cost.
Let’s analyze the previous example and see what could be done better.
The obvious problem is that the bot doesn’t understand what the user is trying to say (maybe because they don’t fully master the language). Here, the bot is expecting a date, and the user gives an integer. The risk of this kind of thing happening can never be fully eliminated, but we can at least design the bot to alleviate its negative effects. Of course, one strategy is to improve the understanding capabilities of the bot. But besides that, another way is to write better, more detailed error messages that guide users and help them provide information in a form the bot will properly recognize. These messages need to be tailored to the moment in the dialogue where the error occurs. In other words, they need to be contextual.
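One simple way to sketch this idea is a lookup table keyed on the bot’s current dialogue position, so that each error prompt tells the user exactly what form of answer is expected. All slot names and wording below are illustrative assumptions, not a prescribed implementation:

```python
# Illustrative sketch: contextual error messages keyed on the slot the
# bot is currently trying to fill. Slot names and wording are made up.

CONTEXTUAL_ERROR_MESSAGES = {
    "appointment_date": (
        "Sorry, I didn't catch that. I need a date, for example "
        "'March 3rd' or '2024-03-03'. What date works for you?"
    ),
    "appointment_time": (
        "Sorry, I didn't get a time. You can say something like "
        "'2 PM' or '14:00'. What time works for you?"
    ),
}

def error_message(current_slot: str) -> str:
    # Ideally, every dialogue position has its own entry; the fallback
    # exists only to keep the sketch total, and still asks for action
    # rather than just apologizing.
    return CONTEXTUAL_ERROR_MESSAGES.get(
        current_slot,
        "Sorry, I didn't understand that. Could you rephrase it?",
    )
```

The point is not the data structure itself but the lookup key: the message is selected by where the user is in the dialogue, not by what kind of error occurred.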
Generic error messages like “I’m sorry, I don’t understand.” should never, ever be used, because they don’t provide meaningful information to the user and don’t contribute to the progression of the dialogue.
Even worse, this kind of message can cause the user to be stuck in an infinite loop, the kind that can only be broken by rage quitting the conversation. Yikes.
This leads us to the other strategy that can be applied to improve the way the chatbot deals with mismatches: progressive error handling. Simply put, your error messages should get more detailed (and, hopefully, more helpful) after each consecutive user error. If the user struggles to convey a specific piece of information, it may be because they’re not providing it in the form the bot expects.
Also, to avoid the possibility of infinite loops, bots should have a threshold for consecutive errors. Once that threshold is reached, you should consider escalation (like transferring to a human or scheduling a callback). If all else fails, or if escalation is not possible for some reason, the conversation should be terminated altogether, the idea being that an interrupted conversation is less painful for the user than an overly long yet fruitless one.
The error threshold should be the same throughout the conversation, in order to make the bot’s responses consistent and (to some extent) predictable, as any user interface should be.
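Putting the last three ideas together (progressive prompts, a fixed threshold, escalation), here is a minimal sketch of what the error-handling logic could look like. The threshold value, prompt wording, and function names are assumptions for illustration only:

```python
# Sketch of progressive error handling: each consecutive error yields a
# more detailed prompt, and a fixed threshold triggers escalation.

ESCALATION_THRESHOLD = 3  # same threshold throughout the conversation

PROGRESSIVE_DATE_PROMPTS = [
    "Sorry, I didn't catch that. What date would you like?",
    "I still didn't get a date. You can say something like "
    "'next Friday' or 'March 3rd'.",
    "I'm having trouble understanding. Please give a full date such as "
    "'March 3rd, 2024' or '2024-03-03'.",
]

def handle_error(consecutive_errors: int) -> tuple[str, bool]:
    """Return (message, should_escalate) for the nth consecutive error."""
    if consecutive_errors >= ESCALATION_THRESHOLD:
        # Escalate: transfer to a human, schedule a callback, or, if
        # neither is possible, end the conversation gracefully.
        return ("I'm sorry, I can't seem to help with this. "
                "Let me transfer you to an agent.", True)
    # Pick an increasingly detailed prompt; clamp to the last one.
    index = min(consecutive_errors - 1, len(PROGRESSIVE_DATE_PROMPTS) - 1)
    return PROGRESSIVE_DATE_PROMPTS[index], False
```

Because the threshold is a single constant, the bot behaves the same way at every point in the dialogue, which keeps the escalation behavior predictable for the user.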
Let’s combine these two concepts (contextual and progressive error handling) in another example:
Sure, you could always implement a special case in your dialogue, and treat any integer received at this point in the conversation as a date. But remember, this is just an example. And it’s only one case; any complex dialogue is bound to contain many more. While it is always a good idea to make your bot as flexible and responsive as possible, writing error messages that are of actual help to the user can spare you a lot of terrible situations.
Another thing to keep in mind while writing error messages: never patronize the user. For all you know, they may have entered a perfectly valid query that your bot miserably failed to recognize.
Task-oriented chatbots, by nature, guide users through tasks that can require multiple interactions. In order to do that competently, they need to be able to provide detailed and contextualized explanations for each step. It is OK (and actually strongly recommended) to have a global-level help response: this is where the bot presents its main features to the user. But global help is not enough.
Contextual help is a little different from global help, as it is based on the current position of the user in the dialogue. Ideally, you want to write a response for each situation where the user can ask for assistance, even if it seems trivial or self-explanatory. Don’t forget that conversational interfaces tend to be fuzzier than visual ones. Users don’t always take the shortest and easiest path towards their goal.
While global level help can be considered as a response to the question “What can you do, bot?”, contextual help answers something else entirely: “I’m stuck, what should I do here?”.
Here is an example of what contextual help messages can look like:
Contextual help can also be a great way to regain control of the dialogue. Instead of simply explaining what the user should do, the chatbot can be proactive and suggest actions or responses for the user. This is a subject we will touch on in the second part of this series.