Conversation Patterns with IBM Watson
Following on from an earlier article, where I introduced some common patterns used to build chat bots, we’re now going to look at building some of those patterns using IBM Watson. If you haven’t used the Watson Assistant service before, you may want to read about the basics of building a bot with Watson in “Getting Chatty with IBM Watson”.
One of the things I talked about was providing guidance at the beginning of the chat. To provide this before the user says anything, you can add a “welcome” condition to a node. A welcome node is automatically generated for you when you start a dialog.
You can add more conditions to “welcome” nodes if you want to have different introductions depending on some external factor, e.g. it could say “good morning”, “good afternoon” or “good evening” depending on the time.
You could also provide different guidance depending on whether this was the user’s first time using the bot.
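The time-of-day variation would typically be driven by a context variable your app sets before the first turn. As a sketch of that app-side logic (the function name, thresholds, and greeting strings are my own choices, not part of Watson Assistant):

```python
from datetime import datetime

def greeting(hour=None):
    """Pick a greeting by hour of day (24h clock); the app could pass
    the result, or the hour itself, into the dialog context."""
    if hour is None:
        hour = datetime.now().hour
    if hour < 12:
        return "Good morning!"
    if hour < 18:
        return "Good afternoon!"
    return "Good evening!"
```

In the dialog itself you would condition three “welcome” responses on the context variable the app sets.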
Don’t repeat yourself
It’s not very natural to give exactly the same response every time. You can provide different responses in a node by choosing a selection policy. By default it is set to “sequential”, but you can also make it random.
“Sequential” is useful for providing an initial detailed answer and then more concise answers after. “Random” is good for mixing it up.
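To make the two policies concrete, here is a small simulation of how they behave. This is an illustrative sketch, not Watson’s implementation; I’ve chosen to repeat the last response once a sequential list is exhausted, which matches the “detailed first, concise after” pattern described above:

```python
import random

class ResponseSelector:
    """Simulates the two response selection policies described above."""

    def __init__(self, responses, policy="sequential"):
        self.responses = responses
        self.policy = policy
        self._index = 0

    def next(self):
        if self.policy == "random":
            return random.choice(self.responses)
        # Sequential: walk through the list, then keep repeating the
        # last (most concise) entry.
        response = self.responses[self._index]
        if self._index < len(self.responses) - 1:
            self._index += 1
        return response
```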
Watson Assistant allows you to store information. This is done using the ‘context’. For example, when asked where something is, you may wish to show a map. So your dialog can store the location in the context, and your app can use it to show the location on a map.
You can set variables in the context using the context editor, which is accessed via the 3 dot menu in the response section of the node editor.
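The map example can be sketched as a single dialog turn: the dialog answers the question and writes the location into the context, and the app reads the context back to render the map. This is a local toy, assuming a hypothetical `location` context variable (the coordinates are made up):

```python
def dialog_turn(user_text, context):
    """Toy dialog turn: answer a 'where is the bar?' question and
    store the location in context so the app can show it on a map."""
    context = dict(context)  # the dialog returns an updated context each turn
    text = user_text.lower()
    if "where" in text and "bar" in text:
        # Hypothetical coordinates, stored for the app to consume.
        context["location"] = {"name": "bar", "lat": 51.5007, "lon": -0.1246}
        return "The bar is on the ground floor.", context
    return "Sorry, I didn't get that.", context
```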
Often it is necessary to remember the topic that is being discussed, as users will not always repeat the entities. For example, a user might ask “Where is the bar?”, and then follow up with “and what time does it close?”. This second question is still about the bar but there is nothing in the user input that tells us. In this case, we can store the topic in the context, and then have nodes conditioned on that context variable in the places where that topic makes sense.
You could deal with this situation by having follow up nodes, however you would then need to account for the different orders in which the user could ask, which would add a lot of extra nodes. Using the context means we can keep our dialog tree simpler and more manageable.
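The bar example can be sketched as two turns sharing a context. The string matching below stands in for intent and entity recognition, and the closing time is invented; the point is only that the second turn works because the topic was remembered:

```python
def handle(user_text, context):
    """Toy topic tracking: 'what time does it close?' only makes
    sense if a topic is already stored in the context."""
    context = dict(context)
    text = user_text.lower()
    if "where is the" in text:
        topic = text.split("where is the", 1)[1].strip(" ?")
        context["topic"] = topic  # remember what we're talking about
        return f"The {topic} is on the ground floor.", context
    if "close" in text and context.get("topic"):
        # No entity in the input; the context supplies the topic.
        return f"The {context['topic']} closes at 11pm.", context
    return "Sorry, I didn't get that.", context
```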
Follow up questions
Follow up nodes are useful when you ask your users a question as part of a response. For example, if a user asks what your bot’s name is, you might answer and then ask the user’s name.
Be careful with using an “anything_else” node here. Sometimes users don’t respond to your questions, and by having an “anything_else” node here, we are assuming that the user’s next input is a response to our question. This is something to consider in each use case as you may want it in some situations but not others.
Often you need to gather a set of information, e.g. payment details, personal information, etc. The bot should be able to handle both taking all that information in one go, and taking some and prompting for what is missing.
There is now an easier way to gather information than what is described below. Have a look at “Gathering Information with IBM Watson Assistant”. I’ll leave the method below in place, as it may still be useful in some cases.
So first let’s create a node that responds to the user asking to subscribe to a service. As a response to #subscribe, we ask for the user’s name and date of birth.
As a follow up, we check if that information has been stored and finish the information gathering if it has. Initially, of course, we won’t have the information, so this node will be skipped.
Then we check to see if the user has provided the name. If they have then we store it in context and Jump To the node below. Similarly we then check for date of birth in the input and if it’s there store it and jump to the next node. We can use the system entities @sys-person and @sys-date to recognize these pieces of information.
Now we check for missing information. If we don’t have something, then we prompt for it. After prompting, we Jump To the user input at the start of this section, so the user can answer the question.
Finally we have an “anything_else” node to catch other inputs and return us back to the first node.
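The overall flow of the nodes above — extract what you can, then prompt for whatever is missing — can be sketched as one function called per turn. The regular expressions below are crude stand-ins for the @sys-person and @sys-date entities, and the phrasing is my own:

```python
import re

def gather(user_text, context):
    """One turn of the gathering flow: store anything recognized in
    the input, then prompt for whichever detail is still missing."""
    context = dict(context)
    # Stand-in for @sys-person: capture a name after "my name is".
    name = re.search(r"my name is (\w+)", user_text, re.I)
    if name:
        context["name"] = name.group(1)
    # Stand-in for @sys-date: capture an ISO-style date.
    date = re.search(r"\d{4}-\d{2}-\d{2}", user_text)
    if date:
        context["dob"] = date.group(0)
    # Prompt for whatever is missing, in order.
    if "name" not in context:
        return "What is your name?", context
    if "dob" not in context:
        return "What is your date of birth?", context
    return f"Thanks {context['name']}, you're subscribed!", context
```

Because each turn re-checks the context, this handles the user supplying the details in one message or across several, in any order.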
You can create an “anything_else” node at any level, so you can have specific fallbacks for particular parts of the dialog. For example, if you have a node that responds to a “where is…” question, and nested nodes for each location you know about, then you can add an “anything_else” node to catch those things you don’t know the location of, and give a better response than the top level fallback. “anything_else” nodes are added using a condition of “anything_else”.
This shows that the bot has understood the question and just hasn’t been told where something is.
Update: A new feature in Watson Assistant enables conditioned responses on each node, so you may not need to nest your nodes as described above. For details, see “Differentiating the same with Watson Assistant”.
How you implement a particular business process depends on what the process specifies, so it’s not really possible to go into details here. It’s likely that for each step in the process you will progress onto another level of the dialog using follow up nodes, and you will probably need to use “Jump To” to skip or reuse parts of the dialog. It will also require careful use of “anything_else” nodes to make sure the user doesn’t accidentally fall out of the process.
Sometimes users don’t know what to say to a chatbot, so it can be useful to provide some help. This is easily done by creating a #help intent, and then providing a response which explains the topics that the bot knows.
Collecting feedback on when your bot is doing well or badly is useful for improving it. There are two ways to collect this:
- implicitly, by understanding when the user’s input indicates whether or not the bot responded well
- explicitly by asking the user for feedback
In the first case, you will need to add intents to understand good and bad feedback, e.g. “awesome thanks!” might be an example of positive feedback, and “you stupid bot!” might be an example of negative feedback.
In the second case, you can add an option to your app for rating each response. Alternatively, the bot could specifically ask if a response was good or not as a follow up question and then record the user’s response. The benefit of adding an option in the app is that the feedback is less in your face: the user doesn’t feel that they need to give feedback. The benefit of having the bot ask is that you can build it into the dialog and not have to write more code in your app. My preference is to use a less intrusive feedback mechanism in the app, to avoid annoying your users.
You can implement both implicit and explicit feedback mechanisms in the same bot.
There is more detail on getting feedback in “Bot Feedback with IBM Watson”.
Don’t talk about…
Sometimes you may want to catch references to topics you don’t want to engage in regardless of what the intent is, e.g. you may want your bot to respond with “No comment” to political topics.
A simple way to do this is to define an entity, e.g. @dont-talk-about, with values for each of the topics, e.g.
You will then need to create a node with the condition @dont-talk-about. This node needs to be put before any nodes that check for intents, so that we don’t start matching those.
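As a rough sketch of the effect this ordering produces, here is the check done in plain Python. The topic values are examples of what you might put in the @dont-talk-about entity, and the matching here is naive keyword matching rather than Watson’s entity recognition:

```python
# Example values for a hypothetical @dont-talk-about entity.
DONT_TALK_ABOUT = {"politics", "religion", "election"}

def check_off_limits(user_text):
    """Run before intent matching, as described above: if any
    off-limits topic appears, respond regardless of intent."""
    words = set(user_text.lower().replace("?", " ").split())
    if words & DONT_TALK_ABOUT:
        return "No comment."
    return None  # fall through to normal intent matching
```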
Understanding the tone of the user’s input can allow you to respond differently, for example if the user is angry you may want to be very nice to them or pass them to a human.
The IBM Watson Tone Analyzer service can enable this. You will need to call this from your app and pass the result in as part of the context, which you can then condition on in your dialog.
See Santa’s Little Twitter Bot for an example of using Tone Analyzer in a bot.
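The app-side routing could look something like this sketch, assuming the app has already called Tone Analyzer and put the scores into the context. The `anger` key, threshold, and action names are all my own invention for illustration:

```python
def route_by_tone(tones, threshold=0.5):
    """Decide what to do given tone scores passed in via the context.
    'tones' maps tone names to scores between 0 and 1."""
    if tones.get("anger", 0) > threshold:
        # An angry user might be better served by a human.
        return "escalate_to_human"
    return "continue_dialog"
```

In the dialog, the equivalent would be a node conditioned on the tone context variable.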
An FAQ (Frequently Asked Questions) bot can be built with a flat dialog that provides answers to a set of questions. While the FAQ is dealt with by having many nodes at the root level, you can still use nested nodes, follow up questions and context to improve the experience.
The “long tail” is how you handle topics that your bot hasn’t been trained to deal with. For example, if you have a bot that is used for controlling things in your car, and the driver asks how to check the oil level, then the bot could pass that on to a system that is loaded with the car’s manual and can search through it for relevant information.
The IBM Watson Discovery service can help with this. You will need to call Discovery from your app. You could do this when your Assistant API returns saying it can’t answer, or by detecting certain topics in your dialog and setting a value in the context to tell the app to call Discovery.
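The first approach — falling back to search when Assistant can’t answer — can be sketched like this. Here `search_fn` is a stand-in for the Discovery query call, and signalling “can’t answer” with `None` is my own convention:

```python
def answer(user_text, assistant_reply, search_fn):
    """Fallback sketch: if Assistant couldn't answer (represented
    here by None), hand the question to a search backend such as
    Watson Discovery instead."""
    if assistant_reply is None:
        return search_fn(user_text)
    return assistant_reply
```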
Go build a bot
Hopefully I’ve given you enough information to start on your bot making journey. So go build some cool bots - I’ll be interested to see what you create!
Find more of my Watson articles in the Conversational Directory.