Getting Chatty with IBM Watson
Following on from my last article where I introduced some common patterns used to build chat bots, we’re now going to look at the basics of building chat bots with IBM Watson.
The Watson Assistant service is available on IBM Cloud and enables you to build apps that include natural language processing and a structured conversation. The service provides an API which you can call from an app or website to hook into your chat bot.
Language processing is achieved through defining intents and entities. In brief, intents are the possible meanings of what the user says, and entities are the parameters in the user’s text, such as objects and places.
In Watson Assistant, you define an intent by providing a number of examples which have the same meaning (i.e. the same ‘intent’). So, for example, if you wanted Watson to understand when someone was asking where something is, you might provide some examples such as:
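To make the idea concrete, here is a plain-Python sketch (purely illustrative, not the Watson API or its training format) of a #where-is intent as a set of example utterances — the example phrases are assumptions for illustration. Watson trains a classifier from examples like these; the toy scorer below just counts shared words.

```python
# Illustrative sketch only: Watson learns a classifier from example
# utterances; here we fake "how close is this input to the intent?"
# with simple word overlap.

WHERE_IS_EXAMPLES = [
    "where is the reception?",
    "how do I find the bar?",
    "where can I find the toilets?",
    "which way to the restaurant?",
]

def score(utterance, examples):
    """Count words the utterance shares with the intent's examples."""
    words = set(utterance.lower().strip("?").split())
    return sum(len(words & set(e.lower().strip("?").split())) for e in examples)
```

The real service generalises far beyond word overlap, but the shape of the training data — one intent, many example phrasings — is the key idea.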
Now of course you don’t want to give the same response to every “where is” question as they might be asking about different places. This is where entities come in. We can define the locations we are interested in as an entity:
Here we have a “location” entity, with a number of values, each of which is a different location. Each value can also have synonyms. So with this I can now provide different answers for “where is reception?” and “where is the toilet?”, which would certainly help avoid any mess at reception…
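An entity with values and synonyms can be sketched in the same illustrative style (again plain Python, not the Watson API; the synonym lists are assumptions for the example):

```python
# Illustrative sketch of an @location entity: each value has synonyms
# that should be recognised as that value.
LOCATION_ENTITY = {
    "bar": ["bar", "pub", "lounge"],
    "reception": ["reception", "front desk"],
    "toilet": ["toilet", "bathroom", "restroom"],
}

def extract_location(utterance):
    """Return the first @location value whose synonym appears in the text."""
    text = utterance.lower()
    for value, synonyms in LOCATION_ENTITY.items():
        if any(s in text for s in synonyms):
            return value
    return None
```

So “where is the front desk?” resolves to the same @location:reception value as “where is reception?”.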
Engaging in dialog
So we have defined an intent and an entity which will allow Watson to process the user’s input, but how do we provide a response? This is where the dialog builder comes in. It enables us to create structured conversations which provide responses to particular intents and entities.
When you first create a dialog, it has two pre-built nodes. The first is the Welcome node, which provides a greeting when the conversation starts. The second is a catch-all node, which triggers when no other node matches.
To provide a response to “where is the bar?”, we can create a node in the dialog which triggers when it sees that intent and entity:
This node triggers when there is an intent of #where-is and an entity of @location:bar. We add a text response to define how to respond.
When Watson receives the user input, it will determine the intent and entities, and then look through the dialog for a node that matches. The nodes are checked one by one starting from the top.
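That top-down, first-match behaviour can be sketched as a scan over (condition, response) pairs — an illustration of the matching logic, not how Watson is implemented, and the reception and fallback responses are assumed for the example:

```python
# Each dialog node pairs a condition on (intent, entity) with a response.
# Watson checks nodes top to bottom and fires the first one that matches;
# the catch-all node sits last so it only fires as a fallback.
NODES = [
    (lambda intent, loc: intent == "where-is" and loc == "bar",
     "The bar is opposite reception, near the front entrance."),
    (lambda intent, loc: intent == "where-is" and loc == "reception",
     "Reception is just inside the front entrance."),
    (lambda intent, loc: True,  # catch-all: always matches
     "I didn't understand. You can try rephrasing."),
]

def respond(intent, location):
    for condition, response in NODES:
        if condition(intent, location):
            return response
```

Because matching stops at the first hit, node ordering matters: a broad condition placed above a narrow one will shadow it.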
So imagine the user asks, “where is the bar?”, and our bot responds “The bar is opposite reception, near the front entrance.” What happens if they ask a follow-up question, such as “What time does it open?” At the moment our bot won’t be able to handle that.
So, we can add another intent:
Using this intent, we can add a follow-on (child) node to our dialog:
So if, after asking “where is the bar?”, the user asks “what time does it open?”, our bot will respond with “The bar is open from 6am until 11pm.”
If after asking “where is the bar?”, the user asks something else, the dialog would fall back to the first level, and end up on the “Anything else” node.
We could have added this node at the top level beneath the #where-is node, rather than as a follow-up node, but then the bot wouldn’t have known which location was meant unless it was explicitly mentioned in the second input.
One more thing to be aware of in the dialog is the “Jump To” option. This allows you to jump to a node from somewhere else in the dialog. If you click on the three-dot menu in a node, you can select “Jump to…”. This will allow you to jump to one of three places on another node:
- User Input: if you jump to the user input, the bot waits for the user’s next message before continuing from that point in the dialog.
- Condition: if you jump to the condition, the dialog immediately continues processing from that node, i.e. it doesn’t wait for any user input.
- Response: if you jump to the response, the dialog immediately adds that node’s response to your current response.
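The difference between the three targets can be sketched like this (a simplified plain-Python illustration of the behaviour, not Watson internals):

```python
def jump_to(node, target):
    """Sketch of the three "Jump To" targets (simplified)."""
    if target == "user_input":
        # Wait for the user's next message before evaluating this node.
        return "waiting for user input"
    if target == "condition":
        # Evaluate the node's condition immediately, with no new user input.
        return node["response"] if node["condition"]() else None
    if target == "response":
        # Skip the condition and emit the node's response straight away.
        return node["response"]
```

In practice “Jump to condition” (shown in the UI as continuing without waiting) is the one used below to route two paths to the same answer.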
In our dialog, imagine you wanted to be able to answer the opening hours question for the bar without having to first ask where it is. We could use a “Jump To” to provide two ways to the same response:
So now if someone asks “when does the bar open?”, it will trigger the highlighted node above, and then jump to the node at the top right to provide the answer.
Scaling it up
Update: A new feature in Watson Assistant enables conditioned responses on each node, so you may not need to nest your nodes as described below. For details, see “Differentiating the same with Watson Assistant”
Hopefully, you are beginning to see how to build a dialog; however, to deal with a large number of inputs, there is a better way to structure it. Looking at the top level, we have two nodes just to deal with the bar. Imagine adding more locations: we would be adding a top-level node for every question and location combination. We can de-clutter the top level by using nested conditions:
Now we just check for the #where-is intent, then use “Skip user input” so the dialog moves straight on to the nested conditions, which check for the location. I’ve also added a node at this level to provide a specific fallback answer; “Anything else” nodes are added using a condition of anything_else.
You may have noticed, though, that we have lost our follow-on question about opening hours. We can handle that by remembering which location the user asked about, and then checking it as part of a nested condition. First, we need to remember the location using the ‘context’. You can set context variables with the context editor: click the three-dot menu in the response section of the node editor and select “Open context editor”. Now you can add a context variable that saves the location:
Then in the #opening-hours section, we can use nested conditions again, this time also checking for that context being set (notice the condition $location == ‘bar’):
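The combined flow — remember $location when answering #where-is, then fall back to it when answering #opening-hours — can be sketched as follows (plain Python, not the Watson API; only the bar branch is filled in, for brevity):

```python
# Illustrative sketch of context-based follow-ups.
context = {}  # plays the role of the dialog's context variables

def handle(intent, location):
    if intent == "where-is" and location:
        context["location"] = location  # remember $location for follow-ups
        if location == "bar":
            return "The bar is opposite reception, near the front entrance."
    if intent == "opening-hours":
        # Nested condition: use the location from this turn if given,
        # otherwise fall back to the remembered context.
        loc = location or context.get("location")
        if loc == "bar":  # the $location == 'bar' condition
            return "The bar is open from 6am until 11pm."
    return "I didn't understand. You can try rephrasing."
```

So “where is the bar?” followed by a bare “what time does it open?” still resolves to the bar, because the second turn reads the location out of context rather than out of the utterance.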
Go forth and botify!
So now we’ve looked at some of the basics of building a bot with Watson Assistant. “Conversation Patterns with IBM Watson” dives into some of the conversation patterns we discussed previously, and looks at how to implement them with Watson.
Find more of my Watson articles in the Conversational Directory.