Detecting Names And Locations With Watson Assistant

Dan O'Connor
May 28 · 7 min read
Photo by Jorge Salvador on Unsplash

Farewell sys-location & sys-person; Hello contextual entities!

As seen in the recent in-product announcements, the sys-location and sys-person system entities have been deprecated in Watson Assistant. From July 15th, 2020, support for these entities will be removed from Watson Assistant. Skills that currently rely on these entities should plan to remove them before that date.

These entities were supported in beta for the past two years and were available in English dialog skills (workspaces). Over this time we’ve found that customers often needed the ability to customize what Watson detects as a name or a location. Customers reported great results by adopting the Contextual Entities feature to implement name and location detection. By using contextual entities, assistant authors can build their own entities using real examples from their end users, which may be quite specific to their domain.

Core concepts

Most users of Watson Assistant are likely well aware of the concepts of intents and entities. Entities are defined by providing Watson Assistant with a list of values and their associated synonyms, training the assistant to detect the words that matter most to it. Entities are used, for example, to detect product names, nearby amenities, days of the week, brand names, and so on. In the majority of these cases, the entity can be described with a small list of words (a dictionary). There are cases, however, where it is impossible for you (the virtual assistant author) to enumerate all the possible values the assistant will need to understand in order to respond to the end user. Locations and names are excellent examples of these types of entities, and for them we will use contextual entities.
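For illustration, a dictionary-based entity can be sketched as a small values-and-synonyms structure. The shape below mirrors how the Watson Assistant v1 workspace API represents entities; the @toppings entity and its values are invented here for the pizza example, not taken from a real skill:

```python
# Sketch of a dictionary-based entity in the shape used by the Watson
# Assistant v1 workspace API: a fixed list of values, each with synonyms.
# The entity name and its values are invented for the pizza example.
toppings_entity = {
    "entity": "toppings",
    "values": [
        {"value": "pepperoni", "type": "synonyms", "synonyms": ["peperoni"]},
        {"value": "mushroom", "type": "synonyms", "synonyms": ["mushrooms", "fungi"]},
        {"value": "onion", "type": "synonyms", "synonyms": ["onions", "red onion"]},
    ],
}

# A dictionary entity works well here because the full list of toppings is
# known in advance -- unlike names or addresses, which cannot be enumerated.
print([v["value"] for v in toppings_entity["values"]])
```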

Contextual entities, as their name suggests, rely not only on the entity value (words) but also on the context of the words within the sentence. Common words like orange, pond, lake, river, and others can have completely different meanings depending on the context in which they are used: “I live at 122 Pond St” vs “we walked around the pond”.

Detecting the user’s name

To get started, it is best to think of a use case we can all relate to. The “pizza ordering assistant” is one that is often used as an introduction to Watson Assistant. It helps to explain the difference between concepts such as intents (“order pizza”) and entities (“toppings”). We can imagine expanding our standard pizza assistant use case to support collecting a delivery address. This is not difficult to do using contextual entities.

Our pizza ordering assistant likely has a conversation flow that starts out with the user saying something like “I would like to order a pizza”, to which the assistant responds by asking which toppings the user wants, whether they would like any sides, and so on. Towards the end of the conversation, the assistant will ask the user to choose “Pickup” or “Delivery”. If the user enters “Pickup”, the assistant will want to collect a “name”. If the user enters “Delivery”, the assistant will need to collect an “address”.

We will begin with the “Pickup” branch of the conversation. For this branch, we will imagine the assistant has asked “Would you like pickup or delivery?” and the user is presented with two buttons named “Pickup” and “Delivery”. Once the user presses the “Pickup” button, the assistant prompts them with “What is your name?”. At this point most users will say one of a few things:

> My name is Dan

> dan

or

> Dan O’Connor

There are a few other ways users might enter this data, such as “People call me Dan”. We will add all of these, and a few more, as examples to our #collect_name intent.

collect_name intent examples

Once we’ve entered 10 or so examples we can begin annotating the names in these examples. Annotating is the process through which we teach Watson Assistant to understand the context of the entity we want to collect within the sentence. For example, in our #collect_name intent, we really care about the “name” in the sentence. In the example “My name is Dan”, what we really want to extract is “Dan”. We start annotating by clicking on the “Annotate entities” toggle on the top right-hand side of the examples table. Once toggled we are in the “annotation mode”, where each word in the examples appears outlined with a faint border. By clicking on any word a popup appears with the list of available entities:


In this case, we actually want to create a new entity called “customer_name”. To create a new entity, we simply type its name into the entity name field and select the @customer_name (create new entity) option in the popup:


This will create a new entity with a single value (“Dan”). We will repeat this process for all of the names in the intent’s examples. By annotating all of these names, we are telling Watson which words are typically names, so that when it sees these words, or words like them, in the future it knows the word is most likely a name (or, more specifically, a @customer_name).
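The annotations made in the tooling can also be expressed programmatically: in the Watson Assistant v1 API, each intent example can carry a list of "mentions", where a mention pairs an entity name with the start/end character offsets of the annotated word. A minimal sketch, assuming the intent and entity names from this article (the helper function is illustrative, not part of the SDK):

```python
# Sketch: building intent examples with contextual-entity annotations in the
# shape the Watson Assistant v1 API uses. Each "mention" pairs an entity name
# with the [start, end) character span of the annotated word in the text.
# The annotate() helper is illustrative, not part of any SDK.

def annotate(text, word, entity):
    """Return an intent example dict with `word` annotated as `entity`."""
    start = text.index(word)
    return {
        "text": text,
        "mentions": [{"entity": entity, "location": [start, start + len(word)]}],
    }

examples = [
    annotate("My name is Dan", "Dan", "customer_name"),
    annotate("People call me Dan", "Dan", "customer_name"),
    annotate("Dan O'Connor", "Dan O'Connor", "customer_name"),
]

# Such a list could then be passed as the `examples` argument when creating
# the #collect_name intent through the v1 workspace API.
print(examples[0])
```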

Note:

You can select multiple words by clicking each ‘dotted’ word individually.


Once we’ve annotated all of the names in the #collect_name intent we can test out our updated dialog flow:


As can be seen above, the assistant successfully recognized “Alex” even though it is not one of the values it was originally trained with. We have now successfully trained our virtual assistant to understand names. The @customer_name entity can be enhanced over time with more name variations as the assistant’s needs grow.
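When the skill is called over the v1 /message API, any detected contextual entity comes back in the response’s `entities` array. A sketch of pulling the recognized name out of a response; the field names here match the v1 response schema, but the response fragment itself (value, offsets, confidence) is invented for illustration:

```python
# Illustrative v1 /message response fragment after the user types
# "my name is Alex". The value and confidence are made up; the field names
# ("entities", "entity", "value", "location", "confidence") follow the
# Watson Assistant v1 response schema.
response = {
    "input": {"text": "my name is Alex"},
    "entities": [
        {
            "entity": "customer_name",
            "location": [11, 15],
            "value": "Alex",
            "confidence": 0.93,
        }
    ],
}

def first_entity_value(response, entity_name):
    """Return the first detected value for `entity_name`, or None."""
    for ent in response.get("entities", []):
        if ent["entity"] == entity_name:
            return ent["value"]
    return None

print(first_entity_value(response, "customer_name"))  # Alex
```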

Detecting An Address

We will next move on to the address entities. So far we’ve covered the case where the end user enters their name and the assistant captures that name correctly. But what if the user selects the “Delivery” option? In this case, we need to prompt for and collect their address.

Similar to what we did when capturing the name, we will create a #collect_address intent. We will enter as many of the ways as we can imagine our end users responding when prompted by the assistant with “OK, what is your delivery address”:


Again, we enter ten or so intent examples. Turning on annotation mode, we start highlighting all of the occurrences of a street name (including the house/apartment number) in the examples. We will annotate these words as the @street_address entity. We want to collect the user’s street address and city/town separately.


Once we have all of the street addresses annotated we can start annotating the town/city names:


Again, the process is straightforward: simply highlight all of the town names individually and annotate them as @city_address entities.
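A single address example can carry two mentions on the same text, one per entity. A hedged sketch in the same v1 payload shape as before, reusing the “122 Pond St” utterance from earlier in this article (the mention() helper is illustrative):

```python
# Sketch: one intent example annotated with two contextual entities at once,
# in the v1 API's "mentions" shape. The mention() helper is illustrative.

def mention(text, phrase, entity):
    """Return a mention dict locating `phrase` inside `text`."""
    start = text.index(phrase)
    return {"entity": entity, "location": [start, start + len(phrase)]}

text = "I live at 122 Pond St in Boston"
example = {
    "text": text,
    "mentions": [
        mention(text, "122 Pond St", "street_address"),
        mention(text, "Boston", "city_address"),
    ],
}

print(example["mentions"])
```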


Once these annotations are in place, we update our dialog nodes to expect @street_address and @city_address and voilà:


Again, as you can see, the assistant is able to recognize street and town names that were not part of the training data.
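The dialog-node update mentioned above amounts to a node whose condition fires once both entities are detected. A minimal sketch following the Watson Assistant v1 workspace schema; the node name and response text are invented for this article:

```python
# Hedged sketch of a v1 dialog node that fires once both address entities
# are detected. The condition and output shapes follow the Watson Assistant
# v1 workspace schema; the node name and wording are invented.
delivery_node = {
    "dialog_node": "confirm_delivery_address",
    "conditions": "@street_address && @city_address",
    "output": {
        "generic": [
            {
                "response_type": "text",
                "values": [
                    {"text": "Delivering to @street_address, @city_address."}
                ],
            }
        ]
    },
}

print(delivery_node["conditions"])
```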

Conclusion

As this brief overview shows, contextual entities provide assistant authors with a simple, viable alternative to sys-person and sys-location. Contextual entities also give the assistant author the ability to heavily customize how the entity behaves based on their domain knowledge, e.g. whether users are likely to enter just their name, or their name in the context of a sentence, and so on.

If you are interested in learning more about entity detection read these excellent articles by Burak Akbulut:

About the author

Dan O’Connor is a senior software engineer and engineering manager on IBM Watson Assistant. Dan has worked on Watson Assistant (formerly Watson Conversation) since it was first introduced as a Watson service in 2016. Dan has worked in the Watson group since 2013 previously working on the initial Watson product offering, namely “Watson Engagement Advisor”. Dan’s primary focus is split between the Watson Assistant user experience and the infrastructural components that house the Machine Learning components of Watson Assistant. In his free time, Dan enjoys following his local Boston sports teams and undertaking various DIY projects.

IBM Watson

AI Platform for the Enterprise
