All About Entities (Part 2): Contextual Entities

Don’t let your assistant miss a word!

Burak Akbulut
IBM watsonx Assistant
6 min read · Apr 26, 2019


Contextual entities allow your virtual assistant to detect entities based on the context of the user utterance. Instead of creating entity definitions with exhaustive dictionaries, you can train your assistant by providing examples of entities in the user examples of intents. To learn more about the dictionary approach, see part 1.

Let’s continue using the office supply assistant example from part 1. Your assistant has a #makePurchase intent, and an @officeSupply entity using dictionary and pattern values as follows.

As you start monitoring production logs to improve your assistant, you notice that your assistant’s users are referring to your products using phrases that are not expressed as synonyms in your entity model. And failing to recognize these products, which are modeled as entity values, is causing your assistant to miss sales opportunities!

You also notice that your users are asking for office products you are not selling yet. Even though you do not offer these products yet, you may want your assistant to recognize them as an @officeSupply entity. That way, you can refer users to a partner, or perhaps search a product catalog to offer similar products you do sell.

How can you accomplish this in Watson Assistant? The answer is contextual entities!

Instead of providing long lists of synonyms, you can annotate phrases in your intents’ user examples as entities.

Let’s look at the intent editing page where we can see user utterances for the #makePurchase intent. We can annotate the word “note pad” as @officeSupply:notebook.

Now, let’s check what happened to our @officeSupply entity. Since we specifically picked the @officeSupply:notebook value, the term “note pads” has been added as a synonym.

And by annotating an entity, you have formally asked Watson Assistant to start using context to recognize entities. Your assistant will now go beyond simple text matching and start using machine learning to extract entities.

If you look at the Annotations tab of the @officeSupply entity, you can see all user examples that were annotated with that entity.

Now, let’s test our entity using the Try It Out panel with a simple sentence that has the synonym “pen” in it.

This utterance worked before we annotated an entity in a user example, so what happened to our entities?

Annotating an entity is a signal to Watson Assistant to start using machine learning. However, you need to provide a sufficient number of examples (as entity annotations on intents’ user examples) to properly train the entity detection algorithm.

In the example above, we provided only one training example to the @officeSupply entity. Just like humans, machine learning models need more than a single example to learn how your users will mention the products they want to buy.

Even though the @officeSupply entity has several values and synonyms, only the annotated examples are used as training data. This is because Watson Assistant needs the context of the annotated phrases to decide whether a given word is an instance of the entity.

To make it work, we have to give more training examples to Watson Assistant by annotating more @officeSupply entities. The more annotated examples we provide, the better Watson Assistant will get at detecting contextual entities.

Also, since Watson Assistant uses context, rather than an exact string or pattern match, to recognize entities, the precision of your assistant will increase. For example, consider an utterance like this:

I want to buy a notepad, so I can pen my new composition.

Synonym and pattern matching would identify both “notepad” and “pen” as @officeSupply entities. But if you provide a sufficient number of entity annotations, your assistant will eventually learn that “pen” in the context of this sentence is not an @officeSupply entity, hence the name “contextual entities”.
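To see why pure dictionary matching over-triggers on this sentence, here is a minimal Python sketch of a naive exact-word matcher. The synonym lists are assumptions that mirror the @officeSupply values from our example; a contextual entity model would instead learn to skip the verb usage of “pen”.

```python
# A naive dictionary matcher, sketched to show why exact synonym matching
# flags both "notepad" and "pen" in the utterance above.

SYNONYMS = {
    "pen": ["pen"],                                # assumed synonym list
    "notebook": ["notebook", "notepad", "note pads"],
}

def dictionary_match(utterance: str) -> list[tuple[str, str]]:
    """Return (value, matched_phrase) pairs found by exact word matching."""
    words = utterance.lower().replace(",", " ").replace(".", " ").split()
    hits = []
    for value, synonyms in SYNONYMS.items():
        for syn in synonyms:
            if syn in words:
                hits.append((value, syn))
    return hits

hits = dictionary_match("I want to buy a notepad, so I can pen my new composition.")
# Both "notepad" and "pen" match, even though "pen" is used as a verb here.
```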

Normalization with contextual entities

Let’s continue to annotate more examples, where the entity mentioned is not one of the values (i.e. “pen” or “notebook”) we have modeled.

Since we did not specify a value in the annotation, “sticky notes” will be added as a new value to the @officeSupply entity.

Let’s add a few more synonyms to complete the definition of @officeSupply:sticky note.

You may wonder why we bothered to add synonyms to the @officeSupply:sticky note entity value, since contextual entities do not perform exact dictionary matching. There are two reasons why you may want to do this.

  1. Synonyms can provide hints to the machine learning algorithm to improve its entity recognition performance.
  2. You may want to normalize detected entities to a well-known value. If Watson Assistant detects an entity but does not find an exact match for a synonym, it will only report the entity type @officeSupply. However, if the recognized entity matches a synonym, Watson Assistant will also report the normalized value, as @officeSupply:sticky note.

Dealing with unknown entity phrases

If you have experience with a live production assistant, you know that users find creative and unexpected ways to interact with your assistant. Let’s say your assistant gets the following request:

I need to buy those yellow sticky things for my home office

If you have enough training examples, your assistant can easily detect that “yellow sticky things” is an @officeSupply entity. But the entity type is the only conclusion it can reach. When normalization to a value is not possible, Watson Assistant will return the entity mention itself as the value (@officeSupply:<phrase>). Let’s try this out.

Now, your assistant knows the user is asking to buy an @officeSupply entity. But its dialog only knows how to deal with each specific value, namely how to order a pen, a notepad, and a sticky note. Understandably, you did not think of “yellow sticky things” as a possible synonym for any of the products you sell.

What can your assistant do with this information? A typical approach is to search an external system using a query based on what your assistant knows. For example, you can search a database table of products whose description column may contain the phrase “yellow sticky things”. You may also have a searchable product catalog, and run a full-text search on those documents with the search terms “yellow sticky things”.
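Sketching that fallback flow in Python: handle the values the dialog knows about directly, and route anything else to a full-text search. Here search_catalog is a hypothetical stand-in for a real database or catalog query, and KNOWN_VALUES again mirrors the modeled entity values.

```python
# Fallback flow: order known products directly, search the catalog otherwise.

KNOWN_VALUES = {"pen", "notebook", "sticky note"}

def search_catalog(query: str) -> list[str]:
    # Placeholder for a full-text search against a product catalog.
    return [f"Search results for '{query}'"]

def handle_office_supply(entity: dict) -> str:
    value = entity.get("value", "")
    if value in KNOWN_VALUES:
        return f"Ordering a {value} for you."
    # Unknown phrase: the raw mention came back as the value, so use it
    # as the search query.
    return search_catalog(value)[0]
```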

Let’s give this second option a shot, using Watson Assistant’s new beta search skill! A search skill can be added as a second skill to your assistant. Then, you can configure a dialog node to instruct the search skill to perform a document search in a Watson Discovery Service collection.

Let’s configure our dialog to call a search skill if the entity phrase cannot be normalized to a known value of the @officeSupply entity.

Then, let’s take the entity mention and use it as a query for the search skill.
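As a sketch of the dialog configuration (the node itself is set up in the UI; the expressions below are an assumption based on Watson Assistant’s expression language, where @officeSupply evaluates to the detected value and @officeSupply.literal to the raw mention from the utterance):

```
Node condition (fires when @officeSupply is detected but was not
normalized to one of the modeled values):

  @officeSupply && !(['pen', 'notebook', 'sticky note'].contains(@officeSupply))

Expression passing the raw mention to the search skill as the query:

  <? @officeSupply.literal ?>
```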

If your assistant has a search skill configured and a relevant document indexed in a collection, it will search your document collection and display the search results as cards.

This concludes our two-part discussion on everything you should know about Watson Assistant entities! We started with simple dictionary-based entities to get our assistant to work quickly. Then, we explored the power of contextual entities to deal with unseen entity phrases.

Let us know if you have ideas to make Watson Assistant better, and join our beta program to see new features sooner.


Burak Akbulut
IBM watsonx Assistant

I work on IBM Watson Assistant, creating chatbots that improve customer service using ML. When I’m not on Slack, I’m in Cape Cod kitesurfing, at the gym, or reading.