Ambiguity in Natural Language Processing, Part II: Disambiguation

In the first part of this essay, we discussed some of the key characteristics of ambiguity in natural language processing (NLP) systems. Considered one of the most challenging aspects of NLP solutions, ambiguity encompasses a broad spectrum of forms, from lexical and semantic ambiguity to more complex structures such as metaphors. Handling ambiguity requires sophisticated NLP techniques, but it is also an essential capability of robust conversational applications.

In artificial intelligence (AI) theory, the group of techniques used to handle ambiguity is known as disambiguation. From a conceptual standpoint, disambiguation is the process of determining the most probable meaning of a specific phrase. Typically, disambiguation leverages statistical models to estimate the probability of each interpretation of an assertion. Even though it sounds simple, the algorithms that deal with disambiguation are among the most complex in NLP applications. To be effective, disambiguation techniques should combine knowledge from the following types of models:

— World Model: This type of knowledge reflects the probability of a specific utterance based on global knowledge of the world. For instance, if we say “John’s ears are burning”, we should infer that somebody was speaking about John rather than that John caught fire.

— Mental Model: This type of knowledge reflects the probability of the speaker’s intention to communicate a known fact in the context of a specific conversation. For example, in a dialog about baseball, if we say “the pitcher was painting the corners”, the model will assume that the pitcher was throwing strikes rather than painting a window.

— Language Model: This type of knowledge expresses the likelihood that certain strings of words will be chosen given the speaker’s intentions. Language models use advanced linguistic analysis in order to infer the correct meaning of an utterance.

— Acoustic Model: This type of knowledge is similar to the language model, but it focuses on spoken communications and factors in aspects such as accents, sentiment, and many other relevant elements of audio communications.
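The models above can be seen as independent scoring functions whose evidence is combined to pick the most probable reading. As a minimal sketch, assuming each model assigns a probability to every candidate meaning (the meaning names and numbers below are invented for illustration, not drawn from a real model):

```python
# Toy sketch: rank candidate meanings by combining per-model scores.
# The candidate names and probabilities are hypothetical.

def most_probable_meaning(candidates):
    """Select the meaning with the highest product of model probabilities."""
    return max(candidates, key=lambda c: c["world"] * c["mental"] * c["language"])

# Candidate readings of "John's ears are burning"
candidates = [
    {"meaning": "people_are_talking_about_john",
     "world": 0.9, "mental": 0.8, "language": 0.6},
    {"meaning": "john_is_on_fire",
     "world": 0.01, "mental": 0.05, "language": 0.4},
]

print(most_probable_meaning(candidates)["meaning"])
# → people_are_talking_about_john
```

Multiplying the scores treats the models as independent evidence sources; a real system would weight and normalize them, but the idea of fusing several knowledge types into one ranking is the same.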

Some Best Practices to Deal with Disambiguation

Most modern NLP stacks include sophisticated, general-purpose disambiguation techniques. However, AI agents should contribute disambiguation knowledge specific to their domain. The following ideas are relevant when dealing with disambiguation:

— Context, Context, Context: In order to be effective, disambiguation processes require rich contextual information about a conversation. Actively populating contextual metadata throughout the different stages of a conversation is essential to handle ambiguity.

— Global vs. Domain-Specific Disambiguation: While NLP stacks are very effective at interpreting utterances using global knowledge, AI agents should complement that ability with domain-specific disambiguation models that include knowledge of specific industries, ethnic groups, etc.

— Continuous Training: There is no more effective way to handle ambiguity than the continuous training of AI agents. Recording ambiguous utterances from past conversations with users and using them as a training data source is an effective way to handle ambiguity in the long term.
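As a minimal sketch of the recording step, assuming a simple JSON-lines log and hypothetical field names, ambiguous utterances and their conversation context could be captured for later retraining like this:

```python
# Toy sketch: append ambiguous utterances to a JSON-lines file that
# can later feed a training pipeline. Field names are hypothetical.
import json
from datetime import datetime, timezone

def log_ambiguous_utterance(utterance, candidate_meanings, chosen_meaning,
                            context, path="ambiguous_log.jsonl"):
    """Append one ambiguous utterance, its candidate meanings, the
    resolution chosen, and the conversation context to the log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "utterance": utterance,
        "candidates": candidate_meanings,
        "chosen": chosen_meaning,
        "context": context,  # e.g. conversation topic, prior turns
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ambiguous_utterance(
    "the pitcher was painting the corners",
    ["baseball_player", "container"],
    "baseball_player",
    {"topic": "baseball"},
)
```

Keeping the context alongside each utterance is what makes the log useful: the same phrase may resolve differently in a different conversation, so the training data must preserve both.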
