As automation continues to be a top business priority in 2020 and beyond, digital AI assistants that leverage the power of natural language to engage with customers across multiple channels are proliferating.
So how do organizations manage the growing number of different conversational AI bots to ensure that they can collaborate efficiently and provide a consistent and superior user experience?
This article introduces a novel architectural approach to orchestrating multiple chatbots across departments, regions, and use cases. It is especially relevant for businesses that have advanced conversational AI solutions, that run multiple bots across the organization, or that have hit the intent limitations of their NLP engine.
Collaboration Means Better Conversational AI Experiences
Businesses have matured from using chatbots for simple Q&A use cases to using conversational AI to automate many different customer and employee interactions across the engagement life cycle. The use cases range from initial interactions in a sales or hiring process, to operational interactions like onboarding, knowledge management, payments, renewals, and user support.
Typically, as a business starts its AI journey, it chooses a first use case that addresses a pain point and also offers the potential for significant business impact. The first task-oriented digital assistant might focus on converting online customers to a sale, automating common customer service interactions, or digitizing the customer onboarding journey. This varies according to the unique circumstances of the business.
The initial chatbot solution tends to be narrowly task-oriented and is trained to respond to and automate very specific requests and workflows. Just as a human worker is skilled and trained to be a subject matter expert, individual bots are also ideally trained for a specific skill set. This enables them to execute their role well rather than being overloaded with too many capabilities. Where a business use case requires many different skills, it is the combination of these that gets the job done. Whether these skills are associated with human workers or bots or a blend of both, the overall quality of the experience hinges on collaboration.
Collaboration across several skilled AI bots has emerged as both a requirement and a challenge for enterprises as they expand their conversational AI projects.
NLP Challenges as Bot Projects Expand
The two most common problems that companies face on their automation journeys are 1) intent overload, as they expand the capabilities of their solution and the complexity of their use cases, and 2) the proliferation of individual standalone bots from different vendors, often built on different NLP technologies.
1. The Problem of Intent Overload
Intents represent the purpose of a user’s input or conversation. The designer defines an intent for each type of user request they want their application or bot to support. Intent overload occurs when the designer adds too many intents in an attempt to deal with a wide range of topics that the bot should handle or when the designer is trying to cover all the different permutations that can occur with a complex use case. Packing too many intents into an NLP model can lead to what is termed “overclassification” in the AI world — this is essentially a rise in false positives, where the bot sends the wrong response to the user query. When this happens the conversational experience is impacted, even to the point of bot failure.
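To make the idea concrete, here is a deliberately naive sketch of intents and misclassification. The intent names, utterances, and keyword-overlap scorer are all invented for illustration; real NLP engines use trained statistical models, but the failure mode is the same: as intents multiply and their vocabularies overlap, near-ties between intents produce false positives.

```python
# Hypothetical intent definitions; names and utterances are invented.
INTENTS = {
    "check_order_status": ["where is my order", "track my order", "order status"],
    "cancel_order": ["cancel my order", "stop my order"],
    "check_refund_status": ["where is my refund", "refund status"],
}

def naive_classify(utterance: str) -> str:
    """Toy scorer: counts words shared with each intent's sample utterances.
    Overlapping vocabulary across many intents is what drives false positives."""
    words = set(utterance.lower().split())
    scores = {
        intent: max(len(words & set(sample.split())) for sample in samples)
        for intent, samples in INTENTS.items()
    }
    return max(scores, key=scores.get)

print(naive_classify("where is my order"))  # check_order_status
```

With three intents the scorer behaves; add dozens more intents that mention "order", "status", or "where is my", and the score margins shrink until the wrong intent wins, which is the overclassification problem described above.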
A bot may handle the initial requirements of the business use case well, but its performance often degrades over time as the business adds capabilities such as language detection and translation or data protection, or simply adds new intents to extend its scope. These additions gradually overload the intent engine. Consider small talk: the short questions and responses designed to make conversation with the bot feel more human. Small talk adds no real business value, yet it can gobble up a significant number of intents, reducing the number left for productive intents that solve the business problem.
Natural human conversations often take twists, turns, and detours in the course of a single dialog. If the bot is designed to handle this, the NLP engine needs to keep track of the conversation and maintain context, even as the conversation shifts to a different question or task. To achieve this, the bot needs to grow: to do more things, manage more conversations, and make appropriate decisions for the user. This ultimately tests the intent limits of the bot.
Depending on the underlying NLP engine and how it is configured, the number of intents a single bot can handle can be surprisingly low. For example, Amazon Lex has a soft limit of 100 intents per bot, while Microsoft LUIS allows up to 500 intents per application. Solving complex business problems or handling multiple product lines can quickly hit the intent ceiling of any given NLP engine, reducing the capability of the bot or the quality of the experience.
One approach to dealing with this is to divide the problem up, distributing different skills (or products) across multiple bots. For example, if a particular business use case needs 3,000 intents to deliver a good experience, this can be handled by anywhere from 6 to 30 bots, depending on the intent limits of the NLP engine. However, for this to function well, these individual bots need to collaborate with each other and with a master dispatcher to present a single conversation to the customer. Hence the need for a good orchestration model.
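The bot count follows directly from the per-engine intent limits quoted above; a one-line helper makes the arithmetic explicit (the limits are the vendors' published quotas, and the partitioning assumes intents split evenly across bots):

```python
import math

def bots_needed(total_intents: int, intents_per_bot: int) -> int:
    """Minimum number of single-engine bots needed to cover all intents,
    assuming intents can be partitioned evenly across bots."""
    return math.ceil(total_intents / intents_per_bot)

print(bots_needed(3000, 100))  # Amazon Lex soft limit of 100 -> 30 bots
print(bots_needed(3000, 500))  # Microsoft LUIS limit of 500  -> 6 bots
```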
2. The Proliferation of Standalone Bots in the Business
Besides the issue of expanding the capabilities of a single bot, different departments in a company often create their own standalone digital assistant independent of each other. For example, sales may have a digital assistant that helps online customers through a purchasing or renewals process, customer service may have a bot to answer order status queries, and operations may have one to manage payments and collections.
Different approaches, NLP engines, tools, and conversational user experiences emerge, all under the same brand. This undermines a consistent brand experience and a single access point for the user, whatever their interaction may be. And with AI bots managed individually in departmental silos, agility and economies of scale are difficult to realize.
Multiple bots can also proliferate within a single department. Take an IT helpdesk, where one bot may have been deployed to help employees order laptops and accessories, another to give them access to the technical knowledge base, and a third to handle password resets and other support issues. These are separate yet related use cases that ideally should be universally accessible. Since packing all this functionality into a single bot isn't feasible due to intent limits and overload, it makes sense to layer them under a super bot that orchestrates across the task-oriented bots, making for a seamless conversation, a consistent experience, and better management.
Evolving a Multi-Bot Orchestration Solution
The way to overcome this is through multi-model NLP orchestration that enables the business to blend independently managed bots into a unified experience.
The brain behind this is the bot orchestrator and dispatcher that:
- Enables the deployment of multiple NLP bot models in parallel, on the order of hundreds, each with its own training data, intents, and utterances, regardless of the mix of NLP engines.
- Sits at the front of every conversation, navigating operations across bots and routing each request to the appropriate AI bot or bots according to the detected intent.
- Centralizes capabilities like language detection and translation, authentication and verification, small talk, data redaction, and escalation, avoiding duplication of these across bots and making it faster and easier for the business to build and scale bots.
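The two-tier pattern described by these bullets can be sketched as follows. This is a minimal illustration, not ServisBOT's actual API: the class, method names, and the keyword-based top-level intent model are all invented, and a real deployment would replace `_top_level_intent` with a trained routing model and the lambdas with full skill bots on their own NLP engines.

```python
from typing import Callable, Dict, List

class BotOrchestrator:
    """Hypothetical dispatcher: routes each utterance to the skill bot
    that owns the detected top-level intent; handles escalation centrally."""

    def __init__(self) -> None:
        self._bots: Dict[str, Callable[[str], str]] = {}
        self._routes: Dict[str, str] = {}  # top-level intent -> bot name

    def register(self, name: str, handler: Callable[[str], str],
                 intents: List[str]) -> None:
        self._bots[name] = handler
        for intent in intents:
            self._routes[intent] = name

    def dispatch(self, utterance: str) -> str:
        intent = self._top_level_intent(utterance)
        bot = self._bots.get(self._routes.get(intent, ""))
        if bot is None:
            return "escalate-to-human"  # centralized escalation path
        return bot(utterance)

    def _top_level_intent(self, utterance: str) -> str:
        # Stand-in for a real top-level NLP model: crude keyword routing.
        text = utterance.lower()
        if "laptop" in text or "order" in text:
            return "hardware_request"
        if "password" in text:
            return "account_support"
        return "unknown"

orch = BotOrchestrator()
orch.register("hardware_bot", lambda u: "Opening a hardware request...",
              ["hardware_request"])
orch.register("support_bot", lambda u: "Starting password reset...",
              ["account_support"])
print(orch.dispatch("I forgot my password"))  # Starting password reset...
```

Because routing and escalation live in one place, each skill bot stays small and focused, which is exactly what keeps every individual NLP model under its engine's intent limit.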
To understand the significance of this model, consider the analogy to parallel computing. For years, computer designers built bigger monolithic computers in an effort to be the biggest and fastest supercomputer in the world. This changed in the mid-80s when Caltech put 64 off-the-shelf microprocessors in parallel and showed how to scale using parallel compute rather than building monoliths.
Using a similar approach of parallel scaling, we developed a Machine Learning algorithm to scale 100s of standard NLP engines from Google, Amazon, IBM, Microsoft, and Facebook to allow an order of magnitude increase in the number of intents, utterances, and entities that a conversational solution can use to solve complex problems in business.
The ServisBOT multi-model orchestrator can be deployed on Amazon Lex, Google DialogFlow, or on the open-source Rasa AI engine. A single orchestrator can manage and direct hundreds of individual NLP solutions running on different NLP engines — Dialogflow, Amazon Lex, Microsoft Luis, IBM Watson, Wit.ai as well as on-premise solutions like Rasa or even custom models.
A Common Bot Definition Format allows individual NLP models to be ported from one NLP engine to another, a so-called lift-and-shift e.g. from IBM Watson to Google DialogFlow or from Microsoft Luis to Amazon Lex. This makes it easy to compare the performance and accuracy of individual NLP engines and move workloads to the engine best suited for the application.
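A lift-and-shift of this kind implies an engine-neutral source of truth that can be projected onto each engine's native schema. The sketch below is illustrative only: both the neutral schema and the Lex-flavoured target shape are invented here and are not the actual ServisBOT Common Bot Definition Format or the real Lex import format.

```python
# Hypothetical engine-neutral bot definition (invented schema).
common_definition = {
    "name": "order_bot",
    "intents": [
        {"name": "CheckOrderStatus",
         "utterances": ["where is my order", "track order {orderId}"],
         "slots": [{"name": "orderId", "type": "number"}]},
    ],
}

def to_lex_like(definition: dict) -> dict:
    """Project the neutral schema onto a Lex-flavoured structure.
    A sibling converter per engine (Dialogflow, LUIS, Rasa) would
    enable the lift-and-shift described above."""
    return {
        "botName": definition["name"],
        "intents": [
            {
                "intentName": intent["name"],
                "sampleUtterances": intent["utterances"],
                "slots": [
                    {"slotName": slot["name"], "slotType": slot["type"]}
                    for slot in intent.get("slots", [])
                ],
            }
            for intent in definition["intents"]
        ],
    }

print(to_lex_like(common_definition)["botName"])  # order_bot
```

Keeping the neutral definition as the source of truth means the same training data can be deployed to several engines, which is what makes side-by-side accuracy comparisons between engines practical.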
For businesses that have witnessed the challenges mentioned above or for those that are looking at more scalable and efficient conversational AI architectures, the key benefits of this multi-model NLP orchestration model are hard to ignore.
- The number of intents handled by an individual conversational AI solution can be increased by an order of magnitude. For example, the current Amazon Lex limit of 100 intents can be extended to 10,000+ intents.
- More complex conversational AI solutions can be built and managed without the need for multi-million dollar budgets and teams of data scientists.
- Optimal performance and outcomes can be achieved through the mix and match of different NLP engines in a production environment.
- Large cost savings can be achieved compared to building and scaling custom monolithic models.
As enterprises increasingly adopt conversational AI solutions across their business and become more sophisticated in their application of the technology, they are looking more seriously at NLP orchestration and bot collaboration in an effort to further enhance conversational experiences and get bots to market faster and more efficiently. A clever multi-bot architecture will help them get there.