Why Chatbots Suck
Facebook Messenger commerce bots and Google Assistant went big, nearly every company released its own bot, and the Amazon Echo Dot and Google Home Mini became top holiday gifts: chatbots and assistants finally went mainstream in 2017. Perhaps you’ve already used one to check store hours, fix your internet or even buy a spare turkey.
Although NLP has improved dramatically, chatbots’ biggest shortcoming is that they’re still scripted and handcrafted; nothing more than a glorified IVR or, worse, a query-based search. Stray too far from the script and you get the dreaded ‘I don’t understand the question’.
To understand why chatbots suck, you need to understand how they’re built.
Step 1) Decide what scenarios you want the chatbot to service and gather all the related questions it will answer. Each of these questions defines an intent. All these intents are then built into a tree for the NLP engine.
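To make step 1 concrete, here is a minimal sketch of what an intent tree might look like in practice. The scenario and intent names are hypothetical, not from any specific NLP engine:

```python
# Hypothetical intent taxonomy for a retail chatbot: top-level scenarios
# branch into the specific intents the bot can service.
intent_tree = {
    "store_info": ["get_store_hours", "get_store_location"],
    "support": ["fix_internet", "track_order"],
    "commerce": ["buy_shirt", "buy_turkey"],
}

def all_intents(tree):
    """Flatten the tree into the full list of intents the NLP engine must recognize."""
    return [intent for children in tree.values() for intent in children]
```

Everything the bot can do must be enumerated here up front, which is exactly why coverage gaps hurt.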
Step 2) Each question and intent is represented by a number of ways in which a user could express it, e.g. checking the weather with ‘What’s the weather in Mountain View’ vs. ‘How cold is it today’. These are called variances. The NLP engine maps each of a chatbot’s variances to its corresponding intent.
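A toy illustration of the variance-to-intent mapping. Real NLP engines use trained classifiers rather than this keyword lookup; the phrases and intent names below are made up just to show the shape of the problem:

```python
# Each intent is listed with a few of its handcrafted variances.
variances = {
    "check_weather": ["what's the weather in mountain view", "how cold is it today"],
    "get_store_hours": ["when do you open", "what time do you close"],
}

def classify(utterance):
    """Map a user utterance to an intent, or None if it's off-script."""
    text = utterance.lower()
    for intent, phrases in variances.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # the dreaded "I don't understand the question"
```

Any utterance that was not anticipated as a variance falls straight through to the failure case.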
Step 3) Finally, the chatbot script, or conversation flow, ties it all together, e.g. if you’re buying a shirt: enter gender, size, color. Go off topic with ‘Is this shirt made of cotton?’ and tough luck.
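The rigid script from step 3 can be sketched as a fixed slot-filling loop. Slot names and the flow itself are illustrative; anything the user says that doesn’t fill the next slot has nowhere to go:

```python
def run_shirt_script(answers):
    """Walk the fixed buy-a-shirt script: gender -> size -> color.
    `answers` maps slot names to user replies; anything else is ignored."""
    slots = ["gender", "size", "color"]
    filled = {}
    for slot in slots:
        if slot not in answers:
            # The script has exactly one path; an off-topic question stalls it here.
            return f"Sorry, I need your {slot} to continue."
        filled[slot] = answers[slot]
    return f"Ordering a {filled['color']} {filled['gender']} shirt, size {filled['size']}."
```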
Chatbots suck because all of these steps have huge shortcomings.
To do a great job at capability coverage, you need to know all the questions users might ask. Without user data, you run into a cold-start problem: you ship a beta chatbot with limited capabilities that gathers data to learn but frustrates users along the way. Very few companies have copious amounts of data at the start, so the vast majority of chatbots are released mediocre, with the goal of improving over time.
To classify questions really well, you need great variants of questions that encode the wide domain knowledge of that industry, e.g. ‘internet does not work’ is the same as ‘my browser is not loading’. Today, these variances are mostly manually generated, which limits coverage of user questions and can be cost prohibitive. New ways to auto-generate these variances at scale are becoming available, though it will take some time for this intelligence to gain domain context.
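One simple, admittedly domain-blind way to auto-generate variances is synonym substitution over a seed template. This is a sketch of the idea, not how any production engine does it, and the synonym table is invented:

```python
from itertools import product

# Hypothetical synonym table; a real system would need domain knowledge to know
# that "my browser is not loading" also means "internet does not work".
synonyms = {
    "internet": ["internet", "wifi", "connection"],
    "work": ["work", "function", "connect"],
}

def generate_variances(template):
    """Expand a template like 'my {internet} does not {work}' into all combinations."""
    keys = list(synonyms)
    results = []
    for combo in product(*(synonyms[k] for k in keys)):
        phrase = template
        for key, word in zip(keys, combo):
            phrase = phrase.replace("{" + key + "}", word)
        results.append(phrase)
    return results
```

Surface-level substitution multiplies phrasings cheaply, but it cannot bridge genuinely different expressions of the same problem; that is where domain context is still missing.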
Finally, to excel at a rich conversational script, the chatbot needs to capture all the ways a user might traverse an intent scenario, with ways to backtrack to a previous option, jump to other selections and, hopefully, not get stuck. It’s difficult to craft a script without knowing how users will use it, a catch-22 that creates unavoidable user frustration.
Self-learning and personalized trees will solve these challenges and dramatically enhance the experience.
- Self-learning: Today, chatbots get better as usage data is captured and the feedback is used to manually expand service support, variances and script options - a slow and tedious process that again does not scale. Instead, chatbot intelligence should facilitate self-learning, so that the bot identifies the new capabilities, questions, variances or script changes that users want and adapts the experience accordingly. This way it automatically adds questions (and answers) for step 1, variants for step 2 and script changes for step 3 to quickly build a robust service.
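A rough sketch of the self-learning loop described above: log every utterance the bot failed to classify and surface the most frequent ones as candidates for new variances or intents. The class name and threshold are invented for illustration:

```python
from collections import Counter

class SelfLearningLog:
    """Collects classification misses and promotes frequent ones to candidates."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.misses = Counter()

    def record_miss(self, utterance):
        # Normalize lightly so repeats of the same question count together.
        self.misses[utterance.lower().strip()] += 1

    def candidates(self):
        """Utterances asked often enough to deserve a new intent or variance."""
        return [u for u, n in self.misses.items() if n >= self.threshold]
```

In a real system the candidates would feed a clustering or review step rather than being added verbatim, but the loop is the same: misses become training data.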
- Personalized: Chatbots don’t yet provide a magical personal experience. Personalization is related to self-learning in that each user gets a different script based on their learned behavior. The bot should be a custom instance, auto-tuned with your data, that dynamically updates the presented script as it learns. Ideally, everyone gets their own personal bot, similar to Samantha from the movie ‘Her’.
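At its simplest, personalization could mean re-ranking the script options each user sees by their own history. A toy sketch with made-up option names:

```python
from collections import Counter

def personalized_menu(options, user_history):
    """Order menu options by how often this user picked them before,
    falling back to the default order for options they've never chosen."""
    counts = Counter(user_history)
    return sorted(options, key=lambda o: (-counts[o], options.index(o)))
```

The same default script thus surfaces differently for every user, the first small step toward the fully custom bot described above.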
As the pace of AI innovation picks up, 2018 will bring wider adoption of these intelligence capabilities, enabling deeper, more personalized conversations with chatbots.