A Business Leader’s Guide to Chatbots

Praful Krishna · Published in The Startup · 8 min read · Jul 1, 2020

There are too many chatbot vendors, platforms, and approaches out there. Almost every one claims a unique AI-enabled approach. A few work; many don't. With marketing amped up, it is harder and harder to discern the good from the bad. On the other hand, there are bold mandates from senior management for comprehensive digital transformation. All this leaves the business leaders responsible for implementing chatbots at sea. Add limited budgets for trials, limited appetite for in-house teams, and limited access to data, and you get a scenario with many false starts and bots that barely work.

This article outlines a systematic approach to enabling conversational experiences for your organization. It only skims the surface, but message me if you would like to discuss more.

1. Understand what you really need

As with most things AI, planning with the end user at the center is the most important part of any project. The first step is to decide what exactly you need. Not want. Need. Everything starts from here.

As examples, here are a few well thought-through objectives I have seen in my career:

  • To let an internal or external customer discover some specific information e.g. finding out vacation policy for pregnant employees, or finding the best number to call for cancelling international airline tickets.
  • To enable a customer to complete a task quickly without waiting for a human agent e.g. blocking a credit card, or creating a ticket to report a bug in software.
  • To engage a prospective lead and push them further down the marketing journey e.g. point of sale discounts for bundled products, or generating leads on Facebook.
  • To enhance the value of the organization’s brand e.g. check out the Sephora chatbot — almost everything it does is barely a click away in any case.
  • To save the costs of human call center agents and call center infrastructure. (To be honest, this is the worst objective to start with. Let it be a byproduct of the other objectives.)

The end-goal must dictate everything. For example, if you are building an internal chatbot for your company’s HR policies, you may think about a Slack app with to-the-point answers driven by recall-first natural language search (don’t worry if you don’t know what that means). It’s reasonable to assume that employees will click an “Escalate to Human” button if they need to. You will think about ways that HR experts can answer the escalated questions at their leisure, and ways the app can learn from their answers.

If, by contrast, you are building a conversational interface for blocking lost credit cards, it must be a bespoke widget on your website, mobile apps, and other touch points, e.g. social media. If you already have a chat interface, this app must integrate with it, triggering on the right intent. It must be able to pop open structured forms for critical information like SSN and credit card number, to ensure security and evoke trust. It must be truly conversational and empathetic. You will rely less on AI and more on manually crafted responses. Of course, it must be high-precision, it must be real time, and it must know on its own when to escalate to a human.
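The "know on its own when to escalate" behavior can be sketched as a simple confidence check. This is a minimal illustration, not a real product: `classify_intent` here is a toy keyword matcher standing in for a real NLU service, and the intents, keywords, and threshold are all made up.

```python
# Illustrative sketch: route a message to a scripted flow, a structured
# form, or a human agent based on intent-classifier confidence.
# classify_intent is a toy stand-in for a real NLU service.

ESCALATE_THRESHOLD = 0.75  # below this confidence, hand off to a human


def classify_intent(message: str) -> tuple[str, float]:
    """Toy intent classifier: keyword overlap with a crude confidence score."""
    rules = {
        "block_card": ["lost", "stolen", "block"],
        "report_bug": ["bug", "crash", "error"],
    }
    tokens = set(message.lower().split())
    best, score = "unknown", 0.0
    for intent, keywords in rules.items():
        hits = len(tokens & set(keywords)) / len(keywords)
        if hits > score:
            best, score = intent, hits
    return best, score


def route(message: str) -> str:
    """Decide the next step for an incoming message."""
    intent, confidence = classify_intent(message)
    if confidence < ESCALATE_THRESHOLD:
        return "escalate_to_human"
    if intent == "block_card":
        return "open_secure_form"  # collect SSN / card number in a form, not chat
    return intent
```

In a real deployment the confidence would come from the NLU model itself; the point is only that the hand-off rule is explicit, tunable, and owned by your team.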

In short, think about the right experiences and interfaces rather than the right chatbot.

2. Decide how much AI is necessary

After you know what you want, you need to decide who to work with to make it happen. The most important factor for that is to understand how critical artificial intelligence is to your vision. This will dictate build vs buy and choice of vendors, among other things.

To understand the role of AI, let’s quickly look at the anatomy of a conversational interface. Typical bots are designed around manually crafted conversation flows. The bots expect users to enter one of the pre-programmed sets of keywords e.g. “Hi! I lost my credit card.” This is called an intent. The bots are programmed to reply to such intents as well, e.g. “I am sorry to hear that! Don’t worry, I can help you deal with this hiccup. When did you lose the card?” Again, the bot expects another set of intents e.g. “yesterday”, “on Tuesday”, or “just now”, and depending on the customer’s answer the bot responds appropriately or invokes some other function e.g. actually blocking the card after verification.
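The manually crafted flow described above can be sketched as a small state machine. All states, trigger phrases, and replies below are illustrative, not taken from any real product:

```python
# Toy version of a hand-crafted conversation flow: each state maps an
# expected trigger phrase (the intent) to a scripted reply and next state.

FLOW = {
    "start": {
        "lost": ("I am sorry to hear that! Don't worry, I can help you deal "
                 "with this hiccup. When did you lose the card?", "ask_when"),
    },
    "ask_when": {
        "yesterday": ("Thanks. Let me verify your identity and block the card.",
                      "verify"),
        "just now": ("Thanks. Let me verify your identity and block the card.",
                     "verify"),
    },
}


def step(state: str, user_input: str) -> tuple[str, str]:
    """Return (bot_reply, next_state); fall back to a human when nothing matches."""
    for phrase, (reply, next_state) in FLOW.get(state, {}).items():
        if phrase in user_input.lower():
            return reply, next_state
    return "Sorry, I didn't get that. Let me connect you to an agent.", "human"
```

Without AI, matching is this literal: "My card was stolen" would fall through to the human fallback because the word "lost" never appears, which is exactly the gap the next section discusses.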

Artificial Intelligence helps such a bot in three ways:

  1. It helps match user inputs with the right intent. For example, if the user says “My card was stolen,” it’s perhaps the same intent as “Hi! I lost my credit card.” If so, the app must know semantic relationships like card = credit card and stolen = lost. AI-driven natural language understanding (NLU) can be very helpful for this.
  2. It helps identify prominent intents and likely conversation flows based on the history of users’ interactions with human agents. Most organizations have copious transcripts readily available. Analyzing them using AI, NLU, or other data science techniques tells you which intents should be manually programmed.
  3. Let’s face it — there is no way your team can think of all possible intents and program them. AI helps deal with un-programmed intents. Before escalating to a human, AI-driven natural language search (NLS) can go through your entire information base and surface the right content. This is not trivial. At one client, my team integrated NLS with a ticketing system to successfully deflect 64% of tickets.

So now you need to look at your product vision and the nature of your available data to plan which parts need AI and which parts traditional software can handle. AI comes with huge costs, the least of which is the dollars you pay. It comes with marvelous promise as well.

Broadly, AI is very useful when there are too many intents to program manually and the language used by your customer base is too varied.

3. Avoid the common pitfalls

When someone thinks of chatbots, there are certain metaphors and use cases that come to mind. Unfortunately, at times these metaphors are completely the wrong ones. Many business leaders fall prey to the fallacy of focusing on the tool rather than the experience. Three mistakes are prominent.

First, do not focus only on text. In reality, humans respond much better to visuals. “A picture is worth a thousand words” is not just a cliché for conversational experiences. Similarly, a two-minute video is at times much more powerful at explaining something than a series of bullet points. It’s better to think of the entire experience from the user’s point of view and build in a multimedia format. The anchor of this experience can surely be a text chat, but the delivery of information doesn’t have to be. The paper linked below talks about how NLS handles images (look for “Textual Representation of Images”). There are multiple other approaches.

Second, try not to anthropomorphize. The motivation to do so is powerful — a digital persona best captures the metaphor of an intelligence substituting for a human. However, early anthropomorphized bots were often associated with errors, bugs, mistranslations, and the like. IBM started this trend but ended up being ridiculed for over-promising and under-delivering. Many others followed the same path. While there are exceptions, users typically associate that mistrust with digital personas.

Related to that is the idea that bots should be completely transparent about when a user is talking to a bot versus a human. The expectations are completely different — users change what they type, they expect a lower level of accuracy and relevance in responses, and they are willing to click links or tap buttons when they know they are talking to a bot. Trust levels also differ by interface. For example, when providing their credit card numbers, users trust a form, a bot, and a human in that order.

There are numerous such pitfalls, and they are not intuitive. The best way forward is to take an agile product management approach to conversational experiences, with copious input from customers.

4. Make business case for the roadmap, not the project

Let’s say you have articulated your vision (the Desirability) and figured out what is possible and how to avoid the pitfalls (the Feasibility). Once all the design elements are thought through, socialized, tested with users to the extent possible, and put in place, a new challenge emerges — that of Return on Investment (the Viability). Designing any product is an iterative process. For conversational experiences, the Viability part imposes the biggest constraints.

Let’s understand the two aspects of this problem. First, conversational experiences are bespoke to each situation. It is nearly impossible to simply take a product off the shelf and quickly implement it. Second, the solution must be accurate; in most cases users will otherwise lose trust in it (see the Virtuous Cycle of Trust image). The need for accuracy further adds to the customization and costs. This creates a trade-off for the first project: if we go very broad, accuracy is hard to maintain; if we go very narrow, the benefits don’t justify the costs.

In my experience this problem can be solved only by a phased roadmap approach — start small, but be very clear that the same solution can easily be extended to many other situations. It is much easier to justify the business case for a roadmap than for a single project.

Similarly, it may be better to deploy the solution in a phased manner where possible. Start with situations that are more tolerant. For example, if you are planning a completely virtual agent for your customers, why not expose it first to your customer service agents for fine-tuning?

The two-minute video below talks more about choosing the right scale when planning AI projects. The same holds true for chatbots.

5. Understand that AI is different from other software

Finally, this is beginning to look like any other business project. Still there are many nuances that separate a conversational interaction project from others. I have written a lot about different aspects in the past. For example — cost, alignment, build vs buy and ROI.

Alignment with all stakeholders is of particular note. It does sound like yet another cliché, but I have found that projects related to automation carry a subtext of job loss, real or perceived. It is important for the conversational interaction team to build trust with its partners early in the process. A good way to do that is to start with win-win projects rather than automation projects that may lead to reorganizations.

As said earlier, this article only skims the surface. Hopefully, however, it provides you with a framework to think through what may be the harbinger of your company’s migration to the new age.
