Good Bot Design Means Never Having to Say: ‘I’m sorry, I didn’t get that’

Jyoti Iyer
Salesforce Designer
7 min read · Jan 31, 2023
A graphic of a lit-up phone screen showing chat, with one message from a person and another from a bot. A person is holding the phone and hovering their finger over the chat.
Hobbitfoot/AdobeStock

I am sure you’re familiar with that moment when you’re talking to a bot and you say something — even just “hello” — only to get the reply: “I’m sorry, I didn’t get that.” This unfortunate moment is like a “sorry cliff”: the conversation goes downhill from there. Understanding the sorry cliff is key to designing conversational experiences that deliver not only happy paths but also graceful error handling.

Maxims of Conversation

Why is the frustration caused by a sorry cliff-style failure so acute? In general, people expect conversations to go smoothly. Even if the conversation is with a bot, we feel particularly thwarted when a clear and unambiguous input fails to elicit a helpful response.

When a bot is unhelpful, the experience isn’t so different from:

  • When you’re talking politely to a person but they respond in a reluctant or hostile way (deliberately uncooperative).
  • When you ask a person a question they should have the answer to but they don’t (accidentally uncooperative).

The expectation of conversational success is related to what linguists call the Cooperative Principle, formulated by philosopher-linguist Paul Grice: “Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.”

The Cooperative Principle is an idealization of how we talk — obviously real conversations are much messier. Still, in our day-to-day use of language, we are keenly aware of deviations from this ideal: we recognize when someone is veering off-topic, hiding something, oversharing, or being coy, cagey, or careful.

For the field of conversation design, a big conceptual challenge is deciding to what degree bots should mirror how people normally interact. Many conversation designers have re-conceptualized the following four Gricean maxims, which make up the Cooperative Principle, into a guide for how a bot should “talk.”

1. Be honest (maxim of quality)
People tend to say things that are true, so bots should be truthful. This is about more than delivering accurate information; it also extends to how the bot presents itself. For instance, a bot should declare itself a bot and not claim to be a human agent.

2. Be optimally informative (maxim of quantity)
People tend to say only as much as is necessary, rather than including all possibly relevant information in every utterance. Bots should be optimally informative by recognizing the correct intent behind the user’s utterance. They should offer enough information to satisfy the user, but not so much as to overwhelm them.

3. Be relevant (maxim of relation)
People tend to include information pertinent to a conversational goal and exclude what is off-topic, so bots should do the same. Balancing the demands of branding with bot-functionality is critical. For example, friendly banter may be used to express bot-personality, as long as it doesn’t distract from the user’s main purpose.

4. Be clear (maxim of manner)
People tend to be clear and unambiguous, and so should bots. For instance, people generally give instructions in the order they want them carried out. Bots often give implicit instructions in the form of questions that the user must answer. These information-seeking moves must be placed at the end of the bot’s turn.

Bot: We have pizza or pasta. What would you like to order?
Human: Pizza’s good.
Reversing the order of the sentences within the bot’s turn leads to problems:

Bot: What would you like to order?
Human: Let’s have a… [bot interrupts]
Bot: We have pizza or pasta.
Human: I know that! I want pizza!
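
To make that ordering concrete, here is a minimal sketch in Python (invented names, no particular bot framework) of a turn builder that always puts the information-seeking question last:

from dataclasses import dataclass

@dataclass
class BotTurn:
    """A single bot turn: statements first, exactly one question last."""
    statements: list[str]
    question: str

    def render(self) -> str:
        # Per the maxim of manner, the information-seeking move goes at
        # the end of the turn, so the user knows when it's safe to answer.
        return " ".join([*self.statements, self.question])

turn = BotTurn(statements=["We have pizza or pasta."],
               question="What would you like to order?")
print(turn.render())
# We have pizza or pasta. What would you like to order?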

Anti-Maxims of Error-Handling

Stylistic rules like these have been useful in creating more natural experiences but don’t completely eliminate the sorry cliff. While bots do what we program them to do, for humans, flouting or ignoring conversational rules is just as natural as observing them.

There are some common inputs that can cause fatal errors for an untrained bot. Although there are bound to be some surprises once a bot is deployed, conversation designers can anticipate and plan for the following three kinds of situations. Below is a re-re-conceptualization of the Gricean maxims into three anti-maxims of error-handling.

1. Assume humans will ‘do it wrong’
People using your bot will deviate from the path suggested by the design. You can contain that deviation with a little planning.

  • When the user wants the main menu
    Most chat-based bots include a main menu that a user might want to return to. “Anything else I can help you with?” is a common bot-initiated way to effect this return at the end of a flow. However, users may want to make that same transition mid-flow. It’s table stakes to recognize inputs like “return to menu” and “start over” that match a main menu intent (a code sketch covering these escape intents follows this list). Don’t underestimate the value of visual alternatives like a clickable persistent menu.
An example persistent menu to accompany a chatbot. It has three options: “Talk to an agent”, “Outfit suggestions”, and “Shop now”.
A helpful persistent menu. Source: Persistent Menu documentation from Meta
A screenshot of a bot from MobileMonkey, showing failure to understand simple inputs like “main menu” and “exit”. In both cases its response is “Sorry, you must select one of the choices below.” followed by a re-routing step.
Stuck in a loop with a bot that doesn’t include an intent for Main Menu or Exit. Source: https://mobilemonkey.com/blog/menu-based-chatbot
  • When the user wants to leave
    If the user says “bye” or “stop” or “exit” their intention is clear. It’s important to build a quick escape path via an exit intent. A user trying to leave doesn’t want to be coaxed into staying.
A screenshot of an example chatbot for mental health chatter, called Hey Jess. It fails to understand the input “stop”, and continues to talk about the user’s afternoon and how to get “some extra pep”. The user tries an alternative route by saying, “No, I mean stop messaging me please. Unsubscribe.” The user’s final message is amusingly desperate, “Unsubscribe?” with a question mark.
No way out of this conversation. Source: https://www.comm100.com/blog/chatbot-best-worst-practices.html
  • When the user wants an agent
    Good design takes an empathetic view of users trying to skip the queue to talk to a person. Providing a path to resolution in these situations is easy: build an agent intent to understand utterances like “agent” or “connect me with a person.” If case deflection is a high priority for the business, offer an intermediate resolution path before having to call on a human agent.
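
Below is a minimal sketch of these three escape hatches as plain keyword patterns in Python. The intent names and utterance lists are illustrative rather than any particular tool’s API; a production bot would use a trained NLU model, with keyword matching as the floor.

import re

# Illustrative patterns for the three "escape" intents described above.
ESCAPE_INTENTS = {
    "main_menu": re.compile(r"\b(main menu|return to menu|start over)\b", re.I),
    "exit":      re.compile(r"\b(bye|stop|exit|unsubscribe)\b", re.I),
    "agent":     re.compile(r"\b(agent|person|human)\b", re.I),
}

def match_escape_intent(utterance: str):
    """Check escape intents before the current flow's expected inputs."""
    for name, pattern in ESCAPE_INTENTS.items():
        if pattern.search(utterance):
            return name
    return None

print(match_escape_intent("No, I mean stop messaging me please."))  # exit
print(match_escape_intent("connect me with a person"))              # agent

The design point is to check these global intents on every turn, before the current flow’s expected inputs, so “stop” works mid-flow and not only at a flow’s end.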

2. Assume humans will ‘be irrelevant’
People frequently say things that aren’t required or even relevant to a conversation with a bot. Design creative responses for common cases like these:

  • People say hello
    Whether your bot is designed to start with pleasantries or get straight down to business, it’s wise to account for a diversity of greetings. The last thing anyone needs is a bot that can’t move beyond the very first turn. A hello intent is good to have.
  • People say thank you
    A positive and affirming manifestation of irrelevant inputs is people saying “thank you” to their home speakers or typing it into a chat, even if it’s clear they’re not talking to a human. Building a thanks intent can delight a polite user with a polite response.
  • People ask for things out of scope
    If the user inputs something truly out of the bot’s scope, a well-written error message that offers a path to resolution can transform the user experience. Anticipating high-frequency topics that the bot/business cannot resolve can have a real impact on customer service case deflection. For example, if a customer wants a cash refund and the bot can’t process cash refunds, it’s better to offer a voucher or route to a different channel than to give a generic error message. Most design tools now include the option of adding two error paths, and it’s well worth using both (see the sketch after this list).
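
To sketch the two-error-path idea, here is one way to escalate from an intermediate resolution to a handoff, assuming a simple per-session error counter (the refund wording mirrors the example above; nothing here is a specific tool’s API):

FIRST_ERROR = ("I can't process cash refunds, but I can offer you a "
               "voucher for the same amount. Would that work?")
SECOND_ERROR = "Let me connect you with someone who can help with your refund."

def handle_out_of_scope(session: dict) -> str:
    """First error path offers an alternative; the second routes onward."""
    session["errors"] = session.get("errors", 0) + 1
    if session["errors"] == 1:
        return FIRST_ERROR   # path 1: offer an intermediate resolution
    return SECOND_ERROR      # path 2: route to a different channel

session = {}
print(handle_out_of_scope(session))  # offers the voucher
print(handle_out_of_scope(session))  # routes to an agent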

3. Assume humans will be smarter than the bot
Design tools often have annoying but unavoidable technical restrictions. Don’t ignore them.

Android error message, “Sorry, but to do that, you’ll need to ask your Google Workspace administrator for permission.” Two buttons are visible below it: “Learn more” and “Calendar”. The input to the Assistant is shown: “recurring event for every Thursday”.
Unhelpful Android error message for a voice-based attempt at creating a recurring event. Source: My phone.
  • When the bot doesn’t have that capability
    The picture above shows a voice-based attempt at creating a recurring calendar event. The Android error message “Sorry, but to do that, you’ll need to ask your Google Workspace administrator for permission.” misses the user’s actual goal (creating an event), which is easily achieved in the Calendar app. A more helpful message would guide the user toward inputs the system can already parse, like buttons, or prompt them to open the relevant app.
  • When the bot has the capability but didn’t understand the input
    In real text messaging, people follow all sorts of highly cooperative practices, like breaking up a long message into several shorter messages. This avoids a potentially stress-inducing status like “Robo is typing….” Some questions a conversation designer can explore with engineering are:
    (a) What does our bot-building tool do with a single input broken up into multiple pieces?
    (b) How is the user experience affected by latency in bot-response?
    In voice-based experiences, backtracks are similar: “Beach toys, um, I mean Beach Boys.” It’s good practice to have a strategy that keeps a sorry cliff from interrupting an in-progress input (one such strategy is sketched below).
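
One way a designer and engineer might prototype an answer to question (a) is to buffer rapid-fire fragments and hand the bot a single combined input once the user pauses. A rough sketch, assuming an async Python handler (the debounce window and all names are invented for illustration):

import asyncio

DEBOUNCE_SECONDS = 1.5  # assumed pause that signals the user is done

class MessageAggregator:
    """Combine multi-part user messages into one input for understanding."""

    def __init__(self, on_complete):
        self.fragments = []
        self.on_complete = on_complete  # async callback, e.g. send to NLU
        self._timer = None

    def add_fragment(self, text: str) -> None:
        self.fragments.append(text)
        # Restart the timer on every fragment, so the bot waits for a
        # pause instead of answering (or apologizing) mid-thought.
        if self._timer:
            self._timer.cancel()
        self._timer = asyncio.create_task(self._flush_after_pause())

    async def _flush_after_pause(self) -> None:
        await asyncio.sleep(DEBOUNCE_SECONDS)
        combined = " ".join(self.fragments)
        self.fragments = []
        await self.on_complete(combined)

async def demo():
    async def send_to_nlu(text):
        print("Bot understands:", text)

    agg = MessageAggregator(send_to_nlu)
    agg.add_fragment("Beach toys,")
    await asyncio.sleep(0.5)
    agg.add_fragment("um, I mean Beach Boys")
    await asyncio.sleep(2)  # wait out the debounce window

asyncio.run(demo())

The same buffering gives a voice bot room to hear the whole backtrack before concluding that it didn’t understand.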

“Sorry,” like any other element in a bot-conversation, must be meaningfully designed rather than deployed as a fallback response to all errors. In your conversation design process, spend some time thinking about how your users might interact with your bot. I hope this short guide can help you design your own ways to anticipate conversational failure and avoid the dreaded sorry cliff.

Salesforce Design is dedicated to elevating design and advocating for its power to create trusted relationships with users, customers, partners, and the community. We share knowledge and best practices that build social and business value. Join our Design Trailblazers community or become a certified UX designer or certified strategy designer.


Jyoti Iyer
Salesforce Designer

Conversation Designer and Linguistics PhD. Favourite pastimes include: sticking to Br. spelling, smashing the patriarchy, and making music.