What It Means for Conversational AI to Be “Conversational”

Dr. Ender Ricart · Published in The Startup · Dec 20, 2019 · 18 min read
HAL is a well-known fictional conversational AI that lacked empathy for its users.

About the Author: Dr. Ender Ricart is a Principal UX Researcher at LivePerson, a company at the forefront of conversational AI applications for customer service. The content of this article is informed by insights from in-depth qualitative research on customer experience with conversational AI.

In the research I have performed on conversational AI, people tell me that they neither want nor expect an AI to be human-like. In spite of what they say, in practice I observe people applying the same fundamentals of linguistic interaction with conversational AIs as they do with people. Interactions with conversational AI are, thus far, designed to emulate (or simulate) human-to-human conversation and therefore trigger people to apply fundamental principles of communication. If and when an AI does not behave in accordance with these principles or ignores them entirely, it leads to confusion and frustration for its conversational partner. In this article, I am going to unpack two fundamental principles of communication and their applications to chatbots:

  • Forming a Shared Symbolic Cloud — How we can successfully communicate about things and build shared understandings.
  • Maxims of Cooperative Communication — How we manipulate what we say and how we say things to communicate meaning.

Insights in this article draw from my training in cultural and linguistic anthropology and qualitative research I performed at LivePerson on people’s interactions with agents and chatbots in customer service. After reading this article, you should have a better idea about the complexity of communication and how it comes to bear on people’s expectations and frustrations when interacting with conversational AI.

1. Forming a Shared Symbolic Cloud — How we can successfully communicate about things and build shared understandings.

In communication, we build a kind of Shared Symbolic Cloud, if you will, that conversational participants contribute to and draw from. This cloud comprises subjects, objects, temporal markers, spatial markers, referenceable symbolic systems, and more. We can engage in conversation in the first place because there is enough pre-existing overlap in the language spoken and, often, in experiences, learning, and sociocultural underpinnings.

While conversational participants engage in the mutual building of this symbolic cloud, they nonetheless have a unique set of interpretations and understandings of the conversation at hand, because they also have their own Individual Cloud through which they filter and process information. We have different understandings and meanings attached to words born from slight to dramatic differences in our sociolinguistic systems and experiences.

What is your mental image of a “chair?” Is it a La-Z-boy or an armchair? Mine is a wooden chair. You probably imagine a different “wooden chair” than I do. I see the wooden chairs my parents had in the house when I was growing up. There is no way that this is also your mental image of a chair, let alone a wooden chair. Regardless, I can successfully communicate with you about chairs or the need to buy wooden chairs for the dining room table. What this specifically means for you, your mental image of a wooden chair for the dining room, will not 100% map to my mental image, but it will still overlap enough that we can communicate.

Communication, then, is like a game of telephone. The message is communicated and received by others with a degree of fidelity to the original and intended meaning. Each conversational participant takes away something different from a conversation because our Individual Clouds differ. This is why the supporting, nonverbal infrastructure afforded by face-to-face communication is so valuable: body language, vocal intonation, other vocal cues such as sighs, and even visual aids like specific dining room chairs to point to and compare. All of this comes together to form that Shared Symbolic Cloud of communication. Participating members of a conversation have access to, understanding of, and can contribute to this symbolic cloud of linguistic interaction.

In sum, the Shared Symbolic Cloud of communication is a composite of the participants’ Individual Clouds, and Individual Clouds are themselves composites of societal clouds: the macro levels of language, culture, and social norms, and the more micro levels of personal experience, sociolinguistics, niche culture, and so on. All of this comes together to enable communication in the first place, and then communication about specific things.

Knowledge cultures — mind the gap

Building a Shared Symbolic Cloud becomes more difficult when we throw things like medicine, physics, philosophy, heating and ducting installation, ballet, or the university admission process into the picture. These are known as “knowledge cultures.” The most distinctive knowledge cultures involve specializations that require advanced training or education, and there is usually identifiable jargon (common examples being legalese or technobabble). The term also covers less obvious things, like a business. Think about any business or company you have worked in: there is an internal work culture, business goals, best practices, brand image, processes, systems, departments or divisions, ranks, and so on. You have likely experienced the confusion of engaging with an unfamiliar knowledge culture at multiple points in your life. Maybe you started a new job, and people around you were using acronyms or software you were unfamiliar with. Much of a company’s New Hire Orientation is about helping new employees learn and implement knowledge culture tenets like corporate goals or principles.

According to Pokémon.com, there are 809 official Pokémon.

Many hobbies are also deeply entrenched in knowledge cultures. I can recall not too long ago trying to get into the then newly released Pokémon Go. I quickly found myself overwhelmed by the sheer variety of Pokémon, Pokémon classification, abilities, stats, and care/evolution. Meanwhile, my partner, who had grown up playing Pokémon, was making strategic decisions about which Pokémon to catch, develop, and evolve.

Another and more frequent exposure to foreign knowledge cultures occurs when you call customer service with a question or issue. You frame the problem or question using your Individual Cloud of experience and knowledge — your point of view. You have very little understanding of how the company talks and thinks about that problem or question within its knowledge culture. It can be frustrating to engage in a conversation with such a company because there is little correspondence between how you are thinking and talking about the issue and how the business is. It is a failure to build the Shared Symbolic Cloud wherein communication takes place: there is a large gap between your Individual Cloud and the cloud the customer service agent is mobilizing, which is heavily influenced by the knowledge culture they work in. It is more difficult to talk about the same thing and build shared understanding when you have different meanings underpinning similar words and concepts.

Diaper debacle — black-boxed business process and practices

I ordered diapers on Amazon that were supposed to arrive in two days. Five days later, I had yet to receive them and noticed that the delivery date had been pushed out another two weeks! I got in touch with Amazon’s customer service through the in-app chat. They informed me that the diapers were sold through a third-party seller, and Amazon could not do anything to help me. I needed to send the vendor a message to cancel or refund the order. This was confusing as, when I made the initial purchase, I could find no indication that this was a third-party vendor. I sent a message to the third-party vendor. There was no response or refund. The next delivery window elapsed, and the delivery date was pushed out even further. I got back in touch with Amazon to complain. They again informed me there was nothing they could do at this point except message the vendor and wait. If the product did not arrive by the specific date for delivery, then they could compensate me. I never heard from the seller. I never got the diapers. Amazon issued a refund. I presume the seller is still selling diapers. I still can’t tell if I am ordering through them or not when I go to purchase diapers.

In the above example, there is a gap between how I understand the Amazon Marketplace operates and how the customer service representatives understand it. From my point of view, everything on the Amazon Marketplace is Amazon’s. I do not have visibility into what is being sold by a third party or into the rules and regulations behind cancellations, returns, refunds, or complaints. I experienced the promise of two-day delivery being broken, and the diapers failed to be delivered. For me, this was the fault of Amazon and not some then-invisible third-party seller. Already frustrated, it was even more frustrating to have the customer service representative tell me nothing could be done because of rules that seemingly were magicked into existence just to annoy me. Had I known at the time of my purchase that the seller was a third-party vendor (and maybe seen the seller’s star ratings and not just the product’s), and had I known the rules surrounding cancellation and refunds, I likely would have gone about my purchase differently. This is a clear example of insider/outsider knowledge of the knowledge culture the business is operating under. The onus is on Amazon to be transparent about this for improved customer relations; it should not be on me to learn through some agent telling me, basically, “yeah, sorry; not our problem!”

Customers do not have access or exposure to a business’ knowledge culture: the company’s way of thinking, saying, and doing things is black-boxed. The customer service agent or sales rep, however, is in a unique position. They have an intimate understanding of the company’s knowledge culture and can empathize with the customer and their point of view. Because customer service representatives are in this privileged position of dual understanding, a good customer service representative will go the extra mile to meet the customer where they are and build the bulk of the Shared Symbolic Cloud to enable effective communication.

Image of a customer trying to make sense of a business’ knowledge culture with access only to a small portion of the whole.

The Amazon customer service representatives I spoke with did not successfully empathize with me. They did not realize that I, in my Individual Cloud, did not have the knowledge or access they have. If they had stepped into my shoes, they would have gone the extra mile to demonstrate to me how I can find out if a product is being sold by a third-party seller on the Amazon marketplace. They could have informed me all in one sitting about the rules regarding cancellations and refunds for goods sold by third parties, rather than doling out these policies slowly over the course of a month with different agents. They also could have followed up with the seller and perhaps notified me that they are going to be putting the seller on probation or removing them from the Marketplace (I had done some digging and found this particular seller had failed to fulfill orders for a number of people). They did not do this. Instead, I had to be the angry and confused customer. I would have much preferred to be informed, teaming up with Amazon’s customer service representative to resolve my situation and monitoring it over time. I just needed them to share the necessary knowledge with me so we could build that bridge of mutual understanding...

Starting from the customer’s point of view

Based on qualitative research I conducted at LivePerson with people of various ages, genders, educational backgrounds, incomes, occupations, and localities, we know that a positive experience with customer service includes empathy and personalized care. These are the actual terms used by the majority of study participants, and all mentioned the ideas in some form or other. Empathy was characterized by study participants as the customer service representative acknowledging what the customer is experiencing and how it is impacting them. This amounts to feeling heard, taken seriously, and given individualized attention in light of the specifics of one’s situation. This latter aspect dovetails with study participants’ conception of “personalized care,” discussed as the customer service representative working to identify the specifics of what is going on and providing tailored solutions given those particulars.

What empathy and personalized care have in common here is the feeling of having successfully connected through communication with the customer service representative. That is at the core of what makes for a positive interaction with customer service. These research insights demonstrate that customers value it when agents go the extra mile to meet them where they are, starting from their Individual Cloud of experience and understanding, and working from there to build out a Shared Symbolic Cloud that they, the customer, can understand and see as applicable to their situation. This amounts to recognizing the gap between the internal knowledge culture a business possesses and the knowledge and experience of the customer, and starting from the customer’s point of view (empathy) to find a resolution that satisfies their situation and the business (personalized care).

Building chatbots that start from the customer’s point of view

As discussed, customer service needs to go further than a typical conversation partner to translate the internal world of the business (its knowledge culture) into something easily digestible for the customer and the Individual Cloud of experiences and understanding they are operating within. Customers cannot do this work because the internal logic and workings of the company are black-boxed and inaccessible to them. In customer service interactions, therefore, it needs to be the agent building this bridge, the Shared Symbolic Cloud, and providing pertinent information at the necessary junctures to help the customer join in and engage successfully. The same responsibility applies to a business’ digital customer service agent, the chatbot. It too must work from the customer’s point of view, their Individual Cloud, to build empathy.

Tell-tale signs that your chatbot is not bridging the gap are as follows (insights derived from research I conducted at LivePerson):

A. Customers are struggling with how to word things to get the chatbot to understand them.

This is a struggle to translate their problem or query, as it is understood and experienced in their Individual Cloud, into the knowledge culture of the company.

B. When the chatbot presents selection options, the customer cannot figure out which category to choose.

This is also a translation issue, but one more directly related to the organizational schema that a business’ knowledge culture might be implementing. It is a question of how the company is classifying or categorizing a given product or topic. It is a similar experience to walking up and down every aisle of a grocery store trying to find where they put the dried fruit — is it next to the fresh fruit, nuts, spices, cereals, or canned goods? If you can’t find it after your first try (maybe two if you aren’t in a hurry or don’t have kids), of course you are going to ask someone who works there rather than go through the whole store.

Building conversational AI experiences for customer service that possess the winning qualities of empathy and personalized care is readily achievable with user research. Below are a few things you can start investing in today to help your bot bridge the gap and translate effectively between your internal business knowledge culture and the customer’s point of view.

Solution — how to build chatbots that have empathy and personalized care

As a reminder: for customers, empathy means feeling listened to and understood, and personalized care means having their situation identified as unique, with customer service then working toward a resolution that works for them. Both can be achieved by a chatbot.

1. Start from the user’s point of view — their Individual Cloud

It is important to work backward from the customer. In another article, I talked about this from the perspective of mental models. It also applies to what a bot says and how it says it. You don’t want the bot to be too steeped in jargon or the company’s knowledge culture; it needs to be a proper marriage between the customer’s point of view and the business’. Conduct user research into how people frame problems or issues: What language are they using to talk about things? What is the context in which they are experiencing them? Incorporate the learnings into the language and phrasing of the bot. This will go a long way toward helping customers feel grounded in their interaction with an unfamiliar knowledge culture. You can mix in business jargon or information about your company’s organizational system as teachable moments. In the example below, the bot gently rephrases the customer’s query using the business jargon “digital portal.”

Customer: I have a new credit card, but when I log in to view my activity online, I can’t see the new card there. What is going on?

Bot: I am sorry to hear you are experiencing difficulty accessing your credit card activity on the digital portal. To help you better, would you please take a moment to log in here…

Having the bot restate or rephrase the customer’s intent is additionally a way to build empathy. It demonstrates to the customer that their specific situation was understood — the first step toward receiving personalized care.
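To make the pattern concrete, here is a minimal sketch (in Python) of a bot reply that restates the customer’s situation in their own terms before introducing one piece of business vocabulary. The intent label, templates, and restate_intent function are hypothetical illustrations, not any particular platform’s API.

```python
# Hypothetical mapping from recognized intents to replies that first
# acknowledge the customer's situation, then introduce one business term
# ("digital portal") as a teachable moment.
REPHRASE_TEMPLATES = {
    "card_not_visible_online": (
        "I am sorry to hear you are having trouble seeing your new card "
        "when you log in. You can view card activity on our digital portal; "
        "let's take a look at your account there together."
    ),
}

# Fallback keeps the bot in listening mode instead of guessing.
FALLBACK = (
    "I want to make sure I understand. Could you tell me a little more "
    "about what you are trying to do?"
)

def restate_intent(intent: str) -> str:
    """Return a reply that restates the customer's situation before jargon."""
    return REPHRASE_TEMPLATES.get(intent, FALLBACK)

# Assume an upstream NLU model classified the customer's message.
print(restate_intent("card_not_visible_online"))
```

The design point is separation of concerns: whatever model recognizes the intent, the reply itself is authored to mirror customer language first and company language second.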

2. Plain talk doesn’t just apply to words

The design basics of user experience on the web have many parallels with conversational AI. To create satisfying customer experiences, it is imperative to design categories, information architectures, logic hierarchies, and more from the user’s point of view. Again, just as with user experience on the web, working backward from the customer will make their role in the conversation and the interaction options feel intuitive (that is, resonate with their Individual Cloud). At the risk of sounding like a broken record, and of tooting my own horn, perform user research (such as card sorting, first-click testing, or tree testing) to identify how to label and construct information hierarchies and categories so they resonate with the customer’s understanding. A sketch of how such findings might drive a bot’s menu wording follows.
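As a rough illustration, here is a minimal sketch of keeping a bot’s menu wording separate from internal category names, so that labels validated through card sorting or tree testing can be swapped in without touching backend routing. All the category names and labels below are invented for illustration.

```python
# Internal category names come from the business' knowledge culture;
# the customer-facing labels come from user research. Both sets of
# names here are hypothetical.
INTERNAL_CATEGORIES = ["OMS-Returns", "Logistics-LastMile", "PMT-Disputes"]

# Labels as customers grouped and named them in (hypothetical) card sorts.
CUSTOMER_LABELS = {
    "OMS-Returns": "Return or exchange an item",
    "Logistics-LastMile": "Where is my delivery?",
    "PMT-Disputes": "A charge looks wrong",
}

def quick_replies() -> list:
    """Menu options worded in the customer's vocabulary, not the company's."""
    return [CUSTOMER_LABELS[category] for category in INTERNAL_CATEGORIES]

print(quick_replies())
# ['Return or exchange an item', 'Where is my delivery?', 'A charge looks wrong']
```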

3. Recognize a customer’s issue or need as unique and deliver “personalized” care with bots

Sure, maybe the company gets hundreds, thousands, even millions of customer service hits about the same issue daily. It doesn’t matter. From this one customer’s singular point of view, the issue is unique to them. They don’t want to be told that their issue is commonplace. If they did not go to, or did not find their answer in, the “Frequently Asked Questions” (FAQ) page, being shuttled to the FAQ page reinforces that (a) they are just a number, and (b) the company doesn’t value them and their situation enough to provide personalized care. Unless your bot is specifically an FAQ bot, don’t send a customer to the FAQs. It is OK to pull content for the bot’s response from an FAQ page, but don’t link to the FAQs or indicate that their question is commonplace. Instead, have the bot talk to the customer and frame content (derived from FAQ pages or not) as unique to this individual and their specific situation, as in the sketch below. This will set the bot up to deliver a personalized care experience to the customer.
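Here is a minimal sketch of that approach, assuming a hypothetical FAQ content store and a known customer name and order number: the canned policy text is reused, but the reply leads with the customer’s specific situation rather than pointing them to an FAQ page.

```python
# Hypothetical FAQ content store; in practice this might be a CMS lookup.
FAQ_ANSWERS = {
    "late_delivery": (
        "orders from third-party sellers can be refunded if they do not "
        "arrive by the final promised delivery date"
    ),
}

def personalized_reply(name: str, order_id: str, faq_key: str) -> str:
    """Wrap canned FAQ content in framing specific to this customer."""
    policy = FAQ_ANSWERS[faq_key]
    return (
        f"{name}, I can see that order {order_id} has not arrived yet, and I "
        f"understand how frustrating that is. In your case, {policy}. "
        f"Let me check your order against that date right now."
    )

# Hypothetical customer and order details for illustration.
print(personalized_reply("Ender", "112-8842", "late_delivery"))
```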

2. Maxims of cooperative communication — how we manipulate what we say and how we say things to communicate meaning

In addition to Individual and Shared Symbolic Clouds, what we say and how we say it conveys meaning as well. The British philosopher of language Paul Grice outlined four principles of cooperative communication that we apply unconsciously when we converse with others to drive and derive meaning. The maxims are as follows:

Maxim of Quantity — Make your contribution to a conversation as informative as needed and no more: neither too much information nor too little.

Maxim of Quality — Say only what you know or believe to be true and have sufficient evidence to support.

Maxim of Relation/Relevance — Contribute to the conversation at hand.

Maxim of Manner — Don’t be obscure, be brief, don’t be vague, and organize your contribution.

Gricean maxims operate at the overarching level of the conversation as a whole. To this end, they draw individual utterances into the larger whole of related subjects, objects, spatiotemporal references, and topics. We unconsciously apply these maxims to convey and comprehend meaning, both implicit and explicit. If and when we encounter a violation of one or more of these maxims, the violation and its type serve to communicate significance beyond the surface value. See the following example:

Person A: Did you talk to Michal and Jorge about getting together next Saturday?

Person B: I sent a message.

Here, Person A applies the maxims of cooperative communication to derive meaning from what Person B has stated. They apply the Maxim of Relevance to determine that the “message” must be related to getting together with Michal and Jorge; surely Person B wouldn’t intentionally mislead Person A by talking about some irrelevant message they sent! Person B has additionally flouted the Maxim of Quantity. They could have responded by saying, “I did talk to them. I sent them a message, and they responded to say, ‘yes; next Saturday works.’” However, Person B did not say this, and in violating the Maxim of Quantity, Person B has, in turn, successfully communicated a different set of implicit meanings to Person A.

Person A: Did you talk to Michal and Jorge about getting together next Saturday?

Person B: (EXPLICIT) I sent a message. (IMPLIED) No, I have not talked to them. I messaged them, but they never got back (and I may be a bit irritated by this), so I don’t know if we are getting together next Saturday or not (please don’t ask me again).

Thus, violations of these maxims communicate additional meaning beyond the explicit content of what was uttered. All of this occurs tacitly, often without calculation, as part and parcel of communication.

How do these maxims apply to communication with conversational AI?

Conversational AI is not good at carrying context across utterances. Because of this, the application of Gricean maxims fails, and with it the thread of cooperative communication.

For intent recognition purposes, conversational AI tends to ground conversational context on a turn-by-turn basis or per discrete interaction, rather than across interactions and intents. The longer that context (subjects, objects, topics, and other identifying information) can be retained and mobilized, the easier it will be for the bot and the customer to apply Gricean maxims. Businesses should prioritize developing machine learning and natural language processing that enable AI to integrate transconversational historical data and multithreaded intents in any given interaction with a customer.
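To illustrate what retaining context might mean mechanically, here is a minimal sketch of a running conversation record that survives across turns, so a later under-specified utterance can be grounded in earlier subjects and topics. The data model and the resolve logic are hypothetical illustrations, far simpler than production dialogue state tracking.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    """Running record of what the conversation has been about so far."""
    topics: list = field(default_factory=list)
    entities: dict = field(default_factory=dict)

    def update(self, topic: str, **entities: str) -> None:
        """Record a new turn's topic and any named entities it introduced."""
        self.topics.append(topic)
        self.entities.update(entities)

    def resolve(self, fragment: str) -> str:
        """Ground an under-specified utterance in prior context — a crude
        stand-in for applying the Maxim of Relevance across turns."""
        topic = self.topics[-1] if self.topics else "unknown topic"
        return f"{fragment!r} interpreted as being about: {topic} ({self.entities})"

ctx = ConversationContext()
ctx.update("getting together next Saturday", people="Michal and Jorge")
print(ctx.resolve("I sent a message"))
# 'I sent a message' interpreted as being about: getting together next
# Saturday ({'people': 'Michal and Jorge'})
```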

Below is an example from a well-known conversational AI, Mitsuku, developed by Pandorabots and described by its makers as the “record breaking five-time winner of the Loebner Prize Turing Test” and “the world’s best conversational chatbot.”

Between individual conversational turns (I say something and someone else says something), Mitsuku appears to be in accordance with the principles of cooperative communication. But Gricean maxims apply at the level of the overarching conversation (comprised of multiple conversational turns), not a single turn. Looking at the bigger picture of what is actually being talked about, Mitsuku violates the Maxims of Relevance, Quantity, and Manner. It switches the topic of conversation from the weather to the cost of raincoats, after which it fails to be relevant altogether. It violates the Maxims of Manner and Quantity by talking at length about random things like a “Mousebreaker” clearing its memory.

As Mitsuku’s conversational partner, I felt frustrated and confused because I tried to apply these maxims to understand what Mitsuku was saying, sussing out any implied intentionality related to our larger conversation. For example, I had to think over what “Mousebreaker” might mean. At first I thought maybe “Mousebreaker” was a play on words, referring to a computerized “windbreaker,” but then why would it erase memory? This implied meaning didn’t make sense, and I essentially wasted cognitive power. The topic continued to leapfrog. Even Mitsuku’s possible joke about the cost of my raincoat lands awkwardly when she promptly forgets the conversational thread and then becomes distractingly vague (violating the Maxims of Relevance and Manner). Mitsuku’s repeated violations, devoid of intentionality (and, therefore, meaning), prevent us from actually communicating cooperatively and from building a Shared Symbolic Cloud. Maintaining the larger conversational context across each of our interactions is essential for a truly cooperative and collaborative conversation to occur with conversational AI.

To summarize

If we are going to position AI as conversational, then we need to be more aware of the anatomy of a communicative event.

  • Shared Symbolic Cloud — how we build mutual understanding.
  • Maxims of cooperative communication — how what we say and how we say it convey implicit and explicit meanings related to the larger context of the conversation.

I discussed how building mutual understanding becomes more complicated when a complex knowledge culture is involved. In customer service, it becomes increasingly important to meet customers where they are, in their Individual Clouds, and work backward to build empathy and personalized care. The same need applies to chatbots used for customer service. Research into the customer’s point of view is needed to achieve this, covering not only what they are experiencing and how they communicate but also how they organize information.

With the maxims of cooperative communication, I emphasized the need for conversational AI to maintain conversational context beyond discrete interactions (however a complete interaction is defined). These maxims, which are unconsciously applied by people in communicative events, apply across interactions to index all past subjects, objects, topics, places, people, and so on.

The goal is not to make AI indistinguishable from humans but to make it more conversationally compatible with humans. This is important because people will unconsciously apply the fundamentals of communication when invited to converse with AI.
