On (not so) Natural Language-powered Interfaces

Are you the type of person to say hello to bots?

Ulysse Bottello
Design Odysseum
3 min read · Dec 6, 2019


Two types of users interact with conversational UIs. Those who say “Hello” and the rest.

I love the “greeting user” persona, not because of their politeness, but because they are more likely to have a delightful experience.

Let me explain.

AI needs vs. Human mental model

I always advise against hiding the fact that the user is talking to an automated program rather than a human.

That’s chatbot design basics, plain common sense. Consider it your contribution to AI ethics, if you want. Then you have another hashtag to put in your Twitter bio.

But even if you mention it clearly at the start of the conversation, few users actually read your copy; they scan it.

So a large share of users end up unconsciously assuming they are interacting with a human being, not a bot, at least as long as the conversation remains consistent.

This instinctive behavior leads to better understanding, and thus a better overall experience.

Why?

AI assistants are powered by Natural Language Understanding (NLU) models, a specific application of AI used, for example, to detect user intent.

Because it’s based on AI, NLU needs text data to perform well. Word choice and word order matter a great deal, since these are the main signals the model uses to determine the intent behind a user’s request.
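To illustrate why richer phrasing helps, here is a toy intent scorer. This is a minimal sketch, nothing like a production NLU model, and the intents and example phrases are invented; but the effect is similar: a lone keyword leaves two intents tied, while a full sentence separates them.

```python
# Toy intent detection: score each intent by word overlap with its
# example phrases. Real NLU models are far more sophisticated, but
# the principle holds: more words give the model more signal.

INTENTS = {
    "refund_request": ["i want a refund for my order", "please refund my purchase"],
    "order_status": ["where is my order", "track my order status"],
}

def score_intents(utterance: str) -> dict:
    """Return a word-overlap score per intent (higher = better match)."""
    words = set(utterance.lower().split())
    scores = {}
    for intent, examples in INTENTS.items():
        vocab = set(" ".join(examples).split())
        scores[intent] = len(words & vocab) / max(len(words), 1)
    return scores

# A bare keyword is ambiguous: "order" matches both intents equally.
print(score_intents("order"))
# A full sentence clearly favors one intent over the other.
print(score_intents("I want a refund for my order"))
```

With the keyword alone, both intents score the same; the full sentence pushes `refund_request` well ahead of `order_status`.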

When we interact with a human, especially a customer service agent, we give a lot of information in a rich form, to get a complete answer and save time. By contrast, when we’re facing a machine, we switch to another mental model, shaped by years of daily Google searches: we speak in keywords.

Given the constraints of AI models and our instinctive way of talking to machines, the result is poor intent detection, and therefore a poor user experience. Simple keywords are often too broad in meaning to pin down a user intent, sometimes even for well-trained NLU models.

My conclusion: when you don’t know, or forget, that you’re talking to a machine, you help the computer and help yourself, getting a faster and more precise answer.

So, how do we deal with this?

Trust debt

This is our enemy. People have tried bots before, and those bots likely missed the mark.

We have to acknowledge the trust debt our users carry. And it will be a long journey to regain that trust, by showing them that they can interact with the assistant the same way they would with a human and still have a great experience.

Onboarding and fallback are great opportunities to educate users on how to use the assistant. Also, consider training a “handover” intent to deflect the popular but low-signal keywords that you know your model will struggle to process.
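One way to sketch that deflection logic (a hypothetical example; the threshold and intent names are invented) is a confidence-threshold fallback: when the model’s best guess scores below a cutoff, hand over instead of answering with a likely-wrong intent.

```python
# Hypothetical fallback routing: if the NLU model's top intent is not
# confident enough, route to a human handover (or educational fallback
# copy) instead of guessing.

CONFIDENCE_THRESHOLD = 0.6  # illustrative; tune per model and use case

def route(nlu_result: dict) -> str:
    """nlu_result maps intent names to confidence scores in [0, 1]."""
    intent, confidence = max(nlu_result.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        return "handover"  # deflect to a human agent or fallback message
    return intent

print(route({"refund_request": 0.92, "order_status": 0.05}))  # refund_request
print(route({"refund_request": 0.35, "order_status": 0.30}))  # handover
```

The threshold is where real user data comes in: logs of misclassified keywords tell you where to set it, and which phrases deserve their own handover intent.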

Seize every opportunity inside and outside the conversational UX, and learn the limits of your AI model from real user interactions.

But please stay transparent about the fact that the user is dialoguing with a bot. We’re still answering for Microsoft’s Tay debacle; we don’t need another one :)


Design at @chatbotfactory, I design conversational assistants and AI-powered products.