Talking to machines: why it’s hard and how it could change everything

One of the great things about building a chatbot on Facebook Messenger is that you can watch every conversation people have with it. Gone are the days when you had to rely on quantitative data alone to assess the performance of your digital service. With a Messenger chatbot, you can see everything. When you’re a relatively small player like Mia, it’s possible to go through every conversation to understand how the bot is doing and where it needs to improve.

Going through these conversations, one of the biggest takeaways was that your chatbot has to take on the role of a teacher. Chatbots have been around for a little while now, but humans have yet to adapt to this new way of interacting with businesses. Or, put another way, businesses haven’t yet created chatbots intelligent enough to meet human expectations. This isn’t a new problem: humans have long changed more slowly than technology does. For example, the technology behind touch displays has been around since the 1980s, yet it took until 2007, when Apple released the first iPhone, for people to become truly comfortable using it. So, back to Mia. Although chatbot technology has been around for a while now, humans aren’t yet accustomed to using it.

To explain the challenge further, let’s use some examples we’ve seen with Mia. Part of the issue is that a chatbot in Messenger looks a lot like any conversation you have with a friend in Messenger. So immediately, people expect to be able to talk to the chatbot the same way they talk to their friends.

“Can I buy insurance for my macbook?”

“Is damage to my alloys covered on my car insurance?”

A chatbot would need to be extremely sophisticated to deal with these types of queries, and at the moment, most chatbots are not. Getting to that level of sophistication requires a lot of smart people working on it for a long time, and that’s something most teams don’t have access to.

So, when a user understandably tries to talk to your chatbot as if it were human, the majority of chatbot experiences fall apart. The user is greeted by an array of error messages and leaves disappointed, underwhelmed, and probably never to return.

However, there is another way…

If you encourage users to interact with your chatbot in a more predictable way, the experience of using your service can be a valuable one. This involves educating users that short phrases and keywords work better than long sentences. Even better, users can be encouraged to use menus, buttons, and predefined quick replies. Now, it could be said that forcing a user to navigate a chatbot using menus and buttons sounds a lot like using an app, and that it renders the chatbot concept pointless. I don’t think this is necessarily true, but that’s a topic for another post 🙂.
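To make this concrete, the Messenger Send API lets you attach quick replies to any message: tappable buttons that nudge users toward predefined answers instead of free text. Below is a minimal sketch of building such a payload in Python; the function name and the insurance options are illustrative, and the HTTP call and access token handling are omitted.

```python
def build_quick_reply_message(recipient_id, text, options):
    """Build a Messenger Send API payload offering predefined quick replies.

    `options` is a list of (title, payload) pairs: the title is the label
    shown on the button, the payload is the string your bot receives on tap.
    """
    return {
        "recipient": {"id": recipient_id},
        "message": {
            "text": text,
            "quick_replies": [
                {
                    "content_type": "text",
                    "title": title,
                    "payload": payload,
                }
                for title, payload in options
            ],
        },
    }

# Hypothetical usage: steer the user toward known options up front.
msg = build_quick_reply_message(
    "12345",
    "What would you like to insure?",
    [("Phone", "INSURE_PHONE"), ("Laptop", "INSURE_LAPTOP")],
)
```

The point is that the user never has to guess what the bot understands: the buttons *are* the documentation.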

Back to education…

If we look to voice assistants such as Amazon Alexa, we’re seeing a change in how users interact with machines and, consequently, with other humans. Consider manners. Most well-brought-up children have the habit of saying “please” and “thank you”. Use these phrases with most voice assistants, however, and they become confused. As a result, we simplify our language and cut the pleasantries out of our vocabulary when talking to voice assistants, improving the accuracy and efficiency of our communication. This is an example of technology influencing how we interact with it in order to make the relationship work better.

So, your chatbot has a similar role. Unless you have a super-duper NLP-enabled chatbot, you’re going to need to encourage your users to interact with it differently from how they talk to humans. This means keywords, shorter sentences, and buttons wherever possible. Slack’s Slackbot provides a good example of teaching users to interact with it in a particular way. Does this defeat the object of having a chatbot? Maybe. Will it make your slightly dumb chatbot more effective? For sure.
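The “slightly dumb chatbot” approach can be as simple as matching keywords against known intents and, crucially, replying to misses with a message that teaches rather than errors out. Here is a toy sketch; the intent names and keyword lists are made up for illustration, not drawn from Mia.

```python
import re

# Hypothetical intents, each with a set of trigger keywords.
INTENTS = {
    "phone insurance": {"phone", "mobile", "iphone"},
    "laptop insurance": {"laptop", "macbook", "computer"},
}

def match_intent(user_text):
    """Return the first intent whose keywords overlap the user's words."""
    words = set(re.findall(r"[a-z']+", user_text.lower()))
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return None

def reply(user_text):
    intent = match_intent(user_text)
    if intent:
        return f"Sure, let's talk about {intent}."
    # On a miss, teach the user how to talk to the bot
    # instead of showing a bare error message.
    return "Try a keyword like 'phone' or 'laptop', or tap a button below."
```

Note that even the long, human-style query “Can I buy insurance for my MacBook?” happens to hit a keyword here, but the fallback message is doing the real work: it turns every failure into a lesson in how to use the bot.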

And finally, a nod to the unintended consequences of the way we interact with machines changing the way we interact with humans. Plenty of people are talking about how the relationship between kids and their favourite voice assistants is affecting their communication styles. If kids are discouraged from saying “please” and “thank you” to Alexa, does that mean they’re less likely to use those pleasantries with their parents? If kids get used to bossing Google Home around and having it keep up with their every request, will they treat their friends the same way? As kids become accustomed to brief, functional language when talking to Siri, will they struggle to hold a deep conversation with their grandparents? Nobody knows right now, but I sure hope not. I believe the people developing these chatbots and voice assistants have some responsibility to do what they can to prevent it.

So, to conclude:

  • Remember most users aren’t used to interacting with chatbots.
  • Build guides into your experience to educate users on how to interact with your chatbot.
  • Beware of the unintended consequences of changing the way humans interact with machines, and consequently other people.

Oh, and have a play with Mia and let us know what you think.
