From couch to chatbot

Elvia Vasconcelos

“Designing for Chatbots — Part II” was an evening of talks curated by @Mobile UX London. In this post I have documented my talk, which draws on my recent experience working with chatbots.


Design process is led by words 🤔

As a UX designer creating a chatbot, you are essentially being asked to convert all the information you collect into conversations. This is because language is the main tool you will be using to create value for your users.

This is the key difference in terms of process, outputs and tools: the process is led by copy and logic, and this changes things for UX.

Context

This talk is the product of a goal I set for myself late last year: to have a practical toolkit for UX designers working with chatbots.

It all started with a secret project that I can’t talk about, but that I did talk about at the @LadiesThatUX London meetup, and which became a post: UX for conversational Interfaces.

When the project finished, I sat down to document my learnings. I realised that we had hacked a lot of conventional UX tools, because the object of the project was conversations rather than a conventional digital product. First of all I wanted to explore what is different about designing conversations, and on a practical level I wanted a toolkit that I and other UX designers could use on future chatbot projects.

High on positive feedback from that experience, I decided to continue the conversation, this time internally with the team of 45+ Experience Designers I work with at @SapientRazorfish_.

The outcome of these three presentation sprints is the talk I delivered at Mobile UX London. It looks at the tools and methods I would use if I were to start a new chatbot project tomorrow.

Caveat — this is based on my learnings and what I have experienced first-hand. Most likely I will get something wrong or say something stupid, but that’s all right because you live and you learn.

Covering:

  1. Before you start
  2. Understanding Artificial Intelligence
  3. Tools
  4. Interaction Design
  5. *Bonus* Challenges for you

1. Before you start

First, setting the mood for yourself and your team before you start.

Sketchnotes of David Contreras and Shey Cobley, Mobile UX London April meetup

In part 1 of this meetup, David Contreras (UX) and Shey Cobley (Critical Mass) discussed their experiences with chatbots.

David talked about designers wanting to solve problems by slapping screens onto everything instead of thinking about why we are using the tech in the first place. He urged us to think about the problems people face and to take it from there. Conversely, he also talked about the things that conversational interfaces are good at, and how we can use that to make the case for whether they are the right solution or not.

Shey Cobley brought it home by telling us that:

  • Success starts with people, not tech.
  • Focus on the places where people are already having conversations.
  • Think hard about what problem you are trying to solve and why a conversational interface is a good fit for it.

What I got from that evening was that our responsibility as designers is to challenge the tech-solutionist approach that we are routinely faced with when we start a new project.

Be Critical

I found this on Twitter the other day and it made me laugh. Again, I think it’s important to look the tech industry straight in the eye and challenge it 👩‍💻 as much as you can.

Sadly this talk is not about that, but I would love to explore this in the future.

No content strategist or copywriter = make a run for it

The last thing to check before you start is your team. This applies to any project, but for conversational interfaces, if there is no content strategist or copywriter, just run.

Ask the difficult questions

Before you start a project, ask yourself these questions, ask your team, ask your client. It is important to create the space to be critical and to build a shared understanding of the project before it starts. Asking these questions might annoy people, but it will also bring clarity. It’s part of our job as designers.


2. Understanding Artificial Intelligence

Building your knowledge around the key concepts and definitions in AI.

This also made me laugh. Looking at definitions very quickly gets tricky, because nobody can agree on an absolute truth.

The term “Artificial Intelligence” was coined at a conference in 1956 by American computer scientist John McCarthy.

In short, AI refers to a machine that mimics human cognitive functions such as learning and problem solving.

NLP

Natural language processing is part of AI and refers to the ability of a computer program to understand human language as it is spoken or written. NLP is also one of the biggest challenges for conversational interfaces because of accents, grammar, slang, different languages, you name it.

And a final definition: a conversational interface has been defined as any UI that mimics chatting with a real human. There are two types of conversational interfaces:

1. Chatbots — respond to written input

2. Voice assistants — respond to speech

On the right I have placed a few of the players in the field, like Watson, OK Google, Slack etc. There are a lot more.

This is my simplified view of what natural language processing is. I have used API.AI’s documentation for my definitions.

There’s a person who speaks, the NLP platform translates human language into something the machine can act on, and the application works as a rules engine that executes actions and gives feedback to the user.

Intent

An intent represents a mapping between what a user says and what action should be taken by your application.

This is how you structure input and output into rules.

Entity

Final definition: entities are reference values that are mapped into natural language.

I think there is some work to be done in making these definitions clearer, but my understanding is that entities are the actionable words in a sentence.

e.g. I am looking for a flight to Copenhagen.

‘Flight’ would be an entity. Synonyms might be trip, travel, etc.

We could potentially add departure and return dates as follow-up intents to ‘Flight’ so that users are prompted to give all the necessary information to proceed.
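
To make this concrete, here is a minimal sketch of how an entity with its synonyms could be written down as data. The shape and names are my own illustration, not API.AI’s actual entity format.

    // A minimal sketch of an entity: a canonical reference value plus the
    // synonyms that should resolve to it. Illustrative only, not API.AI's schema.
    interface EntityEntry {
      value: string;      // canonical reference value, e.g. "flight"
      synonyms: string[]; // natural-language variations that map to it
    }

    interface Entity {
      name: string;
      entries: EntityEntry[];
    }

    const flight: Entity = {
      name: "flight",
      entries: [
        { value: "flight", synonyms: ["flight", "trip", "travel", "plane ticket"] },
      ],
    };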

I created a glossary on Evernote.

Intent structure = wireframe

My goal is to come up with an intent structure that facilitates the design process end to end. This is the closest thing you will have to a wireframe; it is your most important tool. This is how you are going to design the whole experience and, together with the rest of your team, articulate it.

A couple of examples

The above intent structure reflects the documentation available in the API.AI reference. I am sure that once we start using it, it will change, and there is already something rather important missing — ‘state’, which is not mentioned anywhere, but which for us would be the reference point for where the user is in the tree. But for now I’ll use it for a couple of examples:

Scenario 1 — User asks: How many hours of sleep did I get last night?

Under ‘User says’ we’ll add all the natural-language variations and synonyms, e.g. ‘How long did I sleep for?’; ‘How much did I sleep yesterday?’

Under ‘Entity’ we’ll highlight the actionable values, e.g. ‘How many hours of @sleep did I get last night?’

And we can also add synonyms for those, e.g. rest, time in bed, nap, nod, doze, dream.

Action and Response have to do with fulfilment. The first is to look up the hours of sleep for the previous night, and the second is the feedback we’ll give to the user. That feedback can be text, an image, or a video.

Let’s say we wanted to trigger the same action without the user asking the app, simply by them opening it for the first time that day: in the morning, on the first opening of the app between 04:00 and 11:00, trigger the event ‘morning’.

Events is a feature that allows you to invoke intents by an event name instead of a user query. It’s a trigger.
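
Pulling scenario 1 together, here is a rough sketch of that intent written down as data. The field names mirror the structure above (user says, entity, action, response, events); they are illustrative, not a real API.AI payload.

    // Rough sketch of the Scenario 1 intent. Field names are illustrative.
    interface Intent {
      name: string;
      userSays: string[];   // natural-language variations, incl. synonyms
      entities: string[];   // actionable values, referenced with @
      events: string[];     // names that can invoke the intent without a query
      action: string;       // what the application should execute
      responses: string[];  // feedback templates (text, image or video)
    }

    const sleepSummary: Intent = {
      name: "sleep.summary",
      userSays: [
        "How many hours of @sleep did I get last night?",
        "How long did I sleep for?",
        "How much did I sleep yesterday?",
      ],
      entities: ["@sleep"],
      events: ["morning"],  // e.g. first app open between 04:00 and 11:00
      action: "lookUpSleepHours",
      responses: ["You slept {hours} hours last night."],
    };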


3. Tools

In past experiences I found myself hacking conventional UX tools and methods to better suit working with conversations. The reason for this is that the design process is led by the logic map, which feeds into the code, and by the copy structure, which also feeds into the code.

Bot tools

Tools on Evernote

If I were to start a project tomorrow, I would sit down with my team and play with a few bot tools. I would try a few and encourage discussion so that, together, we could come up with the right combination of tools to suit us.

I’ve listed a few examples, from really visual drag-and-drop style interfaces, to templates, to more promising-sounding content-management style ones. I’ve been compiling my list of tools on Evernote.

Decision trees and visual vocabulary

I would normally start mapping journeys on paper and then move to Sketch, because it naturally becomes a place for wireframes.

Using Sketch in this scenario was painful and impractical: the artefact was more of a decision tree than a journey, the tree branched out every five seconds while I was the one maintaining it, and by day one it had become unprintable. So I would now look at other tree-generating tools that have greater flexibility to accommodate these needs.

In parallel with choosing a tree tool, I would look at establishing a visual vocabulary and defining a legend from day one, e.g. differentiating between what the user says and what the bot says, and what is a quick reply vs a text reply…

A visual vocabulary robust enough to let us highlight different branches when planning user-testing sessions, and to support conversations around sprint planning, making the case for the vital bits of the journey that need to be prioritised and built.

Content strategy tools

In the past we had to use Excel to host the copy, because it allowed us to structure things in a way that referenced the logic map and the decision tree. When I showed the spreadsheet to another copywriter in the office he almost cried. He explained that Excel is the most hideous tool to write in.

We need a new tool that allows us to nest things, to cross-reference the tree, and to pass the copy into the JavaScript file easily. Let me know if you know of such a tool.
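
In the meantime, as a thought experiment, even a nested structure keyed by the decision-tree node IDs would get us part of the way. This is purely hypothetical, not an existing tool.

    // Purely hypothetical sketch: copy lives in one structure whose keys match
    // the node IDs used in the decision tree, so the logic map, the copy deck
    // and the JavaScript stay cross-referenced.
    const copy = {
      "onboarding.intro": "Hi! I can tell you how well you slept.",
      "onboarding.prompt": "Try asking: how many hours of sleep did I get last night?",
      "error.first": (lastInput: string) =>
        `I'm sorry, I don't know anything about "${lastInput}".`,
      "error.second": "Sorry, still not getting it. You can rephrase, or type 'help'.",
    } as const;

    // The application (and the tree) reference the same keys:
    console.log(copy["error.first"]("quantum sleep"));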


4. Interaction Design

And finally, the more practical side of designing a chatbot.

Nail the basics first

I always fall back on this diagram by David Armano, as I find it helps in conversations around prioritisation.

I interpret this as a way to break experiences into building blocks and to prioritise them, from the absolutely critical (useful) to more complex dimensions of experience like the social. Although not necessarily linear, it makes it obvious why we should focus on nailing the basics first (useful and usable) before moving on to delivering other aspects of the experience. Most products and services are still there.

Starting the conversation & Empty states

People don’t know what to ask and they feel uncomfortable with open questions like ‘How can I help you?’. Open questions add a lot of cognitive load to interactions and therefore make it really difficult for users to start using a chatbot.

As a first step, identify all the things that are stopping people from starting a conversation. These will be part of the chatbot’s empty states: those situations where there’s no data and no history.

If we synthesise the things that are stopping people from starting a conversation into user needs, we can start designing to meet them. In our case we identified that people wanted to know: 1) what this was, 2) what it did and 3) how they could use it.

So we opened with an introduction and a clear purpose, and followed with how it could help them and how they could use it. We used buttons to make it even faster for people to start engaging with the bot. This tested very positively.
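
As an illustration, the opening can be modelled as a small bundle of messages plus quick-reply buttons. The bot name and button labels below are invented for the example.

    // Sketch of the empty-state opening: introduction, purpose, how to use it,
    // plus buttons so people don't face an open question. Names are made up.
    const welcome = {
      messages: [
        "Hi, I'm SleepBot 👋",                                    // 1) what this is
        "I track your sleep and can answer questions about it.",  // 2) what it does
        "Tap a button below, or just type:",                      // 3) how to use it
      ],
      quickReplies: ["How did I sleep last night?", "Help"],
    };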

If you need some inspiration take a look at http://emptystat.es/

Clear, concise and actionable

Nancy B. Duan “Designing Conversational UI with Information Architecture — Part 4”

I recommend reading @nancyduan’s four articles on the topic. She nailed it by saying that responses need to be clear, concise, and prompt users to take action in meaningful ways.

UI elements

Although it’s very unlikely you will have wireframes, I find it useful to have a cheat sheet to guide conversations with the team. I am referencing InVision’s guide to chatbots below.

Text elements

Source: http://blog.invisionapp.com/guide-to-chatbots/

Media elements

Source: http://blog.invisionapp.com/guide-to-chatbots/ (InVision)

Motion

Motion is incredibly important for keeping the conversation going and making it feel dynamic.

It is something that needs a lot more thought than we had time to give it. Here I am only referencing two very basic things:

The typing indicator, which we naively didn’t have at the beginning and which proved to bring a lot of delight to users.

Autocomplete is something that users expect as well.

Bringing it all together

Reference UI elements

There won’t be wireframes to guide your conversations, so it is useful to start your own library and make it available to your team.

If you have the resources, bring someone in to look at motion — this will very quickly become non-optional.

Generic intents

No matter what kind of chatbot you are creating, there will be generic scenarios that apply to all of them: hello, goodbye, existential questions like ‘Who are you?’ and ‘Where are you?’, responding to provocative questions, handling abuse, etc.

Generic intents refer to common interaction scenarios that are likely to happen with most chatbots, regardless of their purpose.

Here, I’ll focus on three negative generic intents: there will always be plenty of mistakes, people will invariably need help, and they will also find themselves in situations where they want to start again.

Design for errors

Machines don’t understand humans because of all the nuances in language. There will be lots of errors. Design them in.

We came up with a fairly simple, three-level approach:

  • First message: “I’m sorry, I don’t know anything about <last thing they typed>.”
  • The second and third levels vary to acknowledge that this has happened before, and to offer the user different options to manage it, and ultimately to speak to someone if they are getting nowhere.

Although fairly simple, this pattern generated really positive responses because it acknowledged that users were struggling repeatedly and offered them ways to manage that. They felt more comfortable trying again once they knew they could reach out to a live agent at any point.
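
Here is a minimal sketch of that escalation, assuming a simple counter that resets whenever an intent is matched. The thresholds and the reset logic are assumptions; the wording paraphrases the pattern above.

    // Minimal sketch of the three-level fallback: acknowledge repetition and
    // offer a way out. Resetting the counter on a successful match is an assumption.
    let fallbackCount = 0;

    function onFallback(lastInput: string): string {
      fallbackCount += 1;
      if (fallbackCount === 1) {
        return `I'm sorry, I don't know anything about "${lastInput}".`;
      }
      if (fallbackCount === 2) {
        return "I'm still struggling with that. You could try rephrasing, or type 'help'.";
      }
      return "This doesn't seem to be working. Would you like to speak to a live agent?";
    }

    // Any successfully matched intent resets the counter:
    function onIntentMatched(): void {
      fallbackCount = 0;
    }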

Offer help aplenty

There are plenty of situations where users will want help.

e.g. they hit a dead end, something has gone wrong, something didn’t happen as expected.

We addressed this by adding permanent help (i.e. an icon) and by programming the chatbot to respond to things like “I need to speak to someone”.

Offer ways to start again

In the absence of common navigation, we knew we had to create a loop back to the beginning throughout the conversation.

We started with the easiest situation: when users had completed their task, we offered them a way to start again. We then mapped the other situations where users might want to start again and sprinkled that option into the conversation, e.g. idle for x minutes; when the request has been identified.
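
Sketched out, the ‘start again’ loop is just one prompt that can be triggered from several places. The idle threshold stays a parameter because the post doesn’t fix a value, and the helper names are mine.

    // Hypothetical sketch: one "start again" prompt, offered on task completion
    // or after the user has been idle for a while.
    const START_AGAIN = "Would you like to start again?";

    function scheduleIdlePrompt(
      idleMinutes: number,
      send: (text: string) => void,
    ): ReturnType<typeof setTimeout> {
      return setTimeout(() => send(START_AGAIN), idleMinutes * 60 * 1000);
    }

    function onTaskCompleted(send: (text: string) => void): void {
      // The easiest case: the user has finished, so loop back to the beginning.
      send("Done! " + START_AGAIN);
    }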

Generic intents are applicable to all chatbots and can be used to very quickly put together a UX work pipeline.


Wrap up

Ask difficult questions

Intent structure = wireframe

Team tools

Useful and Usable ftw

Empty states

UI Elements

A balanced plan


5. *Bonus* Challenges for you

5.1 User research hacks?

As a baseline I could use this formula for user research:

  • Speak to 5 users every 2 weeks.
  • Understand what you want to test through your hypotheses, and also collect some sort of quantifiable data, which always goes down well with people. And obviously a cat.

Is this enough though?

5.2 Discovery for voice

I did a self-initiated project with an Alexa earlier this year, where I documented the first 11 days of having an Alexa in my house. I had to resist the urge to write down how to call the different skills on Post-it notes and stick them all over the Alexa, because I couldn’t remember them.

There is a clear discovery problem with voice interaction. I have no clue how that will pan out, but I am very curious to hear examples of how it can be better addressed.

5.3 Generative vs passive

The final challenge has to do with designing things that are not so much passive and predefined, but that can scale, change and grow. Maurice Conti has a brilliant TED talk on this, in which he talks about designers giving input such as goals and constraints and leaving the rest to the tool.


Thanks for reading!

www.elviavasconcelos.com

