How to talk to your machines. Why we have a Conversational Designer.
Eliza here, Conversational Designer and (Voice) Developer at Pixplicity. These days I’ve been talking more and more to our in-house voice assistant at the office and to my Google Home back in my apartment. I can confidently say I have solid experience carrying on a conversation with my machines. But how did this happen? Why am I telling that little blob on the kitchen counter what to put on my grocery list instead of writing it down? The reasons are simple: for one, I’m a super-nerd, so why would I go old school, and second: it’s the future, baby!
Ok Google! Hey Siri, Alexa, Cortana, Hi Bixby! (Ok maybe not so much that one). You hear them more than your neighbor calling for their lost cat. So how did we get here? Every decade or so a new technology comes along that changes the way we interact with our machines.
Our generation hardly remembers that the first computers were operated by plugging the right wires into a giant, room-sized machine. Then mainframes made their first appearance, and later, with a keyboard and mouse, you clicked through the user interface of your 1987 desktop. Then came mobile and the whole user interface went haywire. MOBILE FIRST! Touchscreens and one-finger-controlled interfaces. Everything in three taps or less.
So what’s next?
“Voice is the future.” I bet it’s not the first time you heard that…
With all that focus on the user interface, why is it that we get the user interface of voice so wrong?! 85% of conversational apps fail. 85%! That’s a guesstimate, but let’s face it: your experiences haven’t been the best yet. That’s ok, we’re still learning.
UX Designers, Interaction Designers, UI Designers and…Conversational Designers?
Would you tell your developers to start building something without giving them mockups and flow charts that explicitly map which elements navigate to which screen? Of course not! Have you seen developers trying to “design” something? Oh, all the horror stories out there… 😰
And so, we all have designers: in-house, as freelancers, or through an agency. Interaction designers spearhead the full experience the user has with your tech. UX designers ensure the experience you have with the interface works. Interface designers make sure it’s all pleasing to the eye. But what if you need to talk to it? What happens then?
The lack of a graphical user interface doesn’t make design obsolete; it makes it even more important. The design of a Voice User Interface, VUI for short, is conveyed through, well, voice.
The tone and the choice of words and phrasing are key here. Bear in mind that, unlike with GUIs, you cannot steer the user’s interaction. Without a screen in front of you, testing, identifying edge cases, predicting what the user could say when using the app, and so on, all become much more difficult.
- Let’s say for instance that you build a conversational element for your weather app. You may have trained it to respond to the question “How’s the weather,” but the conversation may break in a poorly designed app if the user instead asks, “What’s the forecast?”
- Instead of mock-ups, as conversational designers we’re concerned with agent persona, user personas, use cases, decision trees and sample dialogues.
- All the replies of your voice app need to respect the basics of conversation, stay true to your agent’s characteristics and your branding principles, and remain concise.
- In addition to those responsibilities, the conversational designer needs to keep track of the context of the conversation, switch contexts when the user navigates down a different path, repair a conversation if it derails, and move the conversation forward by offering alternatives. If that weren’t enough, in good conversational design the most appropriate responses are those designed specifically for the capabilities of the device the app is running on. Oh, and don’t forget personalized responses! 😅
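To make the context-keeping and conversation-repair duties above a little more concrete, here’s a minimal sketch in Python. All names here are hypothetical, and real platforms handle this with far richer state, but the bookkeeping a conversational designer specifies looks roughly like this:

```python
# Hypothetical sketch: track the active context, switch it when the user
# takes a different path, and repair the conversation when input isn't
# understood — offering alternatives instead of repeating "I didn't get that".

class DialogState:
    def __init__(self):
        self.context = None        # e.g. "weather" or "grocery_list"
        self.repair_attempts = 0

    def handle(self, intent):
        if intent is None:
            # Repair: after a failed retry, offer concrete alternatives.
            self.repair_attempts += 1
            if self.repair_attempts >= 2:
                return "You can ask about the weather or your grocery list."
            return "Sorry, could you rephrase that?"
        self.repair_attempts = 0
        if intent != self.context:
            self.context = intent  # the user navigated a different path
        return f"Okay, let's talk about {intent.replace('_', ' ')}."

state = DialogState()
print(state.handle("weather"))   # switches context to weather
print(state.handle(None))        # first repair: ask to rephrase
print(state.handle(None))        # second repair: offer alternatives
```

The point isn’t the code itself but the design decision it encodes: the designer, not the developer, decides how many repair attempts happen before the app offers a way out.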
All these topics just begin to touch on the responsibilities of a conversational designer. And doing it well is an entirely different story.
Just as with any desktop, mobile or web app you can’t start implementing your app’s logic without having a clear vision on how the user should interact with your product, which problem(s) you’re trying to solve for your user, and what the design should be. Here’s a road-map with the steps for a successful mobile app:
The exact same steps are followed when developing a conversational app (plus a few more). The implementation of each step, however, is completely different between GUI and VUI design. What they both have in common, though, is that the foundations lie in the first three steps.
Talking to a person vs talking to a machine.
Imagine the following: I’m showing you one of the new projects we’re working on and tell you to “tap on the second button”, “click the pink button”, or “the button in the right corner”. All these instructions direct you to the same button. With a GUI in front of you, you can figure it out. But what happens when these instructions need to be followed by a machine instead of a person? We need to somehow inform the machine that by “pink button” and “the one at the bottom-right corner” we mean the same thing.
So variety in the way we express ourselves is a major challenge for VUI. Not that much of an issue if the user interacts directly via a graphic interface.
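This many-phrasings-to-one-meaning mapping is exactly what intent matching does. Real platforms like Dialogflow or the Alexa Skills Kit use trained NLU models for this; the hypothetical sketch below uses simple phrase matching just to illustrate the idea from the weather example earlier:

```python
# Hypothetical sketch: many ways of phrasing a request map to one intent.
# The phrase lists play the role of the "training phrases" a conversational
# designer writes for each intent.

TRAINING_PHRASES = {
    "get_weather": ["how's the weather", "what's the forecast",
                    "will it rain", "do i need an umbrella"],
    "add_grocery": ["put milk on my list", "add milk to the grocery list"],
}

def match_intent(utterance):
    text = utterance.lower().strip("?!. ")
    for intent, phrases in TRAINING_PHRASES.items():
        if any(p in text for p in phrases):
            return intent
    return None  # unmatched: this is where conversation repair kicks in

print(match_intent("What's the forecast?"))  # matches get_weather
print(match_intent("Tell me a joke"))        # no match: None
```

In a poorly designed app, “What’s the forecast?” simply isn’t in the list, and the conversation breaks; the designer’s job is anticipating those variations.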
Is this the only challenge? Unfortunately not. Since conversation is so inherently human, there are a few principles about conversations that we don’t even realize we know. These principles include:
- turn taking,
- reading between the lines
We take for granted that all these principles must be respected by a good conversational app (and it’s only human to be frustrated when they’re not!).
Let’s take another example to further explore the differences between the interactions with a graphic user interface (GUI) and a VUI and how things can get quite complicated. Let’s look at how Google Trips informs me of my upcoming travels. I’m presented with a list of upcoming trips and when I tap on one it will show me more information concerning that trip. With a GUI I can just tap on the item on the list that I want to see more info about. If I wanted to perform this action by voice, I could say something like “Tell me more about my trip to San Francisco”. Easy.
What happens, though, if I have multiple trips to the same destination? If the interaction goes through a GUI, that’s still not a problem, because I can see exactly which trip I’m interested in. But if I say “Tell me more about my trip to San Francisco”, what do we do? The voice agent should ask me to clarify:
“Hi Eliza, for which dates do you want to know more?” And if the answer isn’t sufficient (e.g. “for November”), it should follow up with more questions.
Those follow-up questions need to carry the context of the conversation: the intent of the user (more information about a trip) and the location (San Francisco). This is just a simple example, but it shows how important your conversational design is, and even more so the tech behind it: capturing the user’s intention and producing smart replies that steer the conversation back onto the right branch of the tree.
This is where so many get it wrong. VUI design is not GUI design. It’s interaction design on steroids. We at Pixplicity specialize in conversational design. If you’d like to learn more about our work, why not start a conversation with us? You may even be answered by a real human! Here’s where it begins: firstname.lastname@example.org