“Don’t Make Me Tap!” — An introduction to VUI design
Voice UIs are the hot new topic. Siri, Alexa, Cortana, and OK Google have moved from your computer to your phone, to your watch, and now to nearly every device in the home. The new way to interact with machines is to talk to them, which means the interface no longer lives on a screen but in a conversation. If you are looking for an introduction to designing a Voice User Interface (a.k.a. VUI), then look no further than Ahmed Bouzid and Weiye Ma’s book “Don’t Make Me Tap!”
The authors start by boldly asserting that forcing humans to type into tiny flat devices (a.k.a. your phone) is insulting, demeaning, and simply “not cool.” They go on to say that Natural Language Processing “has arrived” and it’s time for us to start thinking differently about how people interact with their machines. They make the concept all the more tantalizing by presenting an image of a young man, yawning and stretching in the morning, saying, “Turn coffee on.” To which his lampshade responds, “Coffee is now brewing.”
The future is now! And it can look a lot like this…
But lest you think that making a VUI should be straightforward and easy, the authors are quick to correct you:
“A common misconception the novice VUI designer often suffers from is the belief that designing a VUI consists of taking a Graphical User Interface (GUI) and “simplifying it” for use through voice. … What is lost in this conception are the following realities: (1) people can speak a lot faster than they can type, (2) they can listen much more slowly than they can read, and (3) they can talk much more quickly than they can listen. The conclusion is that while designing a VUI may seem, at gut level, to be easier than designing a GUI, the opposite is in fact the case: VUI design is a lot harder than GUI design” (p. 18)
Conversational interactions are inherently different from visual interactions, which means you can’t shoehorn a GUI into a VUI. You always need to design with human abilities and limitations in mind. To help you with this, the authors offer a number of good rules of thumb, such as…
- Avoid long prompts
- Use short menus
- Put important information first
- Allow interruptions
- Offer shortcuts for the user who knows what to do
- Offer to repeat
- Offer help
- Offer summaries
- Use “earcons” (the audio equivalent of an icon)
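To make a few of these rules concrete, here is a toy sketch of a prompt handler that leads with the important information, keeps the menu short, and supports “repeat” and “help” at any point. All function and option names here are hypothetical illustrations, not anything from the book.

```python
# Toy prompt handler applying a few of the guidelines above:
# important information first, short menus, and "repeat"/"help"
# available at any point. All names are illustrative.

def build_prompt(key_info, options, max_options=3):
    """Lead with the key information, then offer a short menu."""
    menu = options[:max_options]  # keep the menu short
    return f"{key_info} You can say: {', '.join(menu)}."

def handle_utterance(utterance, last_prompt):
    """Handle the universal 'repeat' and 'help' commands."""
    u = utterance.strip().lower()
    if u == "repeat":
        return last_prompt  # offer to repeat
    if u == "help":
        return "Say one of the menu options, or say repeat to hear them again."
    return None  # caller handles domain-specific utterances

prompt = build_prompt("Your coffee is ready.",
                      ["reorder", "schedule", "cancel", "settings"])
print(prompt)
```

Note how the fourth option is simply dropped: a GUI could show all four at once, but a listener has to hold the whole menu in working memory.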
What I find missing from this book, and from the field of VUI design in general, are good design patterns. Sure, the guidelines are useful. But they are useful in the same way that UX heuristics are useful: they can help you identify what’s missing in an existing design idea. There is nothing currently available to help you create that initial wireframe. There aren’t any patterns that you can borrow and adapt, and there are no in-depth design examples that you can study. This isn’t a flaw of the book, but rather a symptom of VUI design being a very young field.
The authors do offer suggestions on how to get started, though.
“[T]ake the time to develop a full, detailed interaction flow, exhaustively enumerating all the possible interaction paths, the exact wording of every application prompt, including error recovery and system error prompts, along with the exact language that the user can speak at any point.” (p. 118)
Doesn’t that just sound exhausting? They continue…
“For complex interactions, come up with the structure of the flow and pick the most traversed paths and work on developing those first, in conjunction with Wizard of Oz Testing.” (p. 119)
Based on that description, it sounds like it will take roughly five years to design anything of moderate complexity. That is, until we have design patterns and abstractions that let us build VUIs in a jigsaw kind of way.
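One minimal way to represent such an interaction flow, and to drive a Wizard of Oz session from it, is a table of states, each with its prompt, its accepted utterances, and an error-recovery prompt. This is a hypothetical sketch of one possible notation, not the authors’ own; the coffee example echoes the lampshade scene from earlier.

```python
# Hypothetical sketch: an interaction flow as a table of states.
# Each state carries the exact prompt wording, the utterances the
# user can speak at that point, and an error-recovery prompt, as
# the authors recommend enumerating. Not the book's notation.

FLOW = {
    "start": {
        "prompt": "Good morning. What would you like to do?",
        "accepts": {"turn coffee on": "brewing", "weather": "weather"},
        "error": "Sorry, I didn't catch that. You can say 'turn coffee on' or 'weather'.",
    },
    "brewing": {"prompt": "Coffee is now brewing.", "accepts": {}, "error": ""},
    "weather": {"prompt": "It's sunny out.", "accepts": {}, "error": ""},
}

def step(state, utterance):
    """Advance one user turn: return (next_state, system_prompt)."""
    node = FLOW[state]
    nxt = node["accepts"].get(utterance.strip().lower())
    if nxt is None:
        return state, node["error"]  # error recovery: stay put and reprompt
    return nxt, FLOW[nxt]["prompt"]

print(step("start", "Turn coffee on"))
```

In a Wizard of Oz test, a human “wizard” would read out the prompt that `step` returns, so you can exercise the most-traversed paths before any speech recognition is built.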
Shameless plug for TinCan.ai
While I can’t provide you with VUI design patterns just yet, I can provide you with the first VUI wireframing tool: TinCan.ai. At Conversant Labs we’ve been working on ways to shorten the VUI design and development cycle, and TinCan is the result. With TinCan you can mock up VUI experiences and test them with users. Sign up for the beta and tell me what you think.
Have you tried a Voice UI before?
What did you like?
What could be improved?