You already know how to design for voice-based UIs
It doesn’t take a fortune teller to predict the dominance of voice-based user interfaces (VUIs) in the near future.
Personal assistants like Siri and Google Home have already become household fixtures, and the impact of Amazon’s voice AI is perhaps best measured by the plummeting rate of parents naming their babies Alexa. (To be fair, I wouldn’t want to be named after a subservient robot either.)
And while the widespread embrace of VUIs is a definite win for accessibility and inclusivity in user experience design, the prospect of crafting an interface for something that’s, well, interfaceless, can be daunting to even the most experienced UXers.
But when I talked to the project teams at the UX company I work for, some of our design leaders noted that the transition to VUI design may be more seamless than you’d think. In fact, they argue that many of the same principles that govern the design of conventional UIs translate smoothly to this new voice-first frontier.
“The transition from UX design to VUI design may be more seamless than you’d think.”
And the preeminent experts of VUIs seem to agree with them. Cheryl Platz, a designer who’s worked on voice interfaces from Alexa to Cortana, writes that, “many of today’s designers can transfer their existing design skills to voice with some simple reframing and a bit of added subject matter expertise.”
So let’s get reframing! Some of the design techniques outlined here may seem like UX 101, but we’ve included the information you need to carry these principles out of the graphic domain and into the vocal one.
Remember visual feedback
Every UX designer understands the importance of feedback in the digital experiences they craft. The confirmation email, the sound of moving a file to the Trash, the pop-up that says “your form has been submitted” — all of these play an indispensable role by validating the user, informing them that the task they’ve set out to complete has been completed.
In traditional interfaces, this feedback manifests visually or aurally. In the case of VUIs, you might expect all of the feedback to fall in the latter category. But while nearly every VUI vocalizes a confirmation, most incorporate a visual element too.
Take Amazon’s Alexa, one of the most successful VUIs on the market today. Whenever a user addresses Alexa, the Echo device housing the AI illuminates with a blue light. Then, after the user voices their command, Alexa clearly responds with feedback — an “okay,” a “sorry, I can’t do that,” or anything in between — and turns off its luminescence.
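To make that feedback loop concrete, here’s a minimal, hypothetical sketch in Python. The state names, the `light_on`/`light_off` events, and the `play` keyword check are all invented for illustration — this is the shape of the loop described above, not Alexa’s actual implementation:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    LISTENING = auto()

class VoiceAssistant:
    """Hypothetical sketch of the visual + aural feedback loop
    described above. The event strings stand in for real device APIs."""

    def __init__(self):
        self.state = State.IDLE
        self.events = []  # record of feedback emitted, for illustration

    def on_wake_word(self):
        # Visual feedback: the device lights up the moment it's addressed
        self.state = State.LISTENING
        self.events.append("light_on")

    def on_command(self, utterance):
        # Aural feedback: confirm or decline, then close the visual loop
        if self.state is not State.LISTENING:
            return
        if utterance.startswith("play"):
            self.events.append('speak:"Okay"')
        else:
            self.events.append('speak:"Sorry, I can\'t do that"')
        self.events.append("light_off")
        self.state = State.IDLE

assistant = VoiceAssistant()
assistant.on_wake_word()
assistant.on_command("play some jazz")
print(assistant.events)  # light on, spoken confirmation, light off
```

The point isn’t the code — it’s that every user action gets two acknowledgments, one visual and one aural, and the visual one ends when the interaction does.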
“Visual feedback for VUIs shouldn’t be extravagant or loud.”
Similarly, Google Home also provides an LED “dummy light” that activates when users interact with the VUI. Visual feedback for VUIs shouldn’t be extravagant or loud — we’re still working primarily in the aural space here — but it’s a key aspect that UX designers need to remember when crafting these voice-based user interfaces. And luckily for us, visual feedback is a technique no designer is likely to forget.
Add a human touch
There’s an old Internet proverb: When somebody asks you a question they could Google, they’re not just looking for the answer. They’re looking for a conversation. No matter how advanced our tech gets, no matter how many AIs and robots we surround ourselves with in our day-to-day lives, we still need human connection.
That’s why the best VUIs incorporate humanity into their design — not to replace that human connection (hopefully), but to better imitate it. The most popular VUI devices have human names and human voices, but their experience designers have added much deeper layers of personality to them.
There’s no end to the little quirks and gimmicks behind Siri or Alexa. What might be called an Easter egg in a conventional interface becomes something more akin to an idiosyncrasy, a hint of personality behind the VUI.
“No matter how advanced our tech gets, no matter how many AIs and robots we surround ourselves with in our day-to-day lives, we still need human connection.”
And again, most designers are already familiar with injecting personality into their designs — it’s just a matter of relying on that personality a bit more than you would for a traditional interface. One good way to bake that human touch into VUIs is to think of them as an extension of conversational UX, a popular design trend that’s essentially shorthand for chatbots. If you or your UX design agency has crafted a quality chatbot, you already have the skills needed to bring some life to an otherwise cold, robotic AI.
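One common chatbot trick that carries straight over to voice is rotating several on-brand phrasings for the same intent, so the assistant never sounds like it’s reading from a script. A hypothetical sketch — the persona lines here are invented for illustration:

```python
import random

# Hypothetical: one intent (confirming a timer), several replies
# written in the assistant's persona. Rotating them is a small,
# cheap way to make the VUI feel less robotic.
TIMER_CONFIRMATIONS = [
    "Done! Your timer's ticking.",
    "Timer set — I'll give you a shout.",
    "On it. Counting down now.",
]

def confirm_timer(rng=random):
    # Pick a different phrasing each time instead of one canned line
    return rng.choice(TIMER_CONFIRMATIONS)

print(confirm_timer())
```

Every reply does the same job — confirming the timer — but the variation is exactly the kind of idiosyncrasy that reads as personality rather than an Easter egg.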
What about tutorials?
Tutorials are a polarizing topic in the UX community — some designers find them useful, necessary even, while others take their presence as a sign that the architect of the experience has already failed. Wherever you fall on that spectrum, it’s important to understand the equivalent of tutorials in the VUI domain.
There shouldn’t exactly be a “tutorial” for VUIs. The premise behind VUIs is that they’re natural — simply ask or give a command the same way you would a person. We’ve all seen The Jetsons, so we know how a personal AI assistant should work.
The problem is that your standard VUI isn’t quite advanced enough to behave like a human just yet. For example, most VUIs can’t process or understand context, something inherent to every human conversation and interaction we have. In other words, some guidance is still necessary.
Let’s return to the prototypical example: Siri. Rather than explain how to use the VUI in a tutorial, Siri instead gives examples of commands, questions, and prompts you can use her for. It’s a nudge in the right direction, without having to belabor an entire tutorial for something that should be effortless to use. Consider it training wheels — a concept you’ve no doubt designed for in other scenarios.
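In practice, that nudge often surfaces as a fallback: when the VUI can’t match a request to anything it knows, it suggests example prompts instead of walking the user through a manual. A minimal sketch, with invented intents and example phrases:

```python
# Hypothetical sketch of the "example prompts instead of a tutorial"
# nudge. The intents and prompts are invented for illustration.
EXAMPLE_PROMPTS = [
    '"What\'s the weather today?"',
    '"Set a timer for 10 minutes."',
    '"Play my morning playlist."',
]

KNOWN_INTENTS = {"weather", "timer", "music"}

def respond(intent):
    if intent in KNOWN_INTENTS:
        return f"Handling {intent}..."
    # The nudge: show what the VUI *can* do, no tutorial required
    return ("Sorry, I didn't catch that. You can try things like: "
            + " ".join(EXAMPLE_PROMPTS))

print(respond("make me a sandwich"))
```

The failure case becomes the training wheels: the user learns the system’s vocabulary at exactly the moment they need it.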
Keeping it simple, stupid
Yes, we’ve reached the linchpin of all user experience design: KISS. It should come as no surprise that the mantra every UXer should have seared into their brain or tattooed on their knuckles is applicable to VUI design.
“Users should be able to garner a response from VUIs by asking and commanding the same way we would talk to people — we shouldn’t have to learn a new parlance to interface with them.”
In fact, KISS is especially crucial in voice-based user interface design, because it’s a relatively new mainstream technology. Like I said before, we’re only just exploring what these personal assistant AIs can do, so it’s important to keep things basic for the user’s sake and for the device’s.
That means avoiding contextual interactions, and designing explicit, jargon-free commands. Users should be able to garner a response from VUIs by asking and commanding the same way we would talk to people — we shouldn’t have to learn a new parlance to interface with them.
There’s a reason the UX principles that guide the design of websites and apps are the same as the ones for VUIs, and even other new technologies. It’s because user experience design is much, much older than the Internet, or even the first computer. It’s an ancient practice, one essentially defined as the creation of something that is wholly usable.
And that’s why, no matter what new technologies are thrown at us in the future, UX design agencies and their team members will be prepared to create better ways for us to use them.
Originally published at www.invisionapp.com on August 6, 2018.