Poor Voice Design, Angry People
Understanding the unique risks when designing VUIs
In a world of countless daily digital interactions, we have all experienced the annoyance of a bad user experience. When an errant pop-up obscures the screen, or when a misplaced tap sends you to the wrong page, it’s a pain, but the frustration is usually mild and fleeting.
However, voice-powered experiences are much different, and designers should be aware of this from the very beginning of the creative process.
The big promise behind the power of voice as an interface is that speech is our most natural and intuitive form of communication. We all learn to speak at a young age, which means that over time we develop natural expectations about being understood and about how a conversation should flow.
With a voice user-interface (VUI), we often have those same expectations, which can lead to powerful and intense feelings of frustration that come on quickly when the experience isn’t what we expect it to be.
Anyone who has called a phone IVR system that couldn’t understand what they were saying has felt this. Repeatedly shouting “agent” or “reservations” at American Airlines is rage-inducing.
Thankfully, voice recognition technology is improving rapidly.
Improved Technology, Different Challenges
Conversational voice experiences are now becoming more widespread. This shifts the UX challenge from the technology (can it properly identify the words spoken?) to the design (do users know how and when to respond?).
In the world of visual design, the best practices for crafting a great user experience have been established. When creating a new product or feature for the web or mobile, the best teams use a multi-stage design process.
Wireframing — The flow of each step of the experience is mapped out.
Design — Once the core experience has been mapped out, higher-fidelity designs are created with tools like Photoshop or Sketch.
Prototyping — Designs are then put into a prototyping platform like InVision or Adobe XD, to create an interactive simulation of what will be the final user experience.
Testing and Iteration — Those prototypes are then tested by others within the team and company, and are often put in front of potential users for feedback.
This is all followed by changes to the design, updates to the prototypes, and continued testing, until the team decides to move on to development of that product or feature.
Now Use What Works for VUIs
This same methodology needs to be applied to voice-first user experiences. The flow of the conversation, and the phrases both recognized and responded with, need to be well-planned and thoughtfully designed.
Along the way, the experience also needs to be interacted with, ideally in the actual voice of the platform that users will be on. It is only through those test interactions that designers and UX professionals will find what users are likely to say and where they get tripped up within a voice interface. For example, anyone who has worked with voice will know that the first lesson you learn once you hear your text spoken for the first time is that it’s too long. Always.
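One lightweight way to plan that flow is to sketch it as data before anything is built: which sample phrases map to which intent, what the spoken response is, and what happens when nothing matches. The sketch below is a minimal, hypothetical illustration in Python; the intent names, phrases, and responses are invented, and a real platform would use its own interaction model rather than simple substring matching.

```python
# A hypothetical sketch of a voice flow as data: each intent lists the
# sample phrases it should recognize and the response it speaks back.
FLOW = {
    "BookFlight": {
        "utterances": ["book a flight", "i need a flight", "book me a flight"],
        "response": "Sure. Where would you like to fly to?",
    },
    "TalkToAgent": {
        "utterances": ["agent", "representative", "talk to a person"],
        "response": "Connecting you to an agent now.",
    },
}

# A planned fallback that re-prompts with examples, instead of the
# dead end that makes phone trees so frustrating.
FALLBACK = "Sorry, I didn't catch that. You can say 'book a flight' or 'agent'."


def respond(spoken_text: str) -> str:
    """Return the scripted response for whatever the user said."""
    heard = spoken_text.lower().strip()
    for intent in FLOW.values():
        if any(phrase in heard for phrase in intent["utterances"]):
            return intent["response"]
    return FALLBACK
```

Writing the flow down this way makes it easy to read every response aloud during testing, and it forces the team to decide up front what the fallback path should sound like.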
The Right Tool, Just for Voice
This is what we’re helping teams do at Sayspring. Our design platform lets designers and UX professionals plan a voice-powered user experience and select the actual speech used throughout. Because you can do this without needing to code or deploy anything, teams can plan flows, create prototypes, and make changes rapidly to get things right from the start. If you’re involved with voice projects, or plan to be, create a free account and try it out.
Be Helpful, Not Frustrating
Users being frustrated with a poorly designed product is nothing new. But people often get angry when they speak to a product and are not understood. With the high expectations that voice brings, and such a severe reaction when things go wrong, make sure to properly design your voice-first experiences before you unleash them on an easy-to-annoy world.