3 key takeaways from learning how to design for voice

Valeria Querini
May 2

I recently took the VUI (voice user interface) design course offered by CareerFoundry. I did so while working full time, and it took me around two months to obtain the certification. During that time I consumed a lot of learning material, completed many exercises, had Skype calls with my mentor and connected with the rest of the community on Slack. It was an intense and gratifying experience.

As a way to reflect on the newly gained knowledge, I decided to summarize my key takeaways and share them here.

If you are considering learning VUI design, I hope my experience will inspire you and help you understand what you are getting into.


1 - If you have designed experiences before, some things are going to stay the same

As someone who already works as a user experience designer, I was not surprised that the UX process doesn’t really change when you are designing for voice. After all, voice is just another technology to which we can apply the same kind of thinking we excel at as designers.

Humans should always be at the center of everything we create, and VUI design is no exception. It all starts with understanding the context, framing the problem and doing user research.

Creating user personas is a useful way to synthesize the initial user research. You probably already know about them: they help you move into concepting while keeping in mind who you are designing for. In addition to user personas, system personas are practical artefacts for guiding VUI design. They help you understand and communicate how you want the interaction between the user and the system to sound and feel.

After sketching the main flows and writing some sample dialogs (the VUI equivalent of wireframes), it’s time to validate assumptions by putting a prototype of the voice user interface in front of potential users. The prototyping tools (dialog scripts are often written in spreadsheets) and the testing methods (Wizard of Oz, for example, if you want to keep it low-fi) might not be what you expect if you are used to thinking in screens, but ultimately the purpose of these activities and the overall design thinking process do not change.
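To make “sample dialog” a bit more concrete, here is a minimal sketch of how a happy-path script for a hypothetical recipe skill might be captured, whether as rows in a spreadsheet or as a small data structure used to drive a Wizard of Oz session. The skill, the wording and the helper function are my own illustration, not material from the course.

```python
# A minimal sample dialog for a hypothetical "recipe helper" skill,
# written as a list of (speaker, line) turns. In practice each row
# could just as well live in a spreadsheet shared with the test "wizard".
SAMPLE_DIALOG = [
    ("user",   "Alexa, open Recipe Helper."),
    ("system", "Welcome to Recipe Helper. What would you like to cook today?"),
    ("user",   "Something quick with mushrooms."),
    ("system", "I found a 20-minute mushroom risotto. Want to hear the ingredients?"),
    ("user",   "Yes, please."),
]

def print_script(dialog):
    """Print the script so the wizard can read the system lines aloud."""
    for speaker, line in dialog:
        print(f"{speaker.upper():>6}: {line}")

if __name__ == "__main__":
    print_script(SAMPLE_DIALOG)
```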


2 - If you have designed experiences for GUI-based products, some things are going to be different

While working on the course project, something I found challenging was defining the information architecture of my Alexa skill. With no menu bar and no possibility to rely on visual cues, it’s easy to get lost.

Where am I?

What can I do next?

How do I go back?

The same applies to feature discoverability.

What can this application even do?

Although conversational interfaces are becoming increasingly popular, most people still haven’t interacted much with voice user interfaces. They can’t rely on familiar UX patterns. They are not sure how to give a command, ask for information, or react to a system that just doesn’t seem to pick up what they are saying.

When dealing with voice interfaces, it’s extremely important to design a system that gives appropriate feedback, helps the user move forward and forgives mistakes. We don’t always know for sure what the user’s intent was, so it’s essential to dedicate extra design effort to error cases.

People express the same thing in many different ways, with different wording, pauses and pronunciations. Your design should be able to cope with that variety. If it doesn’t, the interaction won’t be successful and won’t feel natural.
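As an illustration of that variability, and of a forgiving error case, here is a minimal, framework-free sketch. The intent names, sample utterances and fallback wording are all hypothetical, and a real Alexa skill would delegate the matching to the platform’s natural-language model rather than to a keyword check like this.

```python
# Hypothetical intents with a few of the many phrasings users might try.
SAMPLE_UTTERANCES = {
    "OrderCoffeeIntent": [
        "i'd like a coffee",
        "get me a coffee please",
        "can i order a cappuccino",
        "one flat white to go",
    ],
    "HelpIntent": [
        "help",
        "what can i do",
        "what can you do",
        "i'm lost",
    ],
}

def match_intent(utterance):
    """Very naive matcher: in a real skill the platform's NLU does this."""
    text = utterance.lower().strip()
    for intent, samples in SAMPLE_UTTERANCES.items():
        if any(sample in text or text in sample for sample in samples):
            return intent
    return None

def respond(utterance):
    intent = match_intent(utterance)
    if intent == "OrderCoffeeIntent":
        return "Sure. What size would you like?"
    if intent == "HelpIntent":
        return "You can order a drink or ask about today's specials. What would you like?"
    # Forgiving fallback: acknowledge, offer options, re-prompt.
    return ("Sorry, I didn't catch that. You can say, for example, "
            "'order a coffee' or ask for help. What would you like to do?")

if __name__ == "__main__":
    for phrase in ["Get me a coffee please", "erm what can you do", "blorp"]:
        print(f"User:   {phrase}")
        print(f"System: {respond(phrase)}\n")
```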

Reducing cognitive load is another design principle that applies to all kinds of user interfaces, but it’s even more important (and tricky) to apply when designing VUIs. An efficient VUI is one that keeps cognitive load to a minimum. Sometimes this means working on the language to make sentences short. Sometimes it means breaking the conversation down into multiple turns, so that to accomplish a task the user gives several (but simple!) commands.
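To show what that trade-off can look like, here is a sketch of a single task (booking a table in a hypothetical restaurant skill) split into short turns, each asking for exactly one piece of information before confirming. The flow, prompts and slot names are my own assumptions, not a prescribed pattern from the course.

```python
# One task ("book a table") split into short turns. Each prompt asks for a
# single piece of information, so no prompt forces the user to hold a long
# sentence like "book a table for four on Friday at seven" in mind.
PROMPTS = [
    ("party_size", "For how many people?"),
    ("day",        "Which day would you like to come?"),
    ("time",       "And at what time?"),
]

def book_table(get_reply):
    """Run the multi-turn flow; `get_reply` stands in for speech recognition."""
    booking = {}
    for slot, prompt in PROMPTS:
        print(f"System: {prompt}")
        reply = get_reply(slot)
        print(f"User:   {reply}")
        booking[slot] = reply
    print(f"System: Great, a table for {booking['party_size']} "
          f"on {booking['day']} at {booking['time']}. Shall I confirm?")

if __name__ == "__main__":
    canned = {"party_size": "four", "day": "Friday", "time": "seven pm"}
    book_table(lambda slot: canned[slot])
```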


3 - VUIs are yet another territory of inclusion/exclusion

Inclusive design is important to me. Every design decision has the potential to include or exclude. While learning about VUI design, I reflected on the relationship between this topic and voice technology.

Voice brings the obvious opportunity to make products and services accessible to people with certain types of disability, such as those who are blind or visually impaired.

Language plays a big role in making an application accessible to those with cognitive disabilities, as well as people who are not fluent in a product’s interface language. Good speech design is clear, understandable and reduces cognitive load by keeping messages short.

As technology mirrors society, gender bias is very visible here: the majority of voice assistants have a female voice by default. Luckily, I discovered projects like Q, a genderless digital voice, which give me hope that if we want to design for inclusion, we can overcome gender bias in AI assistants.

Like all designers, voice user interface designers are in a position to choose which kinds of solutions they want to contribute to society. If you are going to become a voice user interface designer, remember that you have the power to make things right.


If you are just getting started with VUI design and want to connect, leave a comment below.

If you are a voice expert and found something in this article that resonates with you (or not), please also leave a comment :)