Speaking seems a more natural way of interacting with the world than pressing keys or tapping glass screens. Humans have been making noises and listening to each other for thousands of years.
Back in 2009 Luke Wroblewski published the “Mobile First” manifesto. Is 2019 the year when “Voice First” becomes an industry standard? Speech engines have become unquestionably better. As a result, riding a voice-operated elevator is no longer a problem for Scotsmen.
Voice happens to be a very good assistive technology, too. It enables us to stay safe in situations like driving a car or cooking: we can ask for directions or a recipe while our attention and hands are occupied by a higher goal. It gives independence to people with conditions like dyslexia, multiple sclerosis, or reduced dexterity due to age, as Don Norman explained recently in his piece on how design fails older consumers. In all those situations, voice assistance makes interaction safer and easier, or even possible at all, compared with using a keyboard or touchscreen.
Recently we worked on an early-stage voice assistant concept. Our goal was to explore new ways for dyslexic mechanics to follow guidance while testing a vehicle.
We designed a conversation between a mechanic and a voice assistant. The assistant would remember the defects a mechanic identifies during an inspection and let them listen to guidance whenever questions came up. When we tested the voice assistant in a garage, we found that it understood less than we expected. The garage was noisy because the engine of the inspected car must keep running during some parts of the inspection. We learned that we’ll need a headset with a microphone for the next round of usability testing.
Initial feedback from users was positive. They were happy to be able to listen to spoken guidance instead of having to read long texts. Having prototyped and tested this voice assistant, I feel confident that voice UI matters for user experience. However, designing for voice poses different challenges than designing for screens. For example, we can’t scan all the options of a system at once, as we do on a screen; we can only listen to them sequentially, one by one. Such constraints are common to all conversational systems, be they voice assistants or chatbots. Designing with these limitations is difficult, yet very interesting. I’ll definitely keep exploring the theme of voice UI in 2019.
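The sequential-options constraint can be sketched in a few lines. This is a minimal illustration only, with entirely hypothetical names (`offer_options_sequentially`, `accept`): a real assistant would use text-to-speech and speech recognition, but the structure is the same — the system cannot show a menu, so it must read out choices one at a time and wait for a reply after each.

```python
def offer_options_sequentially(options, accept):
    """Read out options one by one; return the first one the user accepts,
    or None if every option is declined."""
    for option in options:
        # In a real assistant this would be a text-to-speech prompt followed
        # by speech recognition; here `accept` stands in for the user's reply.
        if accept(option):
            return option
    return None

# Example: the user declines the first option and accepts the second.
replies = iter([False, True])
chosen = offer_options_sequentially(
    ["Record a defect", "Hear guidance", "Finish inspection"],
    accept=lambda option: next(replies),
)
print(chosen)  # → Hear guidance
```

Note how the cost of a menu grows linearly with its length here, which is exactly why voice interfaces favour short option lists and frequent confirmations.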
Join me and let’s discuss voice first, inclusive design and public services! We have a great line-up of 10 keynote speakers and 50(!) break-out sessions from all around the world in store for you! Topics include service design, user-centered design, inclusive design and related fields.
The International Design in Government conference takes place 18–20 November 2019 in Rotterdam, the Netherlands. More information and registration are available at: https://conference.gebruikercentraal.nl
Are you interested in any particular aspects of voice interfaces? Let me know in the comments.