Designing for non-visual interfaces — The navigation of sonic space. [Pt.2]
What differentiates conversational UI and voice-based interaction from more 'traditional' screen-based experiences? When considering the nature of this new space (a realm rife with new modes of navigation, representation and organisation), how might we best understand it? Designing digital products or experiences that rely so heavily on abstract mental models, compartmentalisation and subtle inputs or sonic stimuli is challenging; so how do we steer our design process in the right direction?
Navigating Non-visual Space
The first, and arguably most noteworthy, difference between screen-based and sonic experiences is the absence of a visual user interface. In some cases, positive and negative message reinforcement is built into a product's hardware (via coloured LEDs, etc.), but when it comes to more complex journeys, a lack of visual guidance can become problematic.
Surprisingly, though, using a combination of guesswork and instinct, we are entirely capable of finding our way, even without white space, flat icons and back buttons.
Picture this: it's the middle of the night and, predictably, nature calls. Tiptoeing through your home in the dark, half awake, half asleep, you trundle down the corridor with your eyes closed, avoiding all sharp corners and protruding door handles. Instinctively, you know where to go, how to get there and (nine times out of ten) exactly how to hit that target. You're able to do this because, through repeated experience of that space, you have subconsciously drawn a detailed mental model or map that documents every nook and cranny of your home's interior. Conveniently, the very same thing happens when you use a digital product: you learn the journeys, interactions and app/product architecture by heart, and eventually (in most cases, very quickly), through a process of trial and error, they become fully automatic.
Automatic, intuitive, instinctive: these, then, are our keywords. But how do we achieve this holy grail of product experience in such an undefined space of interaction? Furthermore, how are we to design experiences that do not rely solely on trial-and-error learning? How might we provide fail-safe interactions and journeys that feel intuitive and, most importantly, respect the user's tolerance for failure?
Representations of sonic space
From a design process perspective, the benefit of screen-based experiences is that we are able to represent their interactions using wireframes, button states, labels and user flows. But in the case of sonic experiences, how are we to represent a space that is in no way visual? Are the methods and processes we currently use sufficient?
Looking at the ways in which people have mapped more abstract entities like sound, emotion or philosophical ideas may give us some clues as to how we might plan and communicate voice-based interactions.
This illustration is a representation of active and inactive areas of the body when experiencing specific emotions. Though it may not perform very well at a granular level (i.e. not much differentiates 'Envy' from 'Contempt'), it does a great job of making emotional experience visual. By the same token, we need to develop ways of visually representing the new space of interaction birthed by CUI: its nucleus, its logic, and the practical and emotional experiences it supports … but how?
Hierarchy no more.
Another hurdle to contend with in negotiating this new paradigm is understanding its boundaries. In order to do this, we must first concede that classic hierarchical logics (e.g. main menus, back journeys and pre-defined entry points) don't really apply.
Sonic space transcends screen-based frameworks: the user is now capable of jumping from one task or journey to another, changing gear in terms of complexity or focus, at any time. It's as though we've spent our lives operating with the assumption that time is linear and accidentally bumped into a black hole; can we contend with this level of complexity? How are we to shape CUI or voice-based experiences so they resemble something we, as designers and users, can fully understand? And, from a product or brand perspective, how can we shape the experience so that it evokes core aspects of the brand's personality?
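For readers who think in code, the contrast between a hierarchical menu and free-form sonic navigation can be sketched. The snippet below is a toy illustration, not a real voice framework: all class names, intents and responses are hypothetical, and naive keyword matching stands in for genuine intent recognition. The point is structural — there is no menu position to track, and any utterance can reach any task at any time.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceSession:
    """Holds conversational context instead of a fixed position in a menu tree."""
    history: list = field(default_factory=list)

    def handle(self, utterance: str) -> str:
        # Hypothetical intents; a real system would use NLU, not keywords.
        intents = {
            "timer": "Starting a timer.",
            "weather": "Fetching the forecast.",
            "music": "Playing music.",
        }
        for keyword, response in intents.items():
            if keyword in utterance.lower():
                self.history.append(keyword)
                return response
        # No 'main menu' to fall back to -- just a conversational repair.
        return "Sorry, I didn't catch that."

session = VoiceSession()
# The user jumps straight between unrelated tasks: no traversal,
# no predefined entry point, no back journey.
print(session.handle("set a timer for ten minutes"))  # Starting a timer.
print(session.handle("what's the weather like?"))     # Fetching the forecast.
```

Notice that the only state carried between turns is conversational history; the "map" the user navigates lives in their head, not in the product's structure.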
All these questions point to a myriad of obstacles we need to negotiate when developing new methods and processes to support the design and development of CUI experiences. At this point, I don't have the answers, but consider this series of posts an exploration of the topic. Next stop, new methodologies …