Humans are really good at understanding, moving around, and solving problems in three-dimensional environments. We do so effortlessly, from an early age, whereas any form of logical problem-solving imposes a much higher cognitive load and must be learned with great effort and frustration. This is just one way of stating Moravec’s paradox.
One great promise of VR is that it will allow us to unlock our amazing spatiotemporal problem-solving capabilities within immersive environments that we create from the ground up — and therefore allow us to find solutions easily, even in domains that have presented great challenges in the past. But it’s not obvious how to create these environments, and doing so will require articulating and mastering some new design principles.
Neither skeuomorphic nor flat design approaches will succeed if ported to VR.
Designers argue about a lot of things that the rest of us never notice consciously, but the debate over skeuomorphic and flat approaches definitely crossed into the wider world. Just about everyone knows that iOS 7 and Windows Metro look very different from their predecessors. Ultimately, this transition was enabled by our collective comfort with digital devices of all shapes and sizes; once they became such deeply integrated and intimate elements of our lives, we no longer needed reference to prior modes of experience. This familiarity on our part is what freed designers to explore and discover the fundamental requirements of good mobile experiences, rather than porting the nostalgic cruft from physical objects.
Neither skeuomorphic nor flat approaches will succeed if ported to VR, and limiting the discussion to this spectrum will hold back the development of VR-native design patterns and principles.
But why are they inadequate?
The problem with the truly flat approach is that removing textures, lighting, and detail makes it much harder for the visual system to quickly and unambiguously interpret an environment. We need these sources of information to understand what is going on around us, and their absence will lead to worse experiences that impose higher cognitive loads.
The issues with skeuomorphism are a little less obvious. After all, many 3D game environments aim to be as rich and photorealistic as possible — and isn’t realism the ultimate form of skeuomorphism? It’s true that high levels of realistic detail are a safe place to start, just as they were in mobile (and desktop before it). If we know that lighting and shadow matter, for example, then it makes perfect sense to start by including them with the highest fidelity possible.
But just as this slavish devotion to real-world detail ultimately proved limiting in other platforms, so it will in VR. Through applied research and good old-fashioned tinkering, we will eventually discover what aspects truly matter for satisfying experiences of various types — and what can be left out.
Cues to the rescue
Instead of the flat-skeuomorphic spectrum, we should be thinking in terms of cues. A cue is a source or channel of information that helps us to correctly interpret the structure of our environment, or an object within it.
Linear perspective is a cue to the structure of the environment and the form of objects within it, as are shadows and textural patterns. Perceptual scientists have identified many others that contribute to our preconscious understanding of the world around us. Importantly, cues from different senses are typically combined, again with no conscious effort; for example, sounds frequently help us to localize and identify objects.
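The combination of cues across senses that perceptual scientists describe is often modeled as reliability-weighted averaging (maximum-likelihood cue integration): each cue's estimate is weighted by its inverse variance, so noisier cues count for less. This is a standard model from the perceptual literature, not something from this article; the sketch below is illustrative only.

```python
# Reliability-weighted cue combination (maximum-likelihood integration),
# a standard model from perceptual science for how independent cues
# about the same quantity are fused preconsciously.

def combine_cues(estimates, variances):
    """Fuse independent cue estimates of the same quantity.

    Each cue is weighted by its inverse variance (its reliability),
    so noisier cues contribute less to the combined estimate.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    # The fused estimate is more reliable than any single cue alone.
    combined_variance = 1.0 / total
    return combined, combined_variance

# Hypothetical example: visual and auditory estimates of an object's
# direction (in degrees). Vision is typically more reliable, so it
# dominates -- but the auditory cue still sharpens the estimate.
direction, var = combine_cues([10.0, 20.0], [1.0, 4.0])
# direction == 12.0, var == 0.8
```

The practical point for designers is the same one the model makes: redundant, concordant cues don't just agree, they actively reduce perceptual uncertainty, while a conflicting cue drags the fused estimate away from the truth.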
This is an area where the brain sciences have much to tell us. There are many answers waiting to be unearthed in research that has already been completed. Unfortunately, few resources currently exist for designers and developers who want to apply this body of knowledge to their work. And some of the most important contributions from neuroscience and psychology will only materialize when scientists incorporate the needs of VR into their experimental questions and designs. VR will take the brain sciences in some very exciting new directions.
A given design does not need to include every possible cue, but the cues that are used must be concordant with one another — they need to work in harmony.
A minimal design in VR will be different from a minimal web or industrial design. It will incorporate the minimum set of cues that fully communicates the key aspects of the environment. Take one of the cues out, and the user will be less well-situated and effective. At this time, I think it’s an open question what those minimal cues might be for a given application — we know some that work, such as flashing lights to draw attention, but the makeup of the palette isn’t yet clear.
Just as mobile design patterns and principles took a few years to work themselves out after the birth of the modern smartphone, it will take some iteration to figure out what works best in VR. But there is one crucial difference: while the properties of the human perceptual system have long been understood to be factors in the design of user interfaces and visualizations, these properties are completely central to the creation of effective immersive environments. VR designers and developers who take the mind and brain seriously from the beginning will converge on the best solutions more quickly, and with less wheel-spinning.
Further Reading and Watching
Lots of smart folks are working on design and interface issues in VR. A few that I’ve admired recently:
- Jody Medich on What Would a Truly 3D Operating System Look Like?
- Alex Chu’s recent talk on The Role of Space in VR. Alex is trained as an architect, and I think this skillset will be an important one.
- Josh Carpenter’s SFHTML5 talk on UI/UX for WebVR.
- Oliver Kreylos has been working on these issues for years; take a look at his VRUI Toolkit.
Screenshot at the top is from Colosse, an excellent submission to the Gear VR game jam from earlier this year.