UX Design for VR: The Basics — Part 1

Key considerations to get started with designing in VR

Apeksha Darbari
8 min read · Jan 6, 2017

Introduction

When you search “UX Design for VR” online today, you can easily find over a dozen relevant articles. Some of them may contradict each other, but most share common design guidelines that we have come to agree on by now. A year ago, when I was working on my first VR project, the Google search results looked quite different. Completely new to this fascinating field, I found a ton of technical support but barely a couple of articles on design. If you’re interested in how far we’ve come, try comparing today’s search results with those from a year ago. You’ll notice that we’re finally beginning to get into the finer subtleties of what to focus on in order to create a much-needed new paradigm of design, and we’re still barely scratching the surface.

It took years of adapting real-world objects to screens for humans to start understanding, using and eventually creating 2D interfaces. Folders, calendars, clocks and even the ‘My Computer’ icon, denoting an actual computer, are examples of how users related the functions of these symbols to their meanings in real life, an approach known as ‘skeuomorphism’. After decades of perfecting 2D interfaces, we now have set standards for creating 2D UIs. Unfortunately for designers, these standards are so well wired into our brains that it is hard to let go of them even when we are not designing 2D interfaces.

The Challenge

In addition to the many technical, ergonomic and immersion challenges we face with this emerging technology, the primary challenge remains to create a whole new design methodology for VR, as we did for 2D interfaces many years ago. Merely adapting 2D UI elements to 3D environments is not the solution. The wheel may be one of mankind’s greatest inventions for transportation, but if the Wright brothers had kept trying to make a bicycle or a car fly, they might never have built an airplane. To design for VR, we need to think in 3D space from a fresh perspective and let go of the tools we’ve been using to convey information in 2D, such as text, buttons and pop-ups. We no longer have the limitations of a 2D screen, and we need to think skeuomorphically now more than ever: our real world and our virtual world are both 3D spaces, so it only makes sense that skeuomorphic design fits better here.

This is not, by any means, an easy task. It requires consistent experimentation, constant prototyping and repeated user testing to start finding our way toward what could become a standard for VR design in the near future.

What I’ve learnt

In the last year, I have worked on quite a few VR projects and co-taught a workshop abroad, helping students build VR projects. I know for a fact that I have a long way to go before I can claim any authority in VR design. We’re all experimenting, failing, learning and getting closer to becoming better designers and developers every day.

That said, I would like to share some best practices, recommendations and considerations for what I believe makes a good VR experience. This is a compilation of what I’ve learnt from my projects, from research and critical analyses of other VR experiences, and from working with my students. It is aimed at giving direction to people who want to get started with designing and developing for VR. It is by no means an exhaustive or absolute list, merely lessons and observations about what I believe works. If you’d like to add something or disagree with anything I’ve said, I’d be glad to hear from you. We have a long way to go, and learning from each other may be the best way to find solutions. Let’s get down to it then!

This is the first in a series of articles covering key considerations for getting started with designing for VR.

Designing Interaction

How the user interacts with the experience is a critical aspect of VR design, as it is of any other kind of design. Some key factors to take into account here are:

Input Schemes

Different experiences provide different input schemes (often varying across VR devices) for interaction. Some examples are:

  • Gaze: Some experiences (especially on devices without controllers) use gaze as an input mechanism. Most Gear VR experiences, as well as some Oculus Rift ones, use gaze for interaction. The mechanism allows the player to look at objects and trigger them by staring at them for a few seconds, by tapping a button, or both.

Best Practices: When using gaze as an input mechanism, a few practices should be adopted.

Reticle: Reticles help tell users exactly what they are looking at. They are generally created using opaque circles or rings placed in the user’s direct line of vision. A good reticle uses colours that contrast against the background for visibility, is sized small enough not to block the user’s vision, and responds to depth, i.e. it reads as bigger on closer objects and smaller on objects far away. A minimal sketch of one way to implement this follows below.

Reticle contrasting to the environment and reactive to depth.
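To make this concrete, here is a minimal sketch (in TypeScript, with made-up names and constants; no particular SDK is assumed) of one common way to handle reticle depth: draw the reticle at the distance of whatever the gaze ray hits and scale it with that distance, so it stays readable while sitting at the same stereo depth as the object.

```typescript
// Minimal sketch: place the reticle at the depth of whatever the gaze ray hits
// and scale it with that distance. Vec3 and both constants are illustrative.

type Vec3 = { x: number; y: number; z: number };

const DEFAULT_DEPTH = 10;    // metres: where the reticle floats when nothing is hit
const SIZE_PER_METRE = 0.02; // reticle radius in world units per metre of depth

function updateReticle(
  head: Vec3,
  gazeDir: Vec3,              // assumed to be normalized
  hitDistance: number | null  // raycast hit distance, or null if nothing was hit
): { position: Vec3; scale: number } {
  const depth = hitDistance ?? DEFAULT_DEPTH;
  return {
    // Drawing the reticle at the object's depth keeps both eyes converged on it,
    // avoiding the double vision a fixed-depth reticle can cause.
    position: {
      x: head.x + gazeDir.x * depth,
      y: head.y + gazeDir.y * depth,
      z: head.z + gazeDir.z * depth,
    },
    // Scaling linearly with depth keeps the on-screen size roughly constant,
    // while stereo depth still tells the user how near or far the target is.
    scale: depth * SIZE_PER_METRE,
  };
}
```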

Timer: If you’re not using tap (or are supporting both), the time taken to register a gaze should be optimized. Too long, and the user may get bored and strained staring at an object; too short, and the user may trigger objects unintentionally. The optimal value I’ve found is approximately 2 seconds, but this should still be user-tested for each experience. There should also be clear feedback about how long the user has to ‘gaze’ at something to trigger it. This is typically done with a buffer ring around the reticle that fills up as the timer runs.
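Here is a minimal sketch of such a dwell timer, assuming a per-frame update loop; the class and callback names are illustrative rather than taken from any specific framework.

```typescript
// Minimal sketch of a gaze-to-trigger dwell timer with ring-fill feedback.
// GAZE_DWELL_SECONDS and the callback names are illustrative assumptions.

const GAZE_DWELL_SECONDS = 2.0; // ~2s worked for me; user-test per experience

class GazeTimer {
  private elapsed = 0;
  private target: string | null = null;

  // Call once per frame with the id of the currently gazed-at object (or null).
  update(gazedTargetId: string | null, deltaSeconds: number): void {
    if (gazedTargetId !== this.target) {
      // Looking away (or at a new object) resets the dwell timer.
      this.target = gazedTargetId;
      this.elapsed = 0;
      return;
    }
    if (gazedTargetId === null) return;

    this.elapsed += deltaSeconds;
    // The fill fraction drives the buffer ring around the reticle (0 = empty, 1 = full).
    this.onRingFill(Math.min(this.elapsed / GAZE_DWELL_SECONDS, 1));

    if (this.elapsed >= GAZE_DWELL_SECONDS) {
      this.onTrigger(gazedTargetId);
      this.elapsed = 0; // restart so we don't retrigger every frame
    }
  }

  onRingFill(fraction: number): void { /* update the ring geometry/shader here */ }
  onTrigger(targetId: string): void { /* fire the gazed object's action here */ }
}
```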

Field of view: Ensure that the most commonly triggered items sit within the user’s field of view so they can be triggered conveniently. Position them comfortably so users don’t have to strain their necks looking at them for long periods.
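As a rough illustration, an object’s placement can be sanity-checked with a simple cone test: measure the angle between the user’s forward direction and the direction to the object. Everything here, the 30° threshold especially, is an assumption to be tuned with user testing.

```typescript
// Minimal sketch: check whether an object sits inside a "comfortable" viewing
// cone in front of the user. The threshold is an illustrative assumption.

type Vec3 = { x: number; y: number; z: number };

const COMFORT_CONE_DEGREES = 30; // illustrative; tune with user testing

function isComfortablyVisible(head: Vec3, forward: Vec3, objectPos: Vec3): boolean {
  // Direction from the head to the object (forward is assumed normalized).
  const d = {
    x: objectPos.x - head.x,
    y: objectPos.y - head.y,
    z: objectPos.z - head.z,
  };
  const len = Math.hypot(d.x, d.y, d.z);
  if (len === 0) return true;

  // cos(angle) between the forward vector and the direction to the object.
  const dot = (forward.x * d.x + forward.y * d.y + forward.z * d.z) / len;
  const angle = (Math.acos(Math.min(Math.max(dot, -1), 1)) * 180) / Math.PI;
  return angle <= COMFORT_CONE_DEGREES;
}
```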

  • Controllers: Different VR devices come with different controllers, which support various kinds of input. They can be programmed to perform whatever operations your experience requires.

There are numerous ways these controllers can be used, and the more VR experiences you try, the more you’ll learn. Currently, the two most common ways of using HTC Vive controllers are either reaching out to touch an interactive object with the controller, or pointing a laser from the controller at the object and pressing a button (trigger/grip).
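As a rough sketch of the laser-pointer approach, the code below casts a ray from the controller and picks the nearest object it hits, approximating objects with bounding spheres. All names are illustrative; a real engine would use its own raycasting.

```typescript
// Minimal sketch of laser-pointer selection: cast a ray from the controller
// and pick the nearest object it hits. Objects are approximated by bounding
// spheres; all names here are illustrative assumptions.

type Vec3 = { x: number; y: number; z: number };
interface Target { id: string; center: Vec3; radius: number }

function raySphereDistance(origin: Vec3, dir: Vec3, t: Target): number | null {
  // Standard ray-sphere intersection (dir is assumed normalized).
  const oc = {
    x: origin.x - t.center.x,
    y: origin.y - t.center.y,
    z: origin.z - t.center.z,
  };
  const b = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;
  const c = oc.x ** 2 + oc.y ** 2 + oc.z ** 2 - t.radius ** 2;
  const disc = b * b - c;
  if (disc < 0) return null;      // the laser misses this object
  const dist = -b - Math.sqrt(disc);
  return dist >= 0 ? dist : null; // ignore hits behind the controller
}

function pickTarget(origin: Vec3, dir: Vec3, targets: Target[]): Target | null {
  let best: Target | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    const d = raySphereDistance(origin, dir, t);
    if (d !== null && d < bestDist) { bestDist = d; best = t; }
  }
  return best; // highlight this while hovering; trigger it on button press
}
```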

‘Found’ & ‘NVIDIA VR Funhouse’ respectively — Controllers switch automatically with context.

Some experiences let you use your controllers as different objects to perform different actions. For example, ‘The Lab’ has one level where your controllers behave as a bow and arrow, letting the user shoot arrows. ‘Found’ implements the controllers as a slingshot in a similar way, and has them change according to context throughout the experience. In these cases, tuning the controls to maintain the feeling of reality is essential: if you have a bow and arrow, you should only be able to pull the string so far before its elasticity stops you. Haptic feedback that helps the user understand how to use these controllers is also good practice.
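The bow-draw limit mentioned above, for instance, comes down to clamping how far the string can follow the hand. A minimal sketch, with an assumed maximum draw length:

```typescript
// Minimal sketch: clamp how far a virtual bowstring can be drawn so the
// interaction matches real-world elasticity. MAX_DRAW_METRES is illustrative.

const MAX_DRAW_METRES = 0.7; // roughly an arm's draw length; tune per model

function clampedDraw(nockHandDistance: number): number {
  // The string follows the hand up to its physical limit, then stops,
  // preserving the feel of a real bow rather than stretching indefinitely.
  return Math.min(Math.max(nockHandDistance, 0), MAX_DRAW_METRES);
}

// Draw strength (e.g. arrow launch speed) can then scale with the clamped draw.
function launchSpeed(drawMetres: number, maxSpeed = 30): number {
  return (clampedDraw(drawMetres) / MAX_DRAW_METRES) * maxSpeed;
}
```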

The hardware is evolving as we speak, and interactions are expected to grow more refined and sophisticated over time. Hopefully there will soon be many more ways of using controllers, and many new kinds of controllers as well.

Feedback

While interacting with any experience, feedback is absolutely essential for users to know that their actions register. Visual and audio feedback are necessary for users to understand how their actions affect the environment. Considering the two input schemes described above, let’s discuss visual feedback.

While looking around the environment, it should be instantly clear which objects can be triggered and which cannot. This can be done by having objects glow, pop up or highlight in some way when looked at or touched with a controller. If users don’t know which objects they can interact with, the experience can get really frustrating very quickly.

For gaze specifically, time-to-trigger is feedback about how much longer the user must look at an object to trigger it. This is often implemented with a ring around the reticle that fills up in step with the timer while the user looks at an object. It gives constant feedback about how long they need to keep looking, and it also alerts them that they are triggering an object in case it is unintentional.

For controllers, haptic feedback is often used alongside visual feedback to inform users that they interacted with something, which is especially useful when they miss the visual cue. Haptic feedback is also a great way to draw users’ attention to their controllers.
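On the web, for example, a short pulse can be fired through the experimental Gamepad Extensions haptics API where the browser and device support it; the intensity and duration below are assumptions to tune.

```typescript
// Minimal sketch: fire a short haptic pulse on a controller via the
// (experimental) Gamepad Extensions haptics API, where available.
// Intensity and duration values are illustrative assumptions.

function pulseController(gamepadIndex: number, intensity = 0.8, durationMs = 50): void {
  const pad = navigator.getGamepads()[gamepadIndex];
  // hapticActuators is experimental and absent on many browsers/devices,
  // so feature-detect before calling it.
  const actuator = (pad as any)?.hapticActuators?.[0];
  if (actuator?.pulse) {
    actuator.pulse(intensity, durationMs); // brief "you touched something" buzz
  }
}
```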

Spatial audio is another method being adopted to guide users and provide feedback. These are audio cues that come from specific directions and can guide the user by getting them to follow the source.
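As one concrete example, the Web Audio API’s PannerNode can place a cue at a 3D position so the user can locate it by ear and turn toward it; the asset URL and positions below are illustrative.

```typescript
// Minimal sketch: play an audio cue from a specific 3D position using the
// Web Audio API's PannerNode. The file URL and positions are illustrative.

async function playCueAt(ctx: AudioContext, x: number, y: number, z: number): Promise<void> {
  const response = await fetch("cue.ogg"); // hypothetical audio asset
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const panner = ctx.createPanner();
  panner.panningModel = "HRTF"; // head-related spatialization for direction cues
  panner.positionX.value = x;
  panner.positionY.value = y;
  panner.positionZ.value = z;

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(panner).connect(ctx.destination);
  source.start();
  // (ctx.listener should also be updated with the user's head pose each frame.)
}
```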

Affordances

Creating affordances that users understand is critical to making the experience intuitive. For example, if you want the user to go a particular way, placing a door with a handle there may lead them to naturally try opening it to get through.

Using 2D UI elements to convey information has become second nature to designers, but it breaks immersion in VR because the user knows that text floating in space, for example, is not a real-world phenomenon. Using objects relevant to the environment, or contextual hints, is always a better way to convey information. Floating text or arrows in your environment can break immersion, whereas a ringing phone, lamps lighting up or birds flying past may be better ways to draw the user’s attention to where you want them to look.

Customizable Options

A good practice is to give your players options to customize the experience; movement speed, font size and graphics quality are a few examples. This is especially helpful for users who are more susceptible to motion sickness, have weak vision, or have other accessibility needs. Providing an intuitive, convenient way to access these options and switch between them is a good design challenge in itself.
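A minimal sketch of what such a settings object might look like, persisted between sessions (all field names and defaults are illustrative assumptions):

```typescript
// Minimal sketch of per-user comfort/accessibility settings, persisted
// between sessions. Field names and default values are illustrative.

interface ComfortSettings {
  movementSpeed: number; // metres/second; lower helps motion-sick users
  fontScale: number;     // multiplier for all UI text; helps weak vision
  graphicsQuality: "low" | "medium" | "high"; // trade fidelity for frame rate
}

const DEFAULT_SETTINGS: ComfortSettings = {
  movementSpeed: 2.0,
  fontScale: 1.0,
  graphicsQuality: "medium",
};

// Persist between sessions so users don't re-tune every time they enter VR.
function saveSettings(s: ComfortSettings): void {
  localStorage.setItem("comfortSettings", JSON.stringify(s));
}

function loadSettings(): ComfortSettings {
  const raw = localStorage.getItem("comfortSettings");
  // Merge over the defaults so newly added fields still get sane values.
  return raw ? { ...DEFAULT_SETTINGS, ...JSON.parse(raw) } : DEFAULT_SETTINGS;
}
```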

In the next part, we will talk about designing movement in VR.
