Unlocking VR’s Potential by Focusing on UX

Jim Yang
6 min read · Dec 16, 2015

Stepping into the world of developing VR experiences is incredibly exhilarating, and, depending on which front your exploration starts from, very difficult. In the past few weeks, the development and innovation teams at This Place rolled into “new-to-us” territory.

Following up on our recent VR post, we continue covering in-depth our trials of imagining, designing, and developing for this recently reinvigorated digital medium.

Why come back to VR?

With the recent developments in hardware and software, VR has once again left the dark rooms of digital design for a parade in the limelight. Much like the often-mentioned AI winter, the last generation of VR technology is remembered by many for its halfway-there tech and bulky, wire-wrapped headsets; public interest and funding waned for years.

Now, cumulative advances in computing power have allowed developers to build 3D creation tools that are larger in scale, more robust, and, most importantly, more accessible to the individual creator. Unity3D, the most prominent example of such a tool, has opened up an entirely new path for experience- and world-building, one previously reserved for skilled 3D game developers.

Online forums and communities that support this ecosystem have blossomed as well: StackExchange for its comprehensive Q&A-style discussions, and YouTube for its extensive collection of video tutorials.

The Good

Part of the allure that keeps attracting people to this medium is its close resemblance to the real world. There's little mental effort required to absorb a VR experience composed of physical analogues (hallways and rooms populated with everyday objects) because we spend every hour of every day training in the real thing. The best part is that VR doesn't even need to be photo-realistic to engage us this way; it just needs to be realistic enough.

Though VR is composed in three dimensions, it isn't confined by the bounds that limit other 3D media (e.g. sculptures, exhibitions, rides). You can have an "infinite" canvas, or a room layout that doubles back on itself.

Moreover, VR devices are slowly approaching the range of human vision in resolution, field of view, and granularity. The third dimension also gives an experience and its content physicality: a 3D wall of text feels more intuitive than a scrollable 2D viewport on a flat surface. That physicality makes immersive experiences harder to disrupt from the outside, and even from within (if we insist on building overwhelming 3D walls of text).

a VR developer’s desk

The Bad

In spite of its novelty and shininess, not everything is perfect in the universe of VR experiences. In terms of user experience, anything more sophisticated than standing still and looking around is often some combination of incoherent, awkward, and frustrating.

A common path to adding interactivity to a VR experience is skeletal motion tracking. For most consumers, this means something like the Leap Motion: translating real-world physical movement (head tracking, hand tracking, body tracking) into VR events. Sometimes it works, but more often than not the implementation falls short of expectations. These forms of input are slower, less precise, and lack the robustness of traditional methods (e.g. keyboard, analog stick, mouse).

The more a VR experience relies on human movement to drive its interactions, the more psychosomatic side effects materialise: motion sickness, headaches, and attempts to run from or grab things that aren't there.

How do we make this better?
Progress the hardware?
Optimise the software?

Spot the difference: Then (‘90s VR) vs Now (‘10s VR)

The Truth

The truth is that we won't get very far by advancing solely in the direction we've been heading. The next innovation needs to have its origins in UX.

VR is waiting for its game-changing product, much as the smartphone market was waiting for the iPhone. Just as most of us don't use touch screens on our desktops and laptops, or wave our arms around like a frantic orchestral conductor to interact with our usual digital environments, simply tying real movement to in-VR movement is a recipe for disappointment.

How to move forward

To think about appropriate ways to make the things we want, we need to internalise the core of VR experiences: people can do things they can't do in the normal circumstances of human life. Superior UX design makes the user's environment powerful and gives the user power to match.

We can draw on examples of simple behaviours from mobile devices and desktop to see the larger patterns of interaction, then reimplement similar behaviours to make both the environment and the player powerful within the context of VR.

Looking at the trees

For example, say we want to show scrolling content within a viewport.

On mobile, we usually drag a finger across the screen (a code sketch follows the steps).

  1. Touch detected (binary action)
  2. Finger X and Y positions change (analog action)
  3. Calculate the difference between the last and current finger positions (indirect action)
  4. Content moves in the direction of the difference
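
A rough sketch of that chain in code; the `TouchPoint` shape and `content` object are illustrative stand-ins, not any particular framework's API:

```typescript
// Minimal drag-to-scroll sketch. TouchPoint and content are
// illustrative stand-ins, not a real framework's API.
type TouchPoint = { x: number; y: number };

let lastTouch: TouchPoint | null = null;
const content = { offsetX: 0, offsetY: 0 };

function onTouchStart(touch: TouchPoint): void {
  lastTouch = touch;                       // 1. binary: touch detected
}

function onTouchMove(current: TouchPoint): void {
  if (lastTouch === null) return;
  // 2. analog: finger X and Y positions change each frame
  // 3. indirect: difference between the last and current positions
  const dx = current.x - lastTouch.x;
  const dy = current.y - lastTouch.y;
  // 4. content moves in the direction of the difference
  content.offsetX += dx;
  content.offsetY += dy;
  lastTouch = current;
}
```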

On desktop, we can hit the down key on the keyboard (again sketched below).

  1. Down key is pressed (binary action)
  2. Content moves up 100 pixels
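
The same idea in sketch form shows how much shorter the chain is; names are again illustrative, with the key string following the DOM convention:

```typescript
// Desktop equivalent: one binary action mapped to a fixed step.
// "ArrowDown" follows the DOM key convention; content is illustrative.
const SCROLL_STEP_PX = 100;
const content = { offsetY: 0 };

function onKeyDown(key: string): void {
  if (key === "ArrowDown") {             // 1. binary: down key pressed
    content.offsetY -= SCROLL_STEP_PX;   // 2. content moves up 100 pixels
  }
}
```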

Writing down the different instances of input across devices reveals some patterns; two are immediately clear (made concrete in the sketch after the list):

  • Binary
    two values possible (e.g. on / off, red / green)
  • Analog
    range of values possible
    (e.g. 0.0–1.0, 52.1%, pointer moved 245px on the X axis)
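
One hypothetical way to encode the split as types:

```typescript
// Hypothetical encoding of the two input patterns as types.
type BinaryInput = { kind: "binary"; on: boolean };   // two values possible
type AnalogInput = { kind: "analog"; value: number }; // a range of values
type Input = BinaryInput | AnalogInput;

const touchDetected: Input = { kind: "binary", on: true };
const pointerMovedX: Input = { kind: "analog", value: 245 }; // 245px on the X axis
```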

In VR, the solution might be the following (a code sketch follows the steps):

  1. Gazing at an object: a ray projected from the eyes hits something
    (indirect binary action)
  2. Ray hits bottom half of the object’s bounding box
    (indirect binary action)
  3. Press X button on Bluetooth keypad (binary action)
  4. Content moves up to allow lower content to come into focus
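
A sketch of that composition, assuming the engine hands us a gaze raycast result each frame; the `GazeHit` shape, the screen-space Y convention, and the scroll step are all assumptions:

```typescript
// Hypothetical gaze-plus-keypad scrolling. GazeHit, Bounds, and content
// stand in for whatever the engine actually provides.
type Bounds = { top: number; bottom: number };    // bounding box in screen-space Y
type GazeHit = { bounds: Bounds; hitY: number };  // where the eye ray landed

const content = { offsetY: 0 };
const SCROLL_STEP_PX = 100;

function onFrame(gazeHit: GazeHit | null, xPressed: boolean): void {
  if (gazeHit === null) return;                      // 1. ray must hit something
  const midY = (gazeHit.bounds.top + gazeHit.bounds.bottom) / 2;
  const inBottomHalf = gazeHit.hitY > midY;          // 2. bottom half of the box
  if (inBottomHalf && xPressed) {                    //    (Y grows downward here)
    // 3. X button pressed, so...
    content.offsetY -= SCROLL_STEP_PX;               // 4. lower content comes into focus
  }
}
```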

Depending on the content and the context, we can optimise this experience even further: glancing at the bottom half for long enough removes the need for the Bluetooth keypad input entirely.
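
A dwell-based variant might look like this; the 800 ms threshold is an arbitrary placeholder, not a recommendation:

```typescript
// Dwell-to-scroll: gazing at the bottom half for long enough replaces
// the keypad press. DWELL_MS is an arbitrary placeholder value.
const DWELL_MS = 800;
const SCROLL_STEP_PX = 100;
const content = { offsetY: 0 };
let dwellStart: number | null = null;

function onFrameDwell(inBottomHalf: boolean, nowMs: number): void {
  if (!inBottomHalf) {
    dwellStart = null;                      // gaze left the region: reset the timer
    return;
  }
  if (dwellStart === null) dwellStart = nowMs;
  if (nowMs - dwellStart >= DWELL_MS) {     // glanced long enough
    content.offsetY -= SCROLL_STEP_PX;      // same effect as the button press
    dwellStart = null;                      // re-arm for the next scroll
  }
}
```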

Seeing the forest

Understanding how to compose more sophisticated (“second-order”) input from basic (“first-order”) input is the key to building meaningful UX for VR.

It greatly helps to have a broad hierarchy of these breakdowns when assembling the VR experiences we want. To do that effectively, we need to standardise terminology and go back to basics (the last example in the list is sketched in code below):

  • an interaction is composed of a series of actions driven by input
  • all interactions have a property we can call degrees of freedom:
    the minimum number of distinct actions needed to compose the interaction (e.g. desktop drag-and-drop has 3 degrees of freedom: mouse X, mouse Y, and left mouse button state)
  • first-order actions are input values that require no further manipulation from the designer/developer to use directly as an action within an interaction (e.g. Escape key to quit)
  • second-order actions are values tied to some manipulation of a combination of first-order actions
    (e.g. long press = finger touch + timer)
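
For instance, the long-press composition could be sketched like so; the 500 ms threshold and the function names are illustrative:

```typescript
// Second-order action: long press = finger touch (first-order, binary) + timer.
// The 500 ms threshold and the callback shape are illustrative choices.
const LONG_PRESS_MS = 500;
let touchStartMs: number | null = null;

function onTouchChange(isDown: boolean, nowMs: number, onLongPress: () => void): void {
  if (isDown && touchStartMs === null) {
    touchStartMs = nowMs;                          // first-order: touch began
  } else if (!isDown && touchStartMs !== null) {
    if (nowMs - touchStartMs >= LONG_PRESS_MS) {
      onLongPress();                               // second-order: long press fires
    }
    touchStartMs = null;
  }
}
```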

With more VR prototyping in the pipeline, we will keep adding relevant terms and refinements to our findings. It's fine to draw on the UX metaphors we all know and love, but examine your assumptions, your audience, and your goals to avoid shoehorning existing behaviours into VR input methods. Keep your arms still and your eyes peeled for our next VR post.

Talk to us to see how we can help bring your VR ambitions to life: hello@thisplace.com
