Environment Traversal in VR

How to explore a virtual world with minimal nausea and maximum context, a UX experiment

Andrew R McHugh
Humane Virtuality
12 min read · Aug 31, 2016


When stepping into a virtual world, you want to move around and explore the interesting things on the horizon. But every VR headset has limitations. With Cardboard, you can’t change your position at all. With Rift or Vive, you can move around, but only within a limited physical space. So, what are some of the best ways to move through larger spaces?

In the next sections I’ll break down the problem, the issues I ran into and how I overcame them, the technical bits of code, a splash of user testing, and a working prototype.

Problem, Hunt Statement

As noted in the majority of current design literature for virtual reality products, it’s a huge no-no to move the camera that controls the user’s view without the user’s initiation and control. For the same reason you can get car, sea, or motion sickness (your brain receives what it interprets as conflicting signals about your body’s motion), you can get VR sickness. Making your users nauseous is a great way to get them to not only leave your app, but hate you forever.

Virtual reality designers have a (moral) imperative to create non-nauseating experiences. We also have a desire to push boundaries and build apps larger than what is within arm’s reach.

It’s important to start my experiments with a goal, otherwise I’m likely to lose focus. For this design experiment, my goal is to answer the question “How should we traverse a large virtual environment?” I form this question into a hunt statement, or guiding statement:

I am going to research environment traversal methods in VR, for non-walkable environments, in order to learn what is most comfortable and understandable for users.

Solution, Prototype

After sketching twenty possible traversal methods, I prototyped four and a half. User preferences varied, but one of my methods, micro-movements, was interpreted as laggy and thus bad UX. I conclude by suggesting that applications include traversal methods in their settings so users may choose what is most comfortable for them.

Experiment 11, Transitions:
armthethinker.github.io/webVR-experiments/#11-environment-traversal

Source Code:
github.com/armthethinker/webVR-experiments/blob/master/11--environment-traversal.html

Problem Space, Existing Work

Designers and developers have already come up with traversal methods, many of which draw from reality or cinematography. I wanted to do my own exploration, especially looking for novel ways to move around. (Some of these were presented in my last case study, on head-tracked transformations.)

Just Move Around

Photo source: Giphy
  • Most natural interaction
  • Not available on Cardboard
  • Requires room to move

Gaze-Based Movement

From a VR game called Land’s End.
  • Looking around can trigger movement
  • Useful for when you don’t have controllers or can’t walk around
  • Users experience various levels of nausea, from no nausea to extreme nausea, based on the transition type and the user

Portal Teleportation

Portals in Budget Cuts, an upcoming VR game.
  • Looks really great — haven’t tried it in person though
  • Gives the ability to see where you might go without having to travel there first

Emily Eifler’s & eleVR’s Spherical Video Experiments

One of the multi-camera experiments. From the eleVR blog.
  • Interesting uses of stitching, though not as magical as I wanted it to be
  • Camera placement causes neck strain in some of the videos
  • I’m curious about programmatic changes in the stitching pattern (e.g. camera A takes up 80% of the view, but when you look at the other 20%, it takes over)

Folding Space with Redirected Walking

Research project showcasing redirected walking.
  • Slightly changes the projected reality to trick you into walking in circles when you think you’re walking straight
  • Requires a space large enough to hide the redirection
  • Not available with stand-still systems like Cardboard

Walking Apparatuses

It’s a hamster ball for VR.
  • Great for walking however far you want
  • Not great for Cardboard, nor easily accessible to many users
  • Great for being a human hamster

All of these are interesting, but not every method works for Google Cardboard. I needed to explore my own constructions, based on gaze triggers, so that I could experience and understand them as a UX designer.

A bird’s-eye view of the environment I use to move around in. When you gaze at a black orb, the active traversal method fires and you move to the orb’s location. The model is based on a level in Monument Valley and is available as a template in MagicaVoxel.

Design Process

To start testing as fast as possible, I created a simple test environment with a plane to stand on and a box to look at. Gazing at the box or pressing a key triggers the animation, easily letting me focus on coding the prototype.
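A minimal sketch of that trigger wiring, assuming an A-Frame scene with a fuse cursor (the element ID and the runActiveTransition() dispatcher are illustrative, not the original code):

// Fire the active transition when the user gazes at the box (the cursor
// emits 'click' after its fuse timeout) or presses a key on desktop.
var box = document.querySelector('#box');
box.addEventListener('click', function () {
  runActiveTransition();
});
window.addEventListener('keydown', function (e) {
  if (e.key === 't') { runActiveTransition(); }
});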

After reading Dustin Senos’ article, How to get value from wireframes, where he suggests making twenty unique design directions in wireframe form before any higher-complexity design work, I was inspired to try it out on my own work.

I started with a quick sketch of 20 boxes with 20 transition methods. Like Senos suggests, the first few were easy, but the last few really made me think — just what I was looking for.

Such transition. So traversal. Much sketch.

Over the course of my work on this prototype, I created a structured tree of possible transition animations and interactions, based on my 20 interactions sketch.

Notice that some can have different speeds. Animating slowly to a new position feels more like floating, while a quick animation feels like Spiderman.

From this tree of traversal I implemented as many as I could, focusing on a few basic modes (jump cut, fade, animate) and a couple complex modes (micro-movements, rotate into).

Jump Cut

This is the most basic of traversal methods. One moment you’re in Position A, then the next you’re in Position B. Used often in film.

Jump cuts are instantaneous, so they don’t make your user wait around for a transition animation. There isn’t any sensation of motion, so your user’s brain doesn’t receive conflicting information from the real and virtual worlds. Therefore, users don’t get nauseous from the transition. However, because there is such a complete change in visual information, it can be easy for users to get confused about where exactly they are.
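As a sketch, a jump cut is nothing more than a single position write on the camera (the #camera ID is an assumption):

// Jump cut: move the camera to the target position in a single frame.
function transitionJump (target) {
  var camera = document.querySelector('#camera');
  camera.setAttribute('position', target); // e.g. {x: 0, y: 1.6, z: -5}
}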

Fade (Technically Fade + Jump Cut)

When you’re in Position A, the view starts darkening until all you see is black. Though you don’t see it, when your view is entirely black, you change position from A to B. Then your vision becomes less black and you see that you are now in Position B. Used often in film as “fade through black”.

While not instantaneous, this transition can be quick. Using a black-screen transition is useful because the user is notified that they are about to move. No surprises here. Users may still be confused once they are in Position B, but because of the fade up from black, they feel like they have an extra moment to get acclimated to the space. Because there is no sensed motion, the user doesn’t get mixed signals and therefore doesn’t feel nauseous.

Animate

When you’re in Position A, you move to Position B, going through all points in between. Used in film as “dolly shot”.

An animation between positions really helps users stay acclimated to the environment. Each position between A and B is only a small visual update to the previous position. However, in this experience and others, motion in virtual reality conflicts with the user’s lack of motion in non-virtual reality. This disconnect can quickly cause nausea for some users, but interestingly, not all.
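A sketch of how this might look with the <a-animation> element A-Frame used at the time (the duration values are assumptions; a slow float is just a larger dur):

// Animate: tween the camera from A to B through every point in between.
function transitionAnimate (target, duration) {
  var camera = document.querySelector('#camera');
  var anim = document.createElement('a-animation');
  anim.setAttribute('attribute', 'position');
  anim.setAttribute('to', target.x + ' ' + target.y + ' ' + target.z);
  anim.setAttribute('dur', duration); // slow feels like floating, fast like Spiderman
  anim.setAttribute('easing', 'ease-in-out');
  camera.appendChild(anim); // the animation begins when attached
}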

Micro-Movement

When you’re in Position A, going to Position B, you make a series of jump cuts to A.1, A.2, A.3, … A.n for n positions between A and B.

While I was thinking about the animation type of transition and its potential for nausea, I wondered: “Would users get nauseous from a mix of animation and jump cuts?” This transition has the benefit of giving the user the context of their motion (like animation) while avoiding continuous optical flow. The user sees in the virtual world what they feel in the non-virtual world, which should, in theory, get rid of nausea. A sketch of the idea follows.
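Here, n waypoints are interpolated between A and B and jump-cut through on a timer (the step count and interval are assumptions):

// Micro-movement: a series of jump cuts along the straight line from A to B.
function transitionMicroMovement (target, steps, interval) {
  var camera = document.querySelector('#camera');
  var p = camera.getAttribute('position');
  var start = {x: p.x, y: p.y, z: p.z}; // copy, since A-Frame returns a live object
  var i = 1;
  var timer = setInterval(function () {
    var t = i / steps; // fraction of the way from A to B
    camera.setAttribute('position', {
      x: start.x + (target.x - start.x) * t,
      y: start.y + (target.y - start.y) * t,
      z: start.z + (target.z - start.z) * t
    });
    if (++i > steps) { clearInterval(timer); }
  }, interval); // e.g. steps: 8, interval: 100
}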

Brief user testing showed this interaction to be more odd and jolting than useful.

Rotate Into

When you’re in Position A, spin to your left or right, and then you are in Position B.

Over the last few years I’ve been waiting for a game to come out called Miegakure. You can “turn” through a fourth spatial dimension in order to solve puzzles. That idea interested me and I figured maybe it would be useful for VR. Given that you don’t perceive visual input as well when you’re spinning, it seems like a good time to do a jump cut without the user noticing.

Example of Miegakure game play. Photo source: Miegakure.com.

When I code interactive prototypes, I do so as a UX designer, not a programming master. As such, I needed to figure out a way to blend two spaces. Since doing it programmatically was too hard (given my time constraints and programming expertise), I figured I could take two spherical images and toggle between them when a user rotates.

Standing in Main Squeeze, a local vegetarian restaurant my girlfriend manages, I created two spherical images.
Normal view.
The full spherical image of Position A, near the counter, flattened to a rectangle.
A blended image of the two spheres to be used while the user is still rotating between A and B. The left side is from Position B. In the center you can see the blend. The right side is from Position A.

While I didn’t complete the actual transition, I did create hotspots in the photospheres that allow you to move between them.
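A sketch of those hotspots, assuming each photosphere is an a-sky texture and the hotspot simply swaps the texture on gaze (the IDs are illustrative):

// Toggle between the two photospheres when the user gazes at a hotspot.
var sky = document.querySelector('a-sky');
var atCounter = true; // start near the counter
document.querySelector('#hotspot').addEventListener('click', function () {
  atCounter = !atCounter;
  sky.setAttribute('src', atCounter ? '#counter-sphere' : '#window-sphere');
});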

Implications for Spherical Movies

It is easy to think that my findings here may relate to spherical movies. And while they might, we cannot draw any strong conclusions for spherical movies because this research looked specifically at traversal methods.

Cursor Squareness

I often tweak the design of my cursor, trying to find what works best within my constraints. I’m developing on Cardboard, which uses phones with low pixel density (at least for VR). Cardboard experiences, as well as those on other low-fidelity headsets, have a “screen door effect”, which is just what it sounds like: looking at your world through a black wire mesh. If your cursor is small, there are fewer pixels to represent it, leading to odd representations that can wiggle a bit because of the pixel-estimation process.

Various cursors as they might be represented in Cardboard. Far left: square aligned with pixels. Left: square that looks fuzzy because it is misaligned with pixels. Right: circle that looks fuzzy because there are not enough pixels to make a smoother curve. Far right: triangle with the same problems as the circle.

Your brain merges left and right eye images by finding similar features in each. Since small cursors are poorly represented through screen doors, your left and right eye may get different images of the cursor, meaning that your brain won’t recognize it as the same object. This leads to double or blurry vision.

To combat all this, I created a large square cursor. The increased size reduces the amount of difference in the visual output of the screen. A square, as opposed to a circle or other shape, plays more nicely with the square pixels. With higher resolution screens, none of this would be a problem.
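A sketch of such a cursor as an A-Frame entity (the size, fuse timeout, and a-camera element are assumptions):

// A large square cursor: a flat plane parented to the camera.
var cursor = document.createElement('a-entity');
cursor.setAttribute('cursor', 'fuse: true; fuseTimeout: 1500');
cursor.setAttribute('geometry', 'primitive: plane; width: 0.04; height: 0.04');
cursor.setAttribute('material', 'color: white; shader: flat');
cursor.setAttribute('position', '0 0 -1');
document.querySelector('a-camera').appendChild(cursor);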

Technical Bits

Read this if you’re interested in the code behind the designs; otherwise, skip ahead.

Entity Creation

With my transition work, I had difficulty adding HTML to the DOM. I remind you: I’m a designer shoveling code together to get prototypes working “enough”, not a developer writing production-level code. Back in my web days, I’d throw jQuery into the mix. But previous experimentation showed that jQuery breaks some of A-Frame’s methods, so that’s a no-go here.

Instead, I searched the Mozilla JavaScript documentation and StackOverflow until I found some things that worked for me. Then I wrapped them in a function so that I can easily create the entities I need and place them where they belong.
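A sketch of what that helper might look like (the name addEl() matches what’s used later; the attribute-map signature is my assumption):

// Create an entity, set its attributes, and attach it to a parent
// (defaulting to the scene). Returns the element for later use.
function addEl (tag, attributes, parent) {
  var el = document.createElement(tag);
  Object.keys(attributes).forEach(function (name) {
    el.setAttribute(name, attributes[name]);
  });
  (parent || document.querySelector('a-scene')).appendChild(el);
  return el;
}

// e.g. a black orb the user can gaze at:
// addEl('a-sphere', {color: '#000', radius: 0.2, position: '1 1.6 -3'});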

Transition Functions

Given the potential size and combinations of my transitions list, I generalized my transition functions — meaning that every time I call transitionFadeJump(), it runs smaller, reusable bits of code. For example:

We start with the outer function, transitionFadeJump():
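A sketch of that composition (the helper names come from this article; the bodies are reconstructions):

// Fade to black, jump while the view is hidden, then fade back in.
function transitionFadeJump (target) {
  transitionFade(function () {
    transitionJump(target);
  });
}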

Inside, there is transitionFade():
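A sketch of its structure, with an assumed shape for addAnimation() (timings are illustrative):

// Append an <a-animation> configured from a map of attributes.
function addAnimation (el, attributes) {
  var anim = document.createElement('a-animation');
  Object.keys(attributes).forEach(function (name) {
    anim.setAttribute(name, attributes[name]);
  });
  el.appendChild(anim);
}

// Fade the view by wrapping the camera in a black sphere and animating its
// opacity: fade out, run the passed-in step, then fade back in and clean up.
function transitionFade (midpointFn) {
  var camera = document.querySelector('#camera');
  var fadeSphere = addEl('a-sphere', {
    radius: 0.5,
    material: 'color: black; opacity: 0; side: back'
  }, camera);
  addAnimation(fadeSphere, {attribute: 'material.opacity', from: 0, to: 1, dur: 400});
  setTimeout(function () {
    midpointFn(); // e.g. transitionJump(target), run while the view is black
    addAnimation(fadeSphere, {attribute: 'material.opacity', from: 1, to: 0, dur: 400});
    setTimeout(function () { camera.removeChild(fadeSphere); }, 400);
  }, 400);
}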

The transitionFade() function includes adding a spherical element around the camera with addEl() and animating it with addAnimation(). The function transitionJump() gets passed into transitionFade() as a function to run in between fading to black and making the sphere transparent again.

Fetch Settings

When working with users, I need to be able to toggle between the various traversal methods quickly. In previous work, I began using settings objects where I could easily change the transition on reload. But that requires the user to take off the headset and hand it to me so I can change a variable and refresh, before they put it back on.

While developing this experience, I found a better solution with Fetch by GitHub. Fetch retrieves the settings object (as JSON) from a separate file. I set up an interval so that my JavaScript checks the settings every one and a half seconds. When there’s a change in the settings, like a new transition type, the transition triggered by looking at the orbs changes too.

However, this limits desktop users, who might not be able to edit the settings file themselves. So I created a series of hotkeys, along with a fetch variable that halts the fetching process so it doesn’t overwrite what the user just changed.
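A sketch of that polling loop (the file name, the hotkeys, and the flag name are assumptions):

// Poll settings.json every 1.5 seconds. Hotkeys flip fetchEnabled to false
// so a desktop user's local change isn't overwritten by the next fetch.
var settings = {};
var fetchEnabled = true;

setInterval(function () {
  if (!fetchEnabled) { return; }
  fetch('settings.json')
    .then(function (response) { return response.json(); })
    .then(function (json) { settings = json; }); // e.g. settings.transition
}, 1500);

window.addEventListener('keydown', function (e) {
  if (e.key === '1') { fetchEnabled = false; settings.transition = 'jump'; }
  if (e.key === '2') { fetchEnabled = false; settings.transition = 'fade'; }
});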

Place Transition Orbs

Transition orbs doing their thing.

Programmatic placement of the transition orbs is faster than placing them by hand. I made a simple placement function.
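A sketch of such a function, reusing the addEl() helper from above (the positions and the runActiveTransition() dispatcher are illustrative):

// Place a gaze-triggered transition orb at each target position.
function placeOrbs (positions) {
  positions.forEach(function (p) {
    var orb = addEl('a-sphere', {
      color: '#000',
      radius: 0.2,
      position: p.x + ' ' + p.y + ' ' + p.z
    });
    orb.addEventListener('click', function () {
      runActiveTransition(p); // fire whichever method is currently selected
    });
  });
}

// placeOrbs([{x: 2, y: 1, z: -4}, {x: -3, y: 1, z: -2}]);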

User Testing

While testing these prototypes isn’t my primary focus, I think it would be a drastic oversight to not include a few user tests. This experiment had three users, all women in their mid-20s.

Setup

In my first case study, I outlined my setup and workflow. Here, I’ll include the diagram I had before.

My MacBook Pro holds the files and captures the session. My iPhone 5 displays the experience to my user and sends its screen back to my laptop to be recorded.

Methods

When working with users, I implement think-alouds: the user describes, out loud, everything they are thinking, hearing, seeing, and experiencing. Think-alouds help me understand what the user is experiencing and where their expectations may not be met (e.g. “I thought that if I looked at this, a thing would happen, but I guess not.”). When a session is done, I may ask a couple of follow-up questions about their experience, expectations, and perceptions of virtual reality. I like to run these sessions with pairs of users who know each other, rather than one-on-one. Since these are explorations more than usability tests, the ping-ponging conversation among the three of us is more useful than one person reporting to me (as I found in my earlier one-on-one and group testing sessions).

User testing … but my fade transition wasn’t working properly and was causing a flash.

Findings

Users should be able to choose their traversal method. User preferences for traversal methods were mixed. One user preferred jump and hated animate. One liked a slow animation best. The third liked fade best and a fast animate second best. These results suggest that preferences vary to an extent that would be well served by a user-changeable setting.

Users don’t like micro-movements. This traversal method was designed to reduce motion sickness while still giving the context of an animated movement. However, users interpreted the interaction as a lagging render. That’s not what we want.

It is disorienting to not have a body, one user commented. This has been mentioned in case studies by others. Because of the immersivity of VR, we expect all of ourselves to be there, not just our eyes. Though I think this expectation may become less rigid as more users step into lo-fi VR experiences.

Moving between photospheres is fun. I left the photospheres from the rotate-into traversal method in the environment. My users found these and really enjoyed toggling between the near-counter and near-window locations in the virtual Main Squeeze.

Users who are less familiar with VR limitations try to walk in the non-virtual world. Two of my users had only passing familiarity with Google Cardboard. They showed a tendency to try to walk around. I think this speaks to the need for mobile positional tracking.

Conclusion

I am going to research environment traversal methods in VR, for non-walkable environments, in order to learn what is most comfortable and understandable for users.

In this design experiment, I explored the user experience of different environmental traversal methods. Using divergent design sketches, I created 20 possible traversal methods, prototyping four of them with various speeds.

User testing shows that traversal preference is highly variable, with different users preferring fades, jumps, slow animation, or quick animation. The micro-movement method came across as laggy and thus undesirable. I therefore suggest allowing users to change their traversal method in app settings. While I think there can be interesting use cases for complex methods (like rotate-into), simple solutions (like jump cut) seem to work best in the general case.

Experiment 11, Transitions:
armthethinker.github.io/webVR-experiments/#11-environment-traversal

Source Code:
github.com/armthethinker/webVR-experiments/blob/master/11--environment-traversal.html

