A Year’s Journey in Motion Controls

Jon Wade
Published in Shopify AR/VR
7 min read · Dec 23, 2016

From necessities like eating to recreational tasks like playing games, we depend on our hands to grip, point, push, pinch and tap everyday objects. It’s the duty of VR motion controllers to simulate our most fundamental manipulations of the world around us. They have a lot to get right.

Earlier this month the Oculus Touch controllers launched for the Rift. Their launch provides an opportunity to recap our journey working with both the Touch and Vive motion controllers over the past year and to discuss some directions in our thinking about manipulation and interaction in VR.

The Vive Arrives

It’s hard for me to believe it was only a year ago that I had my first long stretch of time with a Vive DK1. In the year since, the Shopify VR team has logged hundreds of hours on the Vive experimenting, running usability tests, and enjoying community content.

A rapid-fire snapshot of our experiences with the Vive motion controllers:

  • Lighthouse is a stable and robust tracking system. We’ve used it in a wide variety of environments and rarely had trouble.
  • The trackpad is versatile, but apps rarely exercise its full capabilities. We like that it affords swiping, directional movement, and four unique buttons (see the quadrant sketch after this list), but it regularly becomes “the big teleport button.”
Trackpad use in Tilt Brush, Google Earth, and Thread Studio
  • The finger trigger is the most logical pickup/grab input. Unfortunately, many new users squeeze the whole controller to pick things up, pressing the trackpad and side grips in the process. If an experience uses the trackpad for teleportation, as Thread Studio and The Lab do, users can end up unintentionally teleporting all over the place.
Some grips involve applying pressure from the fingers and thumb
  • The top menu button is inconveniently located if you have to use it often, especially for users with small hands.
  • The side grip buttons are essentially useless for gripping. Holding them down while trying to use other inputs tends to strain your hand. They also get pressed or released by accident during quick movements. They can work well for non-grip actions, like rotating the world in SportsbarVR or enabling swing-to-walk in Climbey.
  • User testing revealed that trying to use one input while holding down another was difficult for many people.
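
To illustrate what “four unique buttons” means in practice, here’s a minimal sketch of how a single trackpad click could be split into virtual buttons using the touch position at click time. The normalized coordinate convention and dead-zone threshold are my assumptions, not code from any of the apps above.

```typescript
// Illustrative sketch: splitting one trackpad click into four virtual
// buttons using the touch coordinates at the moment of the click.
// Assumes (x, y) are normalized to [-1, 1] with (0, 0) at the pad's center.

type PadQuadrant = "up" | "down" | "left" | "right";

const DEAD_ZONE = 0.3; // ignore clicks too close to the center to call

function quadrantForClick(x: number, y: number): PadQuadrant | null {
  if (Math.hypot(x, y) < DEAD_ZONE) return null; // ambiguous press
  // Whichever axis dominates decides the quadrant.
  if (Math.abs(x) > Math.abs(y)) {
    return x > 0 ? "right" : "left";
  }
  return y > 0 ? "up" : "down";
}

// A click near the top edge of the pad reads as the "up" button.
console.log(quadrantForClick(0.1, 0.9)); // -> "up"
```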

Picking Up is Paramount

VR from Shopify is designed for a wide audience. Our audience may not be familiar with game controllers and is probably new to VR. In our experiments, the more controller inputs we assigned functionality to, the harder our experiences became to learn and enjoy.

After numerous rounds of user testing, we made a radical pivot in our design philosophy. We challenged ourselves to start with an extremely minimalist control scheme: could we build our apps using only the trigger? Working from that starting point, we ended up with:

  1. Grabbing is the core action. The finger trigger only picks up and releases objects and is never used for anything else.
  2. Movement mechanics, like teleport, are handled using a different input.

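As a rough sketch of guideline 1, here’s what a trigger-only grab loop might look like. The hysteresis thresholds and names are my own illustration, not Shopify’s actual implementation.

```typescript
// Minimal sketch of a grab-only trigger: press to pick up the nearest
// object, relax to let go, and the trigger never does anything else.

interface Grabbable {
  id: string;
}

class GrabHand {
  private held: Grabbable | null = null;
  private wasPressed = false;

  // Hysteresis: squeeze past 0.75 to grab, relax below 0.25 to release,
  // so a slightly wavering finger doesn't drop the object.
  private static readonly PRESS_THRESHOLD = 0.75;
  private static readonly RELEASE_THRESHOLD = 0.25;

  update(triggerValue: number, nearest: Grabbable | null): Grabbable | null {
    const pressed = this.wasPressed
      ? triggerValue > GrabHand.RELEASE_THRESHOLD
      : triggerValue > GrabHand.PRESS_THRESHOLD;

    if (pressed && !this.wasPressed && nearest && !this.held) {
      this.held = nearest; // grab the closest object in reach
    } else if (!pressed && this.wasPressed) {
      this.held = null; // open the hand: release whatever is held
    }
    this.wasPressed = pressed;
    return this.held; // what the hand is holding this frame
  }
}
```
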
We were able to create a huge amount of content using just those two inputs. The one exception: though it bent the first guideline, the trigger/pickup input was also used to interact with floating UI elements.

However, with time we found ourselves creating objects that needed to be activated while held. Spray paint cans and power drills belong in this category. Because we wanted to keep control mappings approachable for a wide audience, these objects proved surprisingly difficult to implement:

  • The trackpad was our teleport button and we didn’t want to map it to a second role.
  • Testers routinely let go of the finger trigger when trying to press the menu button.
  • Using the side grip button to activate an object often flipped real-world conventions. You hold a spray bottle with your palm and pull the trigger with your pointer finger, not the other way around.

We quickly found ourselves in constant debates over input layout. I felt our lives as developers would have been much easier had the Vive’s side grips been usable for holding on to objects…

Enter the Oculus Touch

In the summer, just as we were reaching the conclusions above, our first Touch developer kit arrived. Having spent so much time with the Vive, the Touch took some getting used to, but I now prefer its layout and ergonomics:

  • The face buttons are more accessible than the Vive’s menu button, and distinguishing between physical buttons is more natural than distinguishing between regions of the Vive trackpad.
  • The separate grip and use triggers readily afford the hold-then-use objects we had difficulty implementing on the Vive (see the sketch below). Pressing the use trigger while holding down the grip is effortless.
The Oculus Touch

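As an illustration of that second point, here’s a minimal sketch of a hold-then-use object on a two-trigger controller. The names and thresholds are hypothetical; the point is that holding and using map onto two independent inputs.

```typescript
// Sketch of a hold-then-use object: the grip trigger holds it, the
// index ("use") trigger activates it, and activation is only honored
// while the object is actually held.

class SprayCan {
  spraying = false;
}

function updateHeldObject(
  can: SprayCan,
  gripValue: number, // 0..1, middle-finger grip trigger
  useValue: number   // 0..1, index-finger trigger
): "held" | "dropped" {
  const held = gripValue > 0.5;
  // Mirrors the real-world convention: the palm holds, the finger fires.
  can.spraying = held && useValue > 0.5;
  return held ? "held" : "dropped";
}
```
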
That’s not to say the Touch is universally the better motion controller:

  • Throwing objects, one of the most entertaining things to do in VR, is awkward on the Touch. I often hesitate to let go of the grip trigger because I fear I’ll send the Touch flying.
  • It remains to be seen whether the joystick can fill as many roles as the Vive trackpad. While better for directional movement, it’s less satisfying for swiping gestures or clock-rotation motions.
  • The Oculus Constellation tracking system can work really well, but I still give the edge to the Vive’s Lighthouse system in both consistency and robustness. As a developer, I also prefer the floor-origin coordinate system of SteamVR.

The Capacity for So Much More

It took me too many months to discover the Oculus Touch’s killer differentiator: the capacitive touch sensors embedded in most of the controller’s inputs. These sensors answer a simple but powerful question: is the user physically resting on a particular input surface?
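
As a hypothetical sketch, coarse hand poses can be derived from nothing more than those booleans. The field names and pose rules below are my own, not the Oculus SDK’s.

```typescript
// Deriving coarse hand poses from capacitive touch states alone.
// Each boolean answers: is skin resting on this input surface?

interface CapacitiveState {
  thumbRest: boolean;    // thumb on the stick, buttons, or thumb rest
  indexTrigger: boolean; // index finger on the trigger
  gripTrigger: boolean;  // middle fingers on the grip
}

type HandPose = "open" | "fist" | "pointing" | "thumbs-up";

function classifyPose(s: CapacitiveState): HandPose {
  if (s.gripTrigger && s.thumbRest && !s.indexTrigger) return "pointing";
  if (s.gripTrigger && s.indexTrigger && !s.thumbRest) return "thumbs-up";
  if (s.gripTrigger && s.indexTrigger && s.thumbRest) return "fist";
  return "open"; // nothing (or only the grip) touched
}
```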

I knew these capacitive sensors were responsible for the social gestures in the Oculus “Toybox” demo, but Thread Studio didn’t have social functionality, so I didn’t experiment with them.

That was a mistake.

Oculus Toybox

The gestures may have originally been created for social communication, but they gave me an additional sense of immersion in my solo experiences. Being able to point and gesture in VR, even if only for my own amusement, enhanced my sense of agency in the virtual world. Surprisingly, seeing disembodied hands mimic my gestures didn’t feel strange at all.

The hand poses can also enhance usability. For example, I’ve avoided physical UI push buttons in VR because of the many ways they can be engaged accidentally. On the Touch, I can constrain these UI buttons to depress only when the user is pointing (see the sketch below). This is powerful: the user acts on the UI button the same way they do in the real world, pointing a finger and pressing the thing. No building muscle memory for a controller, no learning which input means “press,” and far fewer unintended button activations.
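
Here’s a sketch of that pose-gated button, reusing the pose idea from the earlier sketch; the types and names are illustrative, not from any real SDK.

```typescript
// A UI push button that only accepts a press while the hand is in a
// pointing pose, ignoring stray collisions from open hands or held objects.

type HandPose = "open" | "fist" | "pointing" | "thumbs-up";

interface UIButton {
  onPress: () => void;
}

function tryPressButton(
  button: UIButton,
  pose: HandPose,
  fingertipInsideButton: boolean
): boolean {
  // Ignore the collision entirely unless the user is pointing.
  if (pose !== "pointing" || !fingertipInsideButton) return false;
  button.onPress();
  return true; // the press was accepted
}

// Only the pointing hand actually presses:
tryPressButton({ onPress: () => console.log("pressed") }, "pointing", true);
```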

How much further can these poses be leveraged to create “just act natural” interactions? Can we more accurately emulate the ways we grip, point, push, pinch and tap in the real world?

Where do we go from here?

The long-term dream for hand interaction in VR is error-free tracking and haptic cues indistinguishable from reality. Such a system would require room-scale finger tracking matched to an individual’s hand size, along with the simulation of rich texture, force, and temperature cues. A solution with these capabilities is years off.

Many next-generation VR predictions include controller-free hand tracking, but for a number of reasons I think motion controllers can and should stick around. Controllers are likely to provide more accurate and reliable position data for some time to come. Physical buttons and axes will continue to provide more decisive input than interpreted hand postures. I also expect some users will balk at the cognitive dissonance of holding a virtual object without any sense of touch or weight in the hand.

I began 2016 delighted merely to have positionally tracked manipulation in VR. This year’s motion controllers let us spatially designate the “where” for our actions to take place. Controller design continues to progress, and Valve’s Dev Days prototype showed the big players are still coming up with fresh ideas.

One of Valve’s prototype designs for future controllers (image via @shawncwhiting on Twitter)

However, as the year ends, the capacitive sensors of the Oculus Touch leave me daydreaming about additional design problems. With the general “where” of the hands solved by current controllers, is the next step to better emulate the many grips and poses we use to perform more intricate interactions?

One goal is to provide richer interfaces while minimizing the concept of button mapping. New users should spend their time learning the utility of virtual objects, not how to perform basic interactions with them.

So after a great year filled with learning, I enter 2017 with a fresh batch of ideas for existing controllers and an increased desire to try new, radical controller designs. I still believe there are many fundamentals yet to be explored in VR. Finally, I hold a deeper appreciation for the nuance and complexity contained in our seemingly simple, day-to-day interactions.
