Simulating Weight in VR

Feels Like Studio
10 min read · Jul 20, 2016


A quick exploration of methods for conveying “heaviness” in virtual reality

Throughout the past year, we’ve loved working in virtual reality — from in-house projects to our work with Google Daydream for IO 2016. We’re firm believers that the future of VR is bright and ripe with opportunities for exploration.

In our past VR endeavors, we created fully contained experiences to help us better understand the workflow and production requirements of the medium. But this time around, we wanted to dive deeper into some of the nitty-gritty interaction principles that drive experiences in VR. We were particularly inspired by the Daydream team’s approach toward a more utility-driven VR platform, and the interactions that are going to be required to make that possible. So we decided to start small by isolating a specific interaction to explore.

Choosing a Focus: Visual vs. Physical Interaction Models

Right now, most utility VR interfaces are anchored by simple 2D planes floating in space. But on the gaming side of things, there’s been a clear shift towards less visual, more physical interactions. See Cosmic Trip, which brilliantly uses physical buttons to navigate its item menu; or Job Simulator, which almost entirely eliminates the old point-and-click interaction paradigm.

Cosmic Trip (Left) Job Simulator (Right)

We brainstormed around this premise and found ourselves drawn to the idea of artificially simulating weight when interacting with objects in VR. Specifically, could we throttle the speed of a user’s actions when picking up objects to make things feel heavier or lighter? Heavy objects would need to be picked up slowly, while light objects could be picked up normally. Not a new idea, and perhaps not directly applicable to the context of utility, but one that we felt would be fun to explore.

We hoped to come out of this exercise with findings that could apply to all platforms with motion controls: from Daydream’s Wiimote-style controller, to the advanced room-tracking controllers of the Vive and Oculus. With this in mind, we decided to design for the most advanced platform (until Oculus Touch launches, that’s the Vive), and later explore ways to simplify for Daydream, and eventually even gaze-based controls on the Gear VR and Cardboard.

Motion Controllers

As far as production software goes, we were torn between our familiarity with Unity and the rendering potential of Unreal. We stuck with Unity for now, and hope to dig into Unreal in future explorations.

Defining Goals

Our previous VR projects were rather off-the-cuff, and left us little to reuse going forward. So we entered this exploration with some high-level goals. If nothing else, we’d have some measure of success if we at least addressed the following:

  • to develop our collaborative workflow within Unity and get more of the team comfortable with that workflow.
  • to build a basic “boilerplate” environment for us to use internally, so future VR experiments could get up and running quickly.

The Process

With our direction and goals decided, we assembled our team — three 3D/motion artists, two designers, and a creative technologist. We used Git for collaborating and sharing assets across machines. Scenes could be “checked out” and edited by one person at a time, while others could work on prefabs that would be merged into the master scene. At such a small scale, this approach worked for us, but we’re actively exploring ways this could scale to larger teams and projects without getting messy.

Step 1: Becoming Pick Up Artists

You can’t explore weight without first nailing down the basic physics of how objects can be picked up and moved around, and we quickly found that these concepts were one and the same. Like most things in VR, there isn’t yet a consensus in the industry on the “correct” way to handle this behavior. So we explored a bunch of options, which naturally grouped themselves into two categories: direct links (simple parenting, creating a fixed joint), and loose links (adjusting velocity or using forces to attract objects towards the controller). The type of link defined the methods we employed to simulate weight.

Direct Link

In a direct link, objects match the controller’s motion one-to-one. If the controller moves too quickly for the object’s mass, the link is broken and the object falls to the ground.
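
Here’s a minimal sketch of how a direct link might be wired up in Unity. The breakForce value is an illustrative stand-in for whatever tuning a real experience would need:

```csharp
using UnityEngine;

// Direct link: attach the grabbed object to the controller with a
// FixedJoint. Unity destroys the joint once the force needed to hold the
// bodies together exceeds breakForce, so with a constant breakForce a
// heavier object tolerates proportionally less acceleration and drops
// when the controller moves too quickly for its mass.
public class DirectLinkGrabber : MonoBehaviour
{
    public float breakForce = 2000f; // illustrative tuning value

    FixedJoint joint;

    public void Grab(Rigidbody target)
    {
        joint = gameObject.AddComponent<FixedJoint>();
        joint.connectedBody = target;
        joint.breakForce = breakForce;
    }

    public void Release()
    {
        if (joint != null) Destroy(joint);
    }

    // Unity calls this on the joint's GameObject when the joint breaks.
    void OnJointBreak(float appliedForce)
    {
        joint = null; // the object has already fallen away
    }
}
```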

Loose Link

For loose links, objects have different strengths of attraction towards the controller depending on their weight. Light objects react quickly, with the feeling of a direct link. Heavier objects will lag behind the controller and require more effort to lift. We didn’t expect this to work well — the object breaks 1:1 tracking, which is a pretty big no-no in VR — but it surprisingly felt very promising (a rough sketch follows the list below). We chalk this up to two things:

  1. We still show the controller (which maintains 1:1 tracking) while lifting, preserving the sense that the user is directly controlling the environment.
  2. Once the object reaches the controller, we snap it and form a direct link. We added this mechanic after finding that the feeling of weight was most effective during the “picking up” action, and afterwards only served as a distraction to the user.
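
To make the mechanic concrete, here’s a rough sketch of a loose link in Unity; the base pull speed, the inverse-mass scaling, and the snap distance are all illustrative assumptions rather than the exact tuning we landed on:

```csharp
using UnityEngine;

// Loose link: each physics step, steer the held object's velocity toward
// the controller. The pull speed falls off with mass, so heavy objects
// lag behind while light ones track almost one-to-one. Once the object
// reaches the controller, snap it and hand off to a direct link, since
// the weight effect matters most during the pick-up itself.
public class LooseLinkGrabber : MonoBehaviour
{
    public float basePullSpeed = 8f;   // illustrative: m/s for a 1 kg object
    public float snapDistance = 0.05f; // illustrative: 5 cm snap radius

    Rigidbody held;

    public void Grab(Rigidbody target) { held = target; }

    void FixedUpdate()
    {
        if (held == null) return;

        Vector3 toController = transform.position - held.position;
        if (toController.magnitude < snapDistance)
        {
            // Close enough: zero out the lag and switch to a direct link
            // (e.g. a FixedJoint, as in the sketch above).
            held.velocity = Vector3.zero;
            held.position = transform.position;
            held = null;
            return;
        }

        // Heavier objects are pulled more slowly toward the controller.
        held.velocity = toController.normalized * (basePullSpeed / held.mass);
    }
}
```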

Step 2: Exploring Other Sensory Cues

In addition to the mechanics of lifting and grabbing, we felt it was important to explore other forms of feedback that could help convey an object’s weight. This manifested in two forms: visual and haptic feedback. In both cases, we tried to reinforce the amount of “strain” the user would feel as their controller approaches the thresholds of tension for a given object.

Visual feedback depends on the type of link. For direct links, we explored a variety of gauges and meters that could indicate the controller speed. We found simpler to be better, and settled on a basic green-to-red meter, attached to the controller, that fills up as the speed approaches an object’s threshold. With loose links, it was more effective to visualize the connection between the controller and object as a “string,” which shifts from green to red as the tension increases.

Gauge
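
The meter logic itself boils down to mapping controller speed onto a 0-to-1 “strain” value. Here’s a sketch; the filled UI Image and the speed threshold are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Drive a simple green-to-red meter from controller speed. "Strain" is
// the controller's current speed as a fraction of the speed at which a
// held object's link would break; the same 0..1 value could color the
// tension "string" used for loose links.
public class StrainMeter : MonoBehaviour
{
    public Image fill;                // a Filled-type UI Image on the controller
    public float speedThreshold = 2f; // illustrative: break speed in m/s

    Vector3 lastPosition;

    void Start() { lastPosition = transform.position; }

    void Update()
    {
        float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = transform.position;

        float strain = Mathf.Clamp01(speed / speedThreshold);
        fill.fillAmount = strain;
        fill.color = Color.Lerp(Color.green, Color.red, strain);
    }
}
```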

For haptic feedback, we took the same logic from the visual indicator and applied it to the vibration of the controller. As the user approaches the threshold of tension, the controller vibrates with more intensity. It’s straightforward, but effective.
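
Against the original SteamVR Unity plugin, that might look like the sketch below; the 3999-microsecond cap is the commonly cited per-call maximum for TriggerHapticPulse, and the linear strain scaling is our own assumption:

```csharp
using UnityEngine;

// Scale haptic pulse length with the same 0..1 strain value that drives
// the visual meter. Sketched against the 2016-era SteamVR Unity plugin;
// other SDKs expose equivalent haptics calls.
public class StrainHaptics : MonoBehaviour
{
    SteamVR_TrackedObject trackedObject;

    void Awake() { trackedObject = GetComponent<SteamVR_TrackedObject>(); }

    // Call once per frame with strain in 0..1 (speed or tension / threshold).
    public void Pulse(float strain)
    {
        if (strain <= 0f) return;
        var device = SteamVR_Controller.Input((int)trackedObject.index);
        // Longer pulses feel stronger; ~3999 microseconds is the practical
        // per-call maximum.
        device.TriggerHapticPulse((ushort)(strain * 3999f));
    }
}
```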

Step 3: Testing and Tweaking and Testing

With all of these factors, we had a seemingly endless list of permutations to explore. So to test and compare, we started working with multiple implementations in the same environment. We tried things out, adjusted, and tried again.

Eventually, we whittled down to the methods that we felt best demonstrated the impact of all the various factors on the pick-up behavior. Most of the primary differences weren’t even based on the physics — instead, they focused on the impact of secondary visual and haptic feedback on the physical interaction.

  1. Parenting
    Object attaches to controller, but will clip through static colliders. This is the simplest form of picking things up, with no weight simulation.
  2. Fixed Joint with haptic feedback
    Object attaches to controller, repositions on collision, and drops if the controller moves too quickly for its mass. Controller rumbles more as it approaches the speed threshold.
  3. Fixed Joint with visual feedback
    Object attaches to controller, repositions on collision, and drops if the controller moves too quickly for its mass. A meter fills as the controller approaches the speed threshold.
  4. Fixed Joint with visual and haptic feedback
    Object attaches to controller, repositions on collision, and drops if the controller moves too quickly for its mass. A meter fills and the controller rumbles as it approaches the speed threshold.
  5. Force
    The object is attracted to the controller using forces, so its velocity increases as it approaches the controller.
  6. Velocity (connection breaks under tension)
    The object’s velocity is adjusted to draw it towards the controller. If the tension between the controller and the object exceeds the threshold for its mass, the object drops (see the sketch after this list).
  7. Velocity (connection never breaks)
    The object’s velocity is adjusted to draw it towards the controller. The object will never drop, no matter the amount of tension.
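
To illustrate method 6, here’s a compact sketch of the tension check. We fold the mass dependence into the pull speed here (a slower pull means heavier objects lag further and hit the limit sooner); a per-mass tension threshold would work just as well, and maxTension is an illustrative value:

```csharp
using UnityEngine;

// Method 6 sketched: velocity-based attraction where the link breaks once
// the controller-to-object separation ("tension") exceeds a limit,
// dropping the object and forcing the user to slow down.
public class TensionBreakLink : MonoBehaviour
{
    public float basePullSpeed = 8f; // illustrative: m/s for a 1 kg object
    public float maxTension = 0.3f;  // illustrative: allowed stretch in meters

    Rigidbody held;

    public void Grab(Rigidbody target) { held = target; }

    void FixedUpdate()
    {
        if (held == null) return;

        Vector3 toController = transform.position - held.position;
        if (toController.magnitude > maxTension)
        {
            held = null; // tension exceeded: the object drops
            return;
        }

        // Heavier objects are pulled more slowly, so they build tension
        // faster under the same controller motion.
        held.velocity = toController.normalized * (basePullSpeed / held.mass);
    }
}
```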

At this point, we wanted to get input from people outside the team on what factors were helping, hurting, or unnecessary.

We created two scenes for user testing. In both, we arranged seven stations around the user. Our first scene had a single object per station, so testers could directly compare how it felt to pick something up using each method. As they worked through the stations, we asked a series of questions: Which object felt heaviest? Which method would you prefer if you had to pick up many objects? Which method felt the most natural?

Our second scene introduced multiple objects with differing weights at each station. This way, users could see how well a single method conveyed variation in weight between heavy and light objects. Then they could compare that with other methods in the same environment.

Takeaways

While we saw a few high-level trends in our tests, no single implementation was a consensus top pick. This wasn’t unexpected; going into this exercise, we didn’t think there would ever be a “right” answer. But we were able to draw plenty of conclusions from the input we received.

Loose links work best when the connection can be broken

Our testers reported that this mechanic most naturally conveyed a feeling of heaviness. Without a strict cutoff for tension, objects can start to feel floaty and unresponsive to the user’s movement. Breaking the link helps ground the object, while forcing the user to consciously alter their behavior to account for the object’s perceived “weight.”

But unless it’s vital to the interaction, weight is an annoyance

Our testers enjoyed interacting with loose links, and many thought that the mechanic was fun and playful. When used sparingly, to purposefully draw attention to the weight of something, we think loose links could be a useful system to further explore.

But when we asked our testers what method they would prefer if they had to pick up many objects, they overwhelmingly chose the simplest direct link. When efficiency is the priority, people don’t want to be bogged down by “forced” reminders of their human shortcomings.

This might inform our future projects in a few ways. If precisely mimicking reality is important, we might simply avoid letting the user attempt to pick up something extremely large or heavy. Or we could choose to embrace it. Arguably the most exciting aspect of virtual reality is that it allows us to see and do things that aren’t possible in real life.

Additional feedback is welcome — and don’t forget sound!

Testers gravitated towards more feedback over less, with one exception: visual feedback became a distraction when testers couldn’t clearly connect it to their own actions. Otherwise, the inclusion of visuals and haptics was useful.

And just in case it hasn’t been said enough, consider how sound can reinforce the physical aspects of an environment. When we chose to add sound to this exercise, we didn’t think much of it. But it turned out to have a huge impact on how our testers perceived weight. The thuds of heavy objects hitting the ground or rubbing against each other reinforced the differences in mass.
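
As a simple sketch of what we mean: scale an impact sound’s volume by collision energy, so heavy objects landing fast produce louder thuds. The impulse-to-volume constant is an illustrative assumption:

```csharp
using UnityEngine;

// Play an impact sound whose volume scales with how hard the object hit.
[RequireComponent(typeof(AudioSource), typeof(Rigidbody))]
public class ImpactSound : MonoBehaviour
{
    public AudioClip thud;
    public float volumePerImpulse = 0.05f; // illustrative scaling constant

    AudioSource source;
    Rigidbody body;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        body = GetComponent<Rigidbody>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // Relative speed times mass approximates the impact impulse, so a
        // heavy object at the same speed lands with a louder thud.
        float impulse = collision.relativeVelocity.magnitude * body.mass;
        source.PlayOneShot(thud, Mathf.Clamp01(impulse * volumePerImpulse));
    }
}
```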

“Basic” interactions in VR are more complicated than they seem

Planning for physics-based interaction models was much trickier for us than traditional screen-based UX, since there are so many factors influencing the final result. There’s an inherent loss of control when you design for a framework in which interactions occur, rather than the interactions themselves. The game industry has been dealing with this for ages, and the rest of us are just catching up.

Final Thoughts

We found a lot of value in dissecting interactions around weight simulation. We know time is always a limiting factor, and breaking down every single element of every interaction would be an impossible task — but identifying and breaking down recurring interactions is a way to both optimize them and prepare you for the more specific interactions you’ll inevitably face in the future. And what works in theory might not work in practice.

We also want to highlight the importance of side goals for these exercises (for us, that was greater Unity familiarity and a basic boilerplate environment). It’s valuable to have something concrete and attainable to work towards, to offset the unknowns of organic exploration.

So what’s our recommendation? We’ve found that so much is dependent on the experience itself. It’s likely that in many cases, the restraints we tried to impose by simulating weight would be unwelcome, as they slow the user down and draw their focus to the interaction itself rather than the action they’re trying to perform. So unless you have reason to draw specific attention to an object’s weight, it makes more sense to keep things simple and use direct links.

That said, we think there is something promising with loose links in which the connection breaks when the tension is too high. There’s plenty of room to fine-tune and explore this, and to consider other factors: irregular shapes or the starting positions of objects, or perhaps using two hands to pick something up. This is just a start, and we’re excited at the potential implementations.

Tell us what you think

We’re interested in getting input / advice from the community on this mechanic, so we’ve open sourced our Unity project and released it as an executable. You can find both on GitHub.

Have you found an alternative solution, or seen it done well in an existing experience? Comment and let us know!
