Build for VR, in VR.
Sandbox VR is a world-building solution that enables anyone to build for VR, in VR. Using imported models, textures, lighting rigs, and more, users can build out elaborate scenes from the ground up. Check out a demo we released in early 2016:
Sandbox VR can be used by individuals or teams involved with filmmaking, game development, urban planning, and much more. In order to explore possible use cases, we recorded a series of demos targeted towards artists, game developers, and architects. Here’s a more recent demo where we explored prototyping a Wild West scene (for a film or game) from scratch:
Over the course of the project’s development, we thought a lot about the pros and cons of current 3D modeling / editing software, interaction patterns that seemed to work in other VR apps, and what we wanted Sandbox to accomplish. We’d like to use this document to share more about our process.
When developing for room-scale VR, it’s hard to understand what the scene will look like until you see it through the HMD; for this reason, we had no idea how small / large an object would be relative to the player until we tried it ourselves. It was terribly inefficient to go back and forth between our computers and HMDs to make small changes to basic transformations (position, rotation, scale) in our scene, and that’s what led to the development of Sandbox VR.
We wanted to make it as easy as possible for anyone to build a scene from scratch, all while they’re in VR. What if you could start with a blank canvas and build out an entire world without taking off your HMD?
In order to make Sandbox VR usable for someone who’s new to VR, we thought a lot about how existing applications (VR and non-VR) approach UX problems that we knew we’d run into. Here’s how we thought about the interaction patterns seen in the app:
- Around the time we started developing Sandbox VR for the Vive, Tilt Brush was growing in popularity and introducing features that we really liked. At the MxR Studio at USC, we usually demo Tilt Brush to people who are new to VR, and most of them figure out how to select different tools and brushes with little or no prior instruction. Though we experimented with other options, we decided to implement a similar layout for the main menu in Sandbox VR.
- Our tools are categorized into seven main panels that are housed on the left controller: Shapes, Actions, Import, Material, Lighting, Environment, and Terrain. Each panel has a set of buttons that users can access with the trigger on the right-hand controller.
- Inspired by Google’s Material Design guidelines, the buttons have a “magnetic” feel to them, and they draw closer to the user when hovered over / selected.
Basic Selection and Manipulation
- In order to standardize the process of changing the position, scale, and rotation of an object, we incorporated a selection box that appears around any selected item in the scene, featuring controls for scaling and rotation. Objects can be selected by hitting the right-hand trigger while the controller is within the object’s boundaries. Users can adjust an object’s position by clicking and holding on the object while moving the controller. The selection box is generated from the object’s bounds (plus a few units of buffer), giving users more to grab onto when moving an item.
- In order to select multiple items, users can hold down the right trigger and drag over desired assets. From there, the models can be grouped, deleted, or duplicated.
- While the reasoning behind the scale markers was fairly straightforward (most people were used to seeing similar controls in Microsoft Office, Adobe CC, etc.), the rotation markers that appear in the recent demos and documentation were not present in our first few iterations. Instead, we had the object’s rotation correspond to the rotation of the controller (just like the object’s position corresponds to the position of the controller). However, after seeing that playtesters were having trouble rotating and placing their assets accurately, we incorporated curved rotation markers on the edges of the selection box. Now, users can easily adjust the rotation of an object one axis at a time.
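The selection-box sizing and hit-testing described above can be sketched as follows. This is a minimal Python stand-in for the engine code (the project itself runs in Unity), and the name `buffer_units` and its default value are illustrative assumptions, not taken from the actual project:

```python
def selection_box(bounds_min, bounds_max, buffer_units=0.05):
    """Expand an object's axis-aligned bounds by a small buffer so users
    have more to grab onto when moving an item."""
    lo = [c - buffer_units for c in bounds_min]
    hi = [c + buffer_units for c in bounds_max]
    return lo, hi

def controller_inside(box, point):
    """True when the controller tip is inside the (expanded) box, i.e.
    pulling the right trigger here selects the object."""
    lo, hi = box
    return all(l <= p <= h for l, p, h in zip(lo, point, hi))
```

Because the box is slightly larger than the model, a controller hovering just outside a unit cube, say at `[0.5, 1.02, 0.5]`, still counts as a hit.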
Free Scaling vs. Proportional Scaling
- Some key observations from early tests with Sandbox: A — Users are more likely to distort / mess around with the scaling of an object when it’s a primitive (changing the default cube to a wider rectangular prism), less likely to do so with imported assets. B — With select imported assets, users tend to adjust the scale on only one axis (e.g. making a tree slightly taller or a mountain a bit wider).
- Based on these observations, we decided to introduce a few variations of the default selection box we had in place for all imported objects: When primitive models (sphere, cube, cone, cylinder) are loaded into the scene, the model is housed in a free-scaling selection box by default. Because new users tend to start by importing primitives, the free-scaling option encourages exploration (see illustration on the top left of Fig. 4)
- By holding down the left trigger when a primitive object is selected, users can toggle the fixed-scaling selection box to scale their objects up or down proportionally (see illustration on the bottom left of Fig. 4)
- When custom assets are loaded into the scene, they are housed in fixed-scaling selection boxes in order to preserve the dimensions of the original model (see illustration on the bottom left of Fig. 4)
- By holding down the left trigger when a custom asset is selected, users can toggle additional scale knobs that facilitate scaling an object up or down on a selected axis (see illustration on the top right of Fig. 4)
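The difference between the two scaling modes boils down to whether one factor is applied to a single axis or to all three. A minimal sketch, again in Python as a stand-in for the engine code (function names are illustrative):

```python
def scale_free(scale, axis, factor):
    """Free scaling: stretch a single axis, distorting proportions.
    This is the default for primitives, which invites exploration."""
    s = list(scale)
    s[axis] *= factor
    return s

def scale_proportional(scale, factor):
    """Fixed scaling: apply one factor to every axis, preserving the
    proportions of the original model (default for imported assets)."""
    return [c * factor for c in scale]
```

Dragging a single scale knob on a custom asset maps to `scale_free` on that knob's axis; grabbing a corner in fixed-scaling mode maps to `scale_proportional`.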
Lighting
- Though our lighting solution is fairly standard, our main issue was representing the lights well inside the editor. In Maya and Unity, lights are represented in two dimensions, setting them apart from other 3D objects in the scene (see Fig. 5). While we initially implemented a similar solution using Transform.LookAt to rotate a sprite to face the user at all times, we found that users had no understanding of where the sprite was in 3D space, making it difficult to select / manipulate.
- The objective was to design lighting rigs that users could easily distinguish from other 3D objects in the scene while maintaining the accessibility that comes with regular assets. Our solution was to model lights that were somewhere between 2D and 3D, made with extremely thin polygons but built in a way that allows users to see the light from any angle (see Fig. 6)
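For reference, the billboard behavior we moved away from amounts to recomputing the sprite's yaw every frame so it faces the user, which is what Transform.LookAt was doing for us. A rough Python equivalent of that rotation math (a sketch, not the project's code):

```python
import math

def billboard_yaw(sprite_pos, user_pos):
    """Yaw, in degrees, that points a sprite's forward (+z) axis at the
    user; recomputed every frame, so the sprite never shows its edge.
    This is why users couldn't judge the sprite's position in 3D space."""
    dx = user_pos[0] - sprite_pos[0]
    dz = user_pos[2] - sprite_pos[2]
    return math.degrees(math.atan2(dx, dz))
```

The thin-polygon light models sidestep this entirely: they sit still in the world like any other asset, so depth cues (parallax, perspective) work normally.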
New Features & Interactions
- In addition to the basic features that allow users to import, manipulate, and build with custom models, we’ve added other interactions that we think will facilitate the world-building process:
- World Scaling — by holding down the grip buttons on both controllers simultaneously, users can grab the space around them to move around, or zoom in / out by increasing / decreasing the distance between the two controllers. Think of it as a “pinch-to-zoom” for VR. This gives users more control over their movement in the space, and lets them navigate the world while walking around or sitting down (see Fig. 8)
- Object Snapping — when users move an object in the scene, it automatically snaps to positions relative to objects around it or to the ground. In addition, if a user holds down the left trigger while rotating an object, the object will rotate in 15-degree increments rather than rotating freely along an axis. This solution is especially useful if a user wants multiple objects to have the same orientation.
- Terrain Sculpting — rather than importing models of hills and mountains, we’ve created an entire toolset that allows users to sculpt terrain by extruding / leveling the mesh. Users can easily shift between brush sizes by swiping right or left on the trackpad of the left controller, making it easier to sculpt hills and mountains with varied widths, depths, and heights.
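The core of the world-scaling interaction is a ratio: the current controller separation divided by the separation when both grips were first pressed. One frame of that update, sketched in Python (names and the world-origin representation are illustrative assumptions):

```python
import math

def world_scale_step(world_scale, world_origin, grab_dist, cur_left, cur_right):
    """One frame of the "pinch-to-zoom" world grab: the ratio of the
    current controller separation to the separation at grab time gives a
    zoom factor, applied about the controllers' midpoint."""
    cur_dist = math.dist(cur_left, cur_right)
    factor = cur_dist / grab_dist
    midpoint = [(a + b) / 2 for a, b in zip(cur_left, cur_right)]
    new_scale = world_scale * factor
    # Keep the point under the midpoint fixed while the world rescales,
    # so the zoom feels anchored between the user's hands.
    new_origin = [m + (o - m) * factor for o, m in zip(world_origin, midpoint)]
    return new_scale, new_origin
```

Doubling the distance between the controllers doubles the world scale; because the transform is anchored at the midpoint, the scene appears to grow or shrink around the user's hands rather than around a fixed world origin.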
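The snapping behaviors are simple quantization. A minimal sketch (Python stand-in; the ground-snap here assumes the object's pivot is at its center, which is an illustrative simplification):

```python
def snap_rotation(angle_deg, increment=15.0):
    """While the left trigger is held, rotation lands on the nearest
    15-degree step instead of rotating freely along the axis."""
    return round(angle_deg / increment) * increment

def snap_to_ground(position, object_height, ground_y=0.0):
    """Drop an object so its base rests on the ground plane."""
    x, _, z = position
    return [x, ground_y + object_height / 2, z]
```

Because every snapped object lands on the same 15-degree grid, giving several objects an identical orientation takes no precision at all.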
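Terrain sculpting of the kind described above is typically a brush applied to a heightmap. A small sketch under that assumption (the linear falloff and function names are illustrative, not the project's actual implementation):

```python
import math

def sculpt(heightmap, cx, cz, radius, strength):
    """Raise terrain under a circular brush; the effect falls off
    linearly toward the brush's edge. Swiping the left trackpad would
    map to changing `radius`; a negative `strength` would lower terrain."""
    for x in range(len(heightmap)):
        for z in range(len(heightmap[0])):
            d = math.hypot(x - cx, z - cz)
            if d < radius:
                heightmap[x][z] += strength * (1 - d / radius)
    return heightmap
```

Repeated passes with a wide, weak brush give rolling hills; a narrow, strong brush produces sharp peaks, which is why quick brush-size switching matters.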
That’s all for now! Thanks for reading.