A Month Designing in VR

Julius Tarng
Published in Design at Meta
8 min read · Jun 24, 2015


In April I spent a few weeks designing for the Samsung Gear VR. I wanted to share some thoughts on why I did a hack-a-month (a term for a short trial stint on a new team) in VR and some takeaways on the design process and design patterns I explored over two short weeks.

Me + early prototype using the iPhone’s accelerometer to hack head orientation

Why VR?

I studied Industrial Design and HCI in school. One of my biggest interests was crossing the two disciplines. At the time, that meant designing everything about a digital device, so for my senior project I designed a phone: both the physical manifestation and the digital interface and ecosystem. I was most excited about the field of tangible interaction, where interactions cross between the physical and digital planes. However, real-world applications of ID + HCI are few and far between. I ended up working in Industrial Design consulting, but I left after a year, and my skillset in 3D has largely gone unused since then.

A few months ago, I visited the Oculus lab shortly after they moved onto Facebook campus. Immediately, I remembered my passion for the merging of physical and digital. Given my experience in code, it also meant that I could own the whole design and prototyping process.

At Facebook, we encourage and support employees to pursue projects that they are passionate about. I set time up with my manager to see if there was an opportunity to work on VR. Eventually he matched me with a hack-a-month working with Joyce Hsu, Sean Liu, and Joe Lifrieri. In April, I flew to California to kick it off.

Design tools: Unity vs. Quartz Composer

The first thing I learned about the existing workflow for designers at Oculus is that they go from Photoshop → CINEMA 4D → Unity → code (if they choose not to do it in Unity).

Unity is like Flash, but in 3D. There are WYSIWYG objects in the scene that you can move around, but you can also attach scripts for interactivity. Tons of games are made in it; it's cross-platform and has a great online community for answering questions.

Also you have to use this godforsaken IDE called MonoDevelop, which doesn’t properly support OS X text editing shortcuts or proper vim… AND write C# or a slightly customized version of JavaScript.

My TL;DR judgement on Unity for designing VR interfaces is: it’s not great.

Since picking up Quartz Composer (QC/Origami) at FB, one of the things I value the most in validating new interaction paradigms is an immediate feedback loop between tweaking and testing. Unity allows some live tweaking of variables while a project is playing, but those changes are all lost as soon as you stop playing. Monitor management with Unity and the Oculus DK2 also slows down the iteration cycle. A DK2 functions as a separate monitor at 1080p that requires content to be full-screened at that resolution, but Unity provides no way to immediately view full-screen unless you actually build a standalone OS X or Android app every time.

Due to the disadvantages of Unity, I decided to look into making QC support the DK2. The existing plugins online didn't support the latest SDK, so I spent a late night getting basic orientation angles into a patch and rendering two flat images (no stereoscopy). The result? It actually works pretty well. There's no sense of depth, but I decided that I could easily place static UI into Unity to test its placement in 3D space.

The plug-in I wrote, along with a sample file, is available here: https://github.com/tarngerine/oculus-dk2-quartz-composer/

Now, with QC set up to work on the DK2, I could iterate, tweak, and validate any designs super fast.

Designing for VR Ergonomics

A telltale sign that a person is new to mobile UI design is when their typography and hit targets are too small. With any new form of human input, interfaces need to adapt to be easy to use. For VR, there were a couple main points I found really impacted my designs.

Keeping content in a comfortable viewing area

One of the first design exercises I did before the hack-a-month was to use Unity to prototype some ideas I had around notifications. While you're in VR, you are completely divorced from reality, but it could be useful to get notifications and respond quickly. I didn't have a DK2 at the time, so I prototyped everything on desktop. I thought that elements could hang off the edge of the screen, and if you looked at them with a high enough head-turn velocity, they could snap to the middle of the viewing angle and activate.
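As a rough illustration, here's a minimal Unity-style C# sketch of that velocity-based snap idea. It is not the actual prototype: the component name, the thresholds, and the 30-degree "toward the notification" cone are all placeholder assumptions.

```csharp
using UnityEngine;

// Sketch only: snap a notification into view when the head turns quickly toward it.
// All names and numbers are illustrative, not from the original prototype.
public class NotificationSnap : MonoBehaviour
{
    public Transform notification;     // element hanging off the edge of the view
    public float snapVelocity = 120f;  // head yaw speed (deg/sec) treated as "intentional"
    public float snapDistance = 2f;    // how far in front of the camera to place it

    private Quaternion lastRotation;

    void Start()
    {
        lastRotation = Camera.main.transform.rotation;
    }

    void Update()
    {
        Quaternion current = Camera.main.transform.rotation;
        // Angular speed of the head this frame, in degrees per second.
        float headSpeed = Quaternion.Angle(lastRotation, current) / Time.deltaTime;
        lastRotation = current;

        // Is the user turning quickly in the notification's general direction?
        Vector3 toNotification = notification.position - Camera.main.transform.position;
        bool lookingToward = Vector3.Angle(Camera.main.transform.forward, toNotification) < 30f;

        if (headSpeed > snapVelocity && lookingToward)
        {
            // Snap the notification to the center of the view so it can be activated.
            notification.position = Camera.main.transform.position +
                                    Camera.main.transform.forward * snapDistance;
        }
    }
}
```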

Unfortunately, when I got the DK2 and tried the prototype, it completely failed: when you have a headset on, objects that hover at the edge of your Field of View (FOV) are incredibly hard to focus on. Try it now: hold your phone at the edge of your vision and try to read it without turning your head, only moving your eyes.

It turned out that when I started the hack-a-month, this was one of the first best practices the team told me: keep content within a specific frame directly in the center of your FOV.

Designing simple interactions for limited head turning

One of the biggest physical constraints is that you have a bulky headset on. I had an idea for the notifications prototype that involved detecting head-turn speed/angle as an intent to activate a notification. In practice, it was extremely awkward. The Oculus team had started developing a pattern to get around this limitation, especially when presenting large collections of content (e.g. the app store): swipes on the Gear VR trackpad. This allowed you to move content around you without turning your head much.

I found swiping to be a disorienting interaction, disconnected from the trackpad mounted on the side of the headset. In one of the last interfaces I designed in the hack-a-month, Joyce and I explored a variety of layouts quickly in Sketch/PS. We eliminated a few layouts immediately (grids felt overwhelming and implied an infinite set, whereas we wanted the set of content to feel uni-directional and completable).

I moved towards a single row of content, starting you out on the first item and allowing you to scroll horizontally. However, I didn't want to use swiping, and having page-control buttons floating in space felt cumbersome. I decided to try mapping the whole scroll width of the content to the comfortable FOV (~90 degrees). With some refinements, like paginated snapping to each object rather than a fluid scroll, it actually ended up feeling pretty good. Now you could scroll through a solid number of items by only turning your head.
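A minimal sketch of what that mapping could look like in Unity-style C# is below; it assumes the row is centered on yaw 0, and the item count and 90-degree range are placeholders rather than the values from the actual QC prototype.

```csharp
using UnityEngine;

// Sketch: map the comfortable ~90-degree yaw range onto a paginated row of items,
// so turning your head scrolls through the set. Numbers are illustrative.
public class HeadYawPager : MonoBehaviour
{
    public int itemCount = 8;
    public float comfortableRange = 90f;   // total yaw range mapped to the row, in degrees

    public int CurrentPage()
    {
        // Head yaw relative to straight ahead, wrapped to roughly [-180, 180).
        // Assumes the content row is centered on yaw 0.
        float yaw = Camera.main.transform.eulerAngles.y;
        if (yaw > 180f) yaw -= 360f;

        // Normalize yaw across the comfortable range to 0..1.
        float t = Mathf.InverseLerp(-comfortableRange / 2f, comfortableRange / 2f, yaw);

        // Snap to the nearest item index instead of scrolling fluidly.
        return Mathf.Clamp(Mathf.RoundToInt(t * (itemCount - 1)), 0, itemCount - 1);
    }
}
```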

Hover states make a comeback

With mobile, designers lost a valuable tool for progressive information display and a layer of utility: hover states on desktop and web had long been used for anything from tooltips to the OS X magnifying dock. In VR, hover is back in the form of gaze direction. Looking at an object or control can reveal information that wouldn't otherwise fit, such as a video preview for a thumbnail.
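A gaze hover state is essentially a hit test along the view direction. Here's a tiny Unity-style C# sketch of the idea, assuming a collider on the thumbnail and a hidden preview object; the names and the 10-meter ray length are placeholders.

```csharp
using UnityEngine;

// Sketch: cast a ray from the center of the view and reveal extra content
// (e.g. a video preview) only while the gaze rests on this object's collider.
public class GazeHoverPreview : MonoBehaviour
{
    public GameObject preview;   // hidden detail shown only on hover

    void Update()
    {
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        RaycastHit hit;

        bool hovering = Physics.Raycast(gaze, out hit, 10f) &&
                        hit.collider.gameObject == gameObject;

        preview.SetActive(hovering);
    }
}
```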

Another thing I tried in my first prototype was a “look-and-hold”/“long gaze” interaction to activate content without having any other form of input. This was heavily used in Kinect interfaces and always felt pretty good to me, and I thought it would be even better in VR since there’s a bit more accuracy than waving your hand in the air. Unfortunately, while other input methods allow you to look at things without interacting, “look-and-hold” makes it difficult to rest your gaze and actually read things without worrying that something will be triggered.
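Mechanically, look-and-hold is just a dwell timer on the gaze hit test. The sketch below shows one way it could work (and why resting your gaze becomes risky: any sustained look accumulates toward activation). The dwell time, ray length, and event wiring are all assumptions.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch: activate a target only after the gaze has rested on it for a while.
public class DwellActivate : MonoBehaviour
{
    public float dwellSeconds = 2f;
    public UnityEvent onActivate;

    private float gazeTime;

    void Update()
    {
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        RaycastHit hit;
        bool gazing = Physics.Raycast(gaze, out hit, 10f) &&
                      hit.collider.gameObject == gameObject;

        // Dwell time accumulates whenever the gaze stays on this object,
        // which is exactly what makes it hard to "just read" without triggering.
        gazeTime = gazing ? gazeTime + Time.deltaTime : 0f;

        if (gazeTime >= dwellSeconds)
        {
            onActivate.Invoke();
            gazeTime = 0f;
        }
    }
}
```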

When I started the first project, one of the first things I wanted to try (with some nudging from Joe) was simulating a common interaction from video games that use a cursor for menu navigation: snapping to the closest interactive area to compensate for inaccurate cursor control. In VR, the cursor is actually fairly accurate, but it still took more effort than necessary to do certain precise actions.

What I found was that increasing hit areas to handle about a 5–10 degree variance in gaze angle was a good rule of thumb. This meant that you could visually design certain things like a video progress bar to be fairly thin, yet still comfortably scrub through without ever slipping off the control.
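One way to express that rule of thumb in code is to do the hit test in angular terms rather than against the control's visual bounds: treat a control as hovered if the gaze falls within a few degrees of it, and pick the closest candidate. The following Unity-style C# sketch assumes that framing; the target list and the 7.5-degree tolerance are illustrative.

```csharp
using UnityEngine;

// Sketch: expanded angular hit areas. A control counts as hovered if the gaze
// direction is within a tolerance (somewhere in the 5-10 degree range) of it,
// and the closest such control wins.
public class AngularHitTest : MonoBehaviour
{
    public Transform[] targets;
    public float toleranceDegrees = 7.5f;

    public Transform HoveredTarget()
    {
        Transform best = null;
        float bestAngle = toleranceDegrees;

        foreach (Transform target in targets)
        {
            Vector3 toTarget = target.position - Camera.main.transform.position;
            float angle = Vector3.Angle(Camera.main.transform.forward, toTarget);

            // Keep the target closest to the gaze direction, within tolerance.
            if (angle < bestAngle)
            {
                bestAngle = angle;
                best = target;
            }
        }
        return best;
    }
}
```

This is what lets something like a video progress bar stay visually thin while still being comfortable to scrub: the drawn control and the angular hit area are decoupled.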

Another thing I discovered while playing with cursor animations in QC was that hiding the cursor (or animating it so that it looks like it snaps to the hover state of a control) actually reduced how much you think about aligning the cursor with a particular control.
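As a small sketch of that treatment (in Unity-style C#, not the QC patch): while a control is hovered, pull the visible cursor toward the control and hide it, so you stop consciously steering the dot. The component and its fields are placeholders.

```csharp
using UnityEngine;

// Sketch: while a control is hovered, the cursor eases toward it and disappears;
// otherwise it follows the raw gaze point.
public class SnappingCursor : MonoBehaviour
{
    public Transform cursor;            // world-space cursor visual (e.g. a small quad)
    public Renderer cursorRenderer;
    public float snapSpeed = 10f;

    public void UpdateCursor(Vector3 gazePoint, Transform hovered)
    {
        Vector3 goal = hovered != null ? hovered.position : gazePoint;
        cursor.position = Vector3.Lerp(cursor.position, goal, snapSpeed * Time.deltaTime);

        // Hide the cursor while it is "attached" to a control.
        cursorRenderer.enabled = hovered == null;
    }
}
```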

Closing thoughts on VR design

At the end of the hack-a-month, I ended up with a pretty solid set of prototypes and new interactions to help inspire the team I was working with. All of the prototypes were done in fairly high fidelity in QC, in a relatively short amount of time (the hack-a-month was actually only about 2.5 weeks). Unfortunately I won’t be around through actual implementation, but I’m excited to see how the designs pan out in actual usage.

What’s next: tools

One thing has become increasingly clear over the past several years as a Product Designer: I am extremely energized by the interplay between design and code. Throughout my design career, my favorite moments have been the times when I had to get past a technical obstacle to verify a design direction, whether it was learning Objective-C to help hit a deadline or futzing with live data in JS.

Working in VR further verified that while I still enjoyed thinking about product and designing new interactions in 3D space, my favorite thing has been building tools to help validate design decisions. That’s why I’m kicking off a serious, broader look at our Design Tools at Facebook along with Brandon Walkin, who’s been leading the development of Origami. If you like working on tools and have living in NYC on your bucket list, get in touch!
