How to Prototype for VR without Coding

Chris Locke
5 min read · May 16, 2017

--

Using the Adobe Suite, Cinema 4D, and very.gd

I believe that many interaction designers may not be ready to move into 3D environments, so the design community should share as much information and advice as possible. For this project I only needed to communicate the idea and the technology to the CTO and Creative Director of Helios Interactive, which meant I spent most of my time tweaking files rather than designing and developing every scene I imagined.

This article is part post-mortem, part collection of prototyping insights for visual and product designers. It covers designing for VR without writing any code.

I’m Chris Locke, a New Media Design student at Rochester Institute of Technology, and I did a class assignment with Helios based on shopping at Target in VR. I chose Target because it’s a bold and familiar brand that wants to stay ahead of the curve; they’re also redesigning their stores so there’s a great opportunity to embrace VR shopping. My full client presentation is here.

Technology, eyesight and focus

The best way to design for a VR device is to put on a headset.

It’s practically impossible to understand the limitations of the hardware without trying it first. Wearing a head-mounted display limits our eyesight in a few ways: focus is centered, peripheral vision is blurred, and eye movement is reduced.

Courtesy of Jan Mellstrom and Pedro Lasta from Unsplash.com and Naresh Kumar from Dribbble.com

I used the Google Daydream for this project, and the only problem I noticed was that the screen is blurrier than you’d expect. This limits the usable screen area, so you have to be wary of placing UI elements in a blurry or uncomfortable area.

Dimensions for AI and C4D

A 2:1 ratio for the canvas, with lots of iterations

In Adobe Illustrator I started with the very.gd template (which is also available for Photoshop and Sketch). This template has recommended sizes for UI as well as markers for users’ comfort zones, periphery, etc.

5400 x 1008 pixels. I normally resize the background so there’s more or less distance between the viewer and the UI.

I only used the template as a starting point rather than exact dimensions. Every user is different, so one person’s comfort zone may be bigger or smaller than somebody else’s. To account for these differing perspectives, I uploaded exports to very.gd and had classmates try different scenes, each with UI containers of different sizes.
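If you want to sanity-check those container sizes in degrees rather than raw pixels, the math is simple enough to script. Here's a minimal sketch, assuming the template's full width wraps the complete 360° around the viewer (my assumption, not something very.gd specifies):

```python
# Rough conversion from template pixels to degrees of the viewer's field of view.
# Assumes the full 5400 px template width maps to the complete 360 degrees
# around the viewer.
TEMPLATE_WIDTH_PX = 5400
FULL_CIRCLE_DEG = 360.0

def px_to_degrees(width_px: float) -> float:
    """Horizontal size of a UI container, in degrees of the user's view."""
    return width_px * FULL_CIRCLE_DEG / TEMPLATE_WIDTH_PX

# Example: a 900 px wide panel covers about 60 degrees of the view.
print(px_to_degrees(900))
```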

Screen recording of my final very.gd scene with an AE Welcome screen animation.

They told me which panels they liked, and I used those to develop a grid system. This part of the process is very similar to designing digital products: import sketches, choose fonts, develop a grid, add content, animate…

Welcome screen (done in Illustrator and After Effects)

To make a proper environment for my very.gd scene, I used Cinema 4D. I created a floor with a tile material (from the Content Browser) and a physical sky. Then I created planes floating in the air, and some cylinders that acted as product stands.
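None of this requires scripting (that’s the point of the workflow), but if you’re curious, the same primitives can be blocked out from Cinema 4D’s Script Manager. A minimal sketch; the positions and sizes are placeholders, and the tile material and Physical Sky are still added by hand:

```python
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    floor = c4d.BaseObject(c4d.Ofloor)     # infinite floor; apply the tile material manually
    panel = c4d.BaseObject(c4d.Oplane)     # a floating plane for UI
    panel.SetAbsPos(c4d.Vector(0, 150, 400))
    stand = c4d.BaseObject(c4d.Ocylinder)  # a cylinder acting as a product stand
    stand[c4d.PRIM_CYLINDER_RADIUS] = 40
    stand[c4d.PRIM_CYLINDER_HEIGHT] = 120

    for obj in (floor, panel, stand):
        doc.InsertObject(obj)
    c4d.EventAdd()  # refresh the viewport

if __name__ == '__main__':
    main()
```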

An HDTV 1080 29.97 preset render (with Physical renderer)

To make all of this more believable, I created a quick rendering of a product from Target: a lip crayon. I chose it because it was on the front page and it’s an easy product to model and texture.

Converting from MP4 to GIF sometimes creates banding, like in the BG lighting…
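If the banding bothers you, a two-pass, palette-based conversion usually helps. Here’s a minimal sketch that shells out to ffmpeg (assuming ffmpeg is installed; the file names are placeholders):

```python
import subprocess

# Two-pass MP4 -> GIF: build an optimized palette first, then map the video
# through it. This avoids most of the banding a one-pass conversion produces.
SRC, PALETTE, OUT = "render.mp4", "palette.png", "render.gif"
FILTERS = "fps=15,scale=640:-1:flags=lanczos"

subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-vf", f"{FILTERS},palettegen", PALETTE], check=True)
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-i", PALETTE,
                "-lavfi", f"{FILTERS} [v]; [v][1:v] paletteuse", OUT], check=True)
```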

I used this tutorial to create a 360 export of the whole scene. All you do is create a small sphere (the “camera”), add a Compositing tag so it doesn’t show up in the render, add a reflection material, and add a Bake Texture tag. Once you bake the texture (with only Reflection checked), you bring the PNG into Photoshop to flip it horizontally, then export again. I recommend baking the texture at a 2:1 aspect ratio so there are no seams; I chose a 10,000 x 5,000 pixel PNG so you can see lots of sharp detail.
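The Photoshop step is just a horizontal flip, so if you prefer, a couple of lines of Python with Pillow do the same thing (the file names are placeholders):

```python
from PIL import Image

# Flip the baked equirectangular texture horizontally (the same step done in
# Photoshop) and check it's still a 2:1 panorama before exporting.
img = Image.open("baked_360.png")
assert img.width == 2 * img.height, "bake at a 2:1 ratio to avoid seams"
img.transpose(Image.FLIP_LEFT_RIGHT).save("baked_360_flipped.png")
```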

The UI is a bit dark because of the physical lighting, but it’s brighter in the VR viewer.

This took me nearly 100 iterations! My main pitfalls were text sizes and the UI’s distance from the viewer. There are templates and recommendations from various developers for text sizes and such, but I believe the best way to make decisions is to iterate as much as possible.
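A quick way to reason about those two variables together is visual angle, since apparent size depends on both text height and distance from the viewer. A minimal sketch of the math (the example numbers are placeholders):

```python
import math

def visual_angle_deg(height: float, distance: float) -> float:
    """Apparent (angular) size of something of a given height seen from a
    given distance, in degrees. Any units work as long as they match."""
    return math.degrees(2 * math.atan(height / (2 * distance)))

# Example: 5 cm tall text at 2 m away and 2.5 cm tall text at 1 m away
# both subtend roughly 1.4 degrees -- they look the same size to the viewer.
print(visual_angle_deg(0.05, 2.0), visual_angle_deg(0.025, 1.0))
```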

Retail versus Online

The largest benefits of using VR are accessibility and leveraging the on-demand economy. Lots of people don’t want to spend their whole day shopping, especially if they need help moving around large stores or are sensitive to large crowds and busy parking lots. Other people shop only online because they can find good deals and act quickly with little to no effort.

By using VR, you keep the luxury of a boutique environment and combine it with the comfort of online shopping.

I did some basic research on online shopping and Target’s new stores. The store will be divided into two parts: a quick-stop essentials section and a slower boutique area with lots of different products.

An Information Architecture slide from my client presentation (link in top paragraph)

I believe that users would like to start with a Welcome screen that shows sales and Wishlist items, then choose to go to either section; once they’re in, they can move across the store and browse at their leisure. If they see a product they don’t like, they can remove it so the store personalizes each section.

Top: PS traced sketches. Bottom: AI wireframe, comp and a C4D render.

If the user looks towards their feet, a Map pops up and shows them where other products are. If they’re still having trouble, they can use the Phone (based on the existing phones in Target stores) to call for personal assistance.
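When it comes time to hand this off, the Map trigger itself is easy to spec in a few lines. Here’s a minimal sketch of how a developer might wire it up later (the threshold is a placeholder, not something I tested):

```python
# Show the store Map when the user's head pitch drops past a threshold,
# i.e. when they look towards their feet. Pitch is in degrees: 0 is looking
# straight ahead, -90 is looking straight down.
MAP_PITCH_THRESHOLD_DEG = -50.0  # placeholder value

def should_show_map(head_pitch_deg: float) -> bool:
    return head_pitch_deg <= MAP_PITCH_THRESHOLD_DEG
```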

Why bother?

There are a couple of reasons in my mind: experimenting with new hardware is fun, and interaction designers already work on tons of different devices. Adding another medium to your skill set will only make you better at crafting creative solutions.

If you’d like to learn more, I provided lots of links for further learning:

Thanks for reading!
