Product Presentation in Virtual Reality

Learning how to prototype efficiently in VR and create a robust user testing setup

As virtual reality absorbs our tasks and interfaces, we’ll have more shopping experiences in VR. Volumetric interfaces are different than our flat screens and flat designs in critical ways. Here, I set out to understand those differences.

In this case study, I’ll break down the problems as presented, which problems I ran into and how I overcame them, technical bits of code, a splash of user testing, and a working prototype.

Problem, Hunt Statement

It’s important to start my experiments with a goal — otherwise I’m likely to lose focus. In this experiment, my goal was to figure out how we should present and customize products in virtual reality while also learning how to prototype efficiently. Potential customers need to be able to find what they need, refine it to their liking (e.g. color, size), then buy it — all with ease, speed, and pleasure. I summarize these questions into a hunt statement, or guiding statement:

I am going to research a product selection tool made for the web, translated into virtual reality, so that I can start to understand the critical differences and needs between flat screens and volumetric interfaces.

Solution, Prototype

My drive to complete this internship is to learn how to prototype more quickly, especially as it relates to VR. For this experiment, I mocked up three different prototypes, all based on a mockup I saw on Dribbble. Since UI and graphic design isn’t my focus here, I figured it’d be better to take something that seems to work, translate it into VR, and watch what happens.

Left: layered, flat, planar sheets. Center: layered, curved sheets. Right: layered, curved sheets with a 3D model of a shirt.

In my final prototype, I have three mockups (switchable in the code or with the keyboard):

  1. A series of flat, planar sheets
  2. A series of curved sheets
  3. A series of curved sheets with a 3D model of a shirt

Ultimately, I found that curved mockups give the most feedback for minimal development time … after I created a JavaScript function that does the heavy lifting. Users found the experience interesting (they’d never been in a VR app before). Rural users in particular appreciated the potential of VR and how it might help them shop for clothing more easily than driving to a more populated area.

To experience the prototype yourself, send a browser over to my website. If you’re on mobile, you can put your phone into a Google Cardboard to see it in VR.

Experiment 7, Product Selection & Customization:

Source Code:

Problem Space, Existing Work

Before I hop into my own designs, I think it’s usually useful to take a gander at existing work. What have other people created, either as prototypes or as products? Then, what can we take from these to better our own work?

“Customize Product” — Goutham

  • Good use of depth, leading to layering opportunities for stereoscopic effects
  • Selection areas might be small
  • Background is lacking context and texture, thus harder to perceive depth in VR
  • Could use my head-tracked transformations experiment to see more of the shirt
  • What’s the rest of the purchasing flow?

“Customize Product” — David França

  • Nice use of color and typographic hierarchy
  • Two views of the object are available — VR would likely have a rotate functionality, but we could imagine multiple representations and action shots (e.g. multiple environments with people wearing clothes or using products)
  • Reminds me of the need for paragraph text

Vans Custom Shoes

  • Design is actually being used (the other examples are effectively hi-fi sketches)
  • Graphic and UI design could be greatly improved
  • Shows a depth of customization

Mozilla A-Frame Team’s Mockup

  • Actually has volumes
  • Really gives you a feeling of what might come — 3D rendered models wearing the clothes you’re looking at or even a 3D model of yourself
  • Given that it is an example for A-Frame not a prototype for an actual store, the design is fairly limited and not a finished thought

Design Process

This was the first product-y, interface-y VR project I’ve done. Previously, I’ve worked on art projects, a demonstration of an installation piece, research regarding morality and haptics, and smaller experiments with A-Frame. As such, much of my work here was focused on building the tools I can use in the future, design-wise and code-wise.

Like I said earlier, I began by exploring what other designers had made for product selection tools with the plan to build three prototypes of increasing spatial complexity (planes, cylinders, and volumes).

In my exploration, I found:

  1. I need a way to quickly mock up layers, without me having to set the positioning and scale on each layer.
  2. You lose depth when working on a laptop (versus a headset), so it is additionally important to continually test in VR.
  3. I needed to find a new way to sketch and understand which sketching methods work when.
  4. I think the best, quick mockup method for this experiment is my middle one: design on layered cylinders. (It has depth, but you don’t need to get into 3D modeling.)
  5. My mockup method isn’t good for interactions.
  6. A-Frame or ThreeJS has transparency bugs.
  7. I need to make sure I’m orienting the UI from the camera’s location.
  8. Sometimes it is easier to move an object’s position than add and remove it from the environment.

1 & 7. I need a way to quickly mock up layers, without me having to set the positioning and scale on each layer. I need to make sure I’m orienting the UI from the camera’s location.

After I mocked up the design in Sketch, I split it into four layers, each individually exportable.

From left to right, foremost layer to backmost layer. Since A-Frame doesn’t handle shadows out of the box, I created a fake shadow of the shopping cart on the second mockup.

Flatties. When putting the layers in VR, I needed to translate the mockup’s pixel size to meters (the unit of measurement in A-Frame). So, I created a handy function that lets me give the pixel height and width as HTML attributes, then automatically sets the meter-based height and width for groups of planar mockups. (You can find the function in the technical section of this case study.)

To add depth, I move each planar layer back a bit. This worked OK for my flat images, but objects farther away appear smaller — meaning that if you drew something in a flat tool (i.e. Sketch), the layers that represent things farther away shrink.

Left: the right way to layer. Right: the wrong way, but you can usually get away with it if you need to.

To combat this, you need a scaling function: for each step back in the z-direction (depth), scale up in the x and y directions in such a way to keep the mono-perspective view the same as the depth-less sketch. I didn’t build one for flat images, but I did for curved images.
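Such a scaling function reduces to a similar-triangles ratio. A minimal sketch of the idea (the function name and parameters are my own, not the prototype’s actual code):

```javascript
// Perspective-preserving scale: a layer pushed back by `depthOffset`
// meters must grow so that it subtends the same visual angle from the
// camera as the zero-depth layer. (Hypothetical sketch; names are mine.)
function depthScale(cameraDistance, depthOffset) {
  // Similar triangles: apparent size is proportional to realSize / distance,
  // so to keep apparent size constant, realSize must scale with distance.
  return (cameraDistance + depthOffset) / cameraDistance;
}

// A layer 1 m behind the zero plane, viewed from 2 m away,
// needs to be scaled up 1.5× to look unchanged.
depthScale(2, 1); // → 1.5
```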

Cylinders. To wrap the UI around you, a curved geometry is needed, either cylindrical or spherical. In A-Frame, there’s an available element called <a-curvedimage> that takes an image and puts it on the inside of a cylinder based on given parameters. You need to set these variables by hand, which creates difficulties and increases the potential for error … the undergraduate math minor in me was displeased.

If I change the positions of each element to give depth, the position only changes in one direction — meaning that if the user turns around, the cylindrical layer that was in back is now in front (see the diagram below). There are a bunch of parameters I could feed to the <a-curvedimage> element to change how the cylinder gets built. For instance, I could increase the radius of each cylinder, but I’d need to also increase the height proportionally. Another option, and the one I opted for, is to use the scale attribute curved images have. It’s much easier to say “scale up 1.1 times”.

Differences between planes and cylinders, position changes and scale changes.

Additionally, since I’m working with partial cylinders for my UI, I have more parameters that I would need to set by hand to render the interfaces properly.

To simplify this process, I created a handy function that allows you to easily set the parameters of a group of similarly sized mockups, turning them into cylindrical mockups in VR. It uses mathematical relationships rather than the hand-set parameters, allowing for more accurate representations of volumetric prototypes without adding development time. (You can find the function in the technical section of this case study.)

And finally, a duh moment: interfaces with depth should be scaled from the camera’s viewpoint. There was a while I was fumbling around, trying to make two elements line up … until I realized they need to line up from the camera’s eye. Duh.

2. You lose depth when working on a laptop (versus a headset), so it is additionally important to continually test in VR.

Left: a planar mockup where you can see the depth of the checkout button. Right: a cylindrical mockup of the same UI. Due to the camera’s mono-perspective on desktop and location of the button, you don’t perceive depth. The right experience is better for the user because the button is more accessible for selection. It also shows the difficulty of developing VR apps outside of VR — you lose a depth cue (binocular disparity).

Our brains use binocular disparity to understand the distance of objects (among other perceptual cues). Basically, we get some of our depth cues by sensing a different image of the world from each of our eyes. The amount of difference tells us the closeness of an object — larger differences meaning an object is closer, smaller differences meaning an object is further from us.

Top-down view of how our left and right eyes see different images. Remember, binocular is what we see in reality and in virtual reality. The monocular perspective is what we see when we play video games or develop VR apps on a laptop.

It’s important to test your work in VR as much as possible. It’s easy to think you’re designing something with great use of depth, but after you put the headset on, you realize you are not. Additionally, if you’re prototyping on a flat screen, the mono-perspective disguises depth.

When you are designing layered cylinders from the user’s perspective on a desktop, you miss the depth cues that you’d pick up from a stereoscopic render. Thus, it’s useful to step back, move around, or better yet keep putting on your HMD. Here, you can see a mono-perspective change location, allowing us to see depth in one location and not in another.

3. I needed to find a new way to sketch and understand which sketching methods work when.

Sketching for VR isn’t the same as sketching a mobile or web app, nor is it the same as sketching a three dimensional scene — at least not when you need a technical understanding of the space. If I’m going to build a prototype from my sketches, I need enough information to properly layer my objects in the virtual environment. To do this, I worked with vanilla sketches, isometric paper, and numbered depth indices. At this point, I can say it was all helpful, but I need to use the tools more to figure out what works best for me and when.

Left: bird’s-eye stage. Center: bird’s-eye stage with depth units. Right: an indexed way to reference layers.

These three images show me testing different representations of the space. Left: a reminder of how the environment wraps around the user. What should take up the space beyond the looked-at UI? Center: bird’s-eye blocking to help me understand the depth cues and that everything should have depth to it. The bottom of the diagram is where the user views from. Right: to help me mentally construct the world and understand its depth, I tried to think about the mid-ground as a zeroth plane, then +/- n for the subsequent foreground planes and background planes up to +∞.

Isometric struggles.

Isometric paper allowed me to more easily understand depth … but freehand drawing on isometric paper was too much of a cognitive battle. In the above image you can see me fumbling to represent multiple volumetric layers. In contrast with one of my previous VR projects, isometric paper seems more useful when one of your dimensions is staying the same the whole time (e.g. wall height when drawing a floor layout).

In a previous project I worked on, ViewPoint, using isometric paper for a room layout was really helpful.

The other sketching option is to just sketch the world from the perspective of the camera. I didn’t do that for this experiment because I had the layout and a good idea of what it would look like wrapped around me.

4 & 5. I think the best, quick mockup method for this experiment is my middle one: design on layered cylinders. My mockup method isn’t good for interactions.

In this experiment, I used images to mock up the user interface. Splitting a flat design into a few layers separated by a small distance gives the interface a nice, subtle depth. Curving the images on cylinders keeps everything pointing towards the user (as shown in 2). This method worked well because I could quickly create flat layers in a tool I’m familiar with (in comparison to using code to create volumes), export them, and throw them into VR, using my JS functions to position everything for me.

It is not without its downsides though. The images that come out of flat design tools (like Sketch) have the interface baked in, meaning we can’t change them in VR. Rather, we’d need to go back to Sketch, make the edit, and re-export. If we can’t programmatically change the images, we can’t handle micro-interactions well (e.g. hover and click states on buttons). Additionally, the work is not actually in 3D, so depending on the nature of the desired mockup, we might not be able to work with it in two-dimensional sheets.

6. A-Frame or ThreeJS has transparency bugs.

Somewhere in the rendering pipeline, between A-Frame and its underlying framework, ThreeJS, there exists a bug where some transparent objects can hide other objects, depending on which element gets read first in the code.

To fix this, order elements in the code from back to front.
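In practice, that means writing the markup so the farthest transparent entity appears first. A hedged sketch of such a scene fragment (the ids, sources, and radii are placeholders, not from the actual prototype):

```html
<a-scene>
  <!-- Farthest layer first: listing transparent entities back to front
       works around the render-order bug described above. -->
  <a-curvedimage id="background" src="#bg-layer" radius="5.5"></a-curvedimage>
  <a-curvedimage id="midground" src="#mid-layer" radius="5.0"></a-curvedimage>
  <a-curvedimage id="foreground" src="#ui-layer" radius="4.5"></a-curvedimage>
</a-scene>
```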

8. Sometimes it is easier to move an object’s position than add and remove it from the environment.

I spent time struggling to create an easy way to switch between mockups in the prototype (flat, curved, curved with volume). Not being the best JavaScript author, I tried a few ways of adding and removing elements from the DOM. (Quick aside: DOM stands for Document Object Model and is a code-based representation of websites, and here, the virtual environment.) To overcome this, I created a function that sends all mockups way below the virtual ground, then pulls the one that should be active into view. When the mockup is changed, it throws the active mockup down below and pulls up the now-active mockup.

Technical Bits

If you’re not interested in the code, go ahead and skip this part. Scroll down to User Testing.

Each of these code snippets showcases a problem I encountered and solved with code. All of this is to the best of my ability crossed with available time. As you follow me through my case studies, you’ll likely reencounter some of these functions. I’m building tools to help me and others prototype quickly and effectively in VR.

Set Parameters of Flat Mockups

A-Frame uses meters, but Sketch uses pixels. Rather than calculate each height and width in meters by hand, I created a function that reads HTML attributes with the pixel-based height and width, then sets the meter-based height and width.
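The published function isn’t reproduced here, but the core conversion might look something like this (a sketch under my own naming; the function and property names are assumptions, not the original code):

```javascript
// Convert Sketch pixels to A-Frame meters for a group of flat mockups.
// One shared pixels-per-meter ratio keeps every layer's proportions
// consistent. (Hypothetical sketch; names are mine.)
function pixelsToMeters(pixels, pixelsPerMeter) {
  return pixels / pixelsPerMeter;
}

// Given plain objects describing each plane's pixel dimensions
// (in the browser, these would be read from HTML attributes),
// return the meter-based width and height to set on each <a-plane>.
function setFlatMockupSizes(planes, pixelsPerMeter) {
  return planes.map((p) => ({
    width: pixelsToMeters(p.pixelWidth, pixelsPerMeter),
    height: pixelsToMeters(p.pixelHeight, pixelsPerMeter),
  }));
}

// e.g. a 750 × 1334 px mockup at 500 px/m becomes 1.5 m × 2.668 m
setFlatMockupSizes([{ pixelWidth: 750, pixelHeight: 1334 }], 500);
```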

Set Parameters of a Group of Curved Images

I’m most proud of my setImagesCurved() function. It finds a set of curved images, reads their group’s parameters as given in HTML attributes, then translates units, scales for depth, and horizontally centers the set to the camera/user.
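The math behind it can be sketched as a pure function that derives the `<a-curvedimage>` parameters from pixel dimensions (a hypothetical reimplementation; the name, signature, and defaults are my own, not the published code):

```javascript
// Sketch of the idea behind setImagesCurved(): derive <a-curvedimage>
// parameters from pixel dimensions instead of hand-tuning them.
// (Hypothetical reimplementation; names and signature are mine.)
function curvedImageParams({ pixelWidth, pixelHeight, pixelsPerMeter,
                             radius, layerIndex, layerScale = 1.1 }) {
  const widthMeters = pixelWidth / pixelsPerMeter;
  const height = pixelHeight / pixelsPerMeter;
  // Arc angle (degrees) that gives the image its true width on a
  // cylinder of this radius: arcLength = radius * theta(radians).
  const thetaLength = (widthMeters / radius) * (180 / Math.PI);
  // Offset the arc so it is horizontally centered on the camera's
  // forward direction (the exact reference angle depends on the scene).
  const thetaStart = -thetaLength / 2;
  // Deeper layers scale up uniformly so the mono-perspective view
  // still matches the flat sketch.
  const scale = Math.pow(layerScale, layerIndex);
  return { height, radius, thetaLength, thetaStart, scale };
}
```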

Settings, Reload Page, & EnterVR

I find that using a settings object at the beginning of the JS really helps expedite development and user testing.

Automatically reloading the page is useful when you’re tweaking elements, putting your headset on, and taking it off frequently.

And, when you’re reloading the page with your headset ready, you might as well automatically enter VR.
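Together, those three conveniences might look something like this (a hedged sketch; the settings keys, values, and interval are my own guesses, not the published code — though `scene.enterVR()` and the `loaded` event are real A-Frame APIs):

```javascript
// Development conveniences gathered into one settings object at the
// top of the script. (Hypothetical sketch; keys and values are mine.)
const settings = {
  activeMockup: 'curved',  // 'flat' | 'curved' | 'curvedModel'
  autoReload: true,        // re-fetch the page while iterating
  autoEnterVR: true,       // jump straight into stereo rendering
  reloadIntervalMs: 20000,
};

// Browser-only wiring; guarded so the snippet also loads under Node.
if (typeof window !== 'undefined') {
  if (settings.autoReload) {
    setInterval(() => window.location.reload(), settings.reloadIntervalMs);
  }
  if (settings.autoEnterVR) {
    const scene = document.querySelector('a-scene');
    scene.addEventListener('loaded', () => scene.enterVR());
  }
}
```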

Position Toggle

Like I said in my eighth design finding, changing the positions of elements can be easier than adding and removing them from the DOM. Using a targets object:

I toggle one target into view at a time with toggleTargetsTo():
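A minimal version of that pair might look like this (a sketch with my own names, ids, and coordinates; the real targets and positions differ):

```javascript
// Park inactive mockups well below the virtual ground plane.
// (Hypothetical sketch; names, ids, and positions are mine.)
const HIDDEN_Y = -10;

// Each key is a mockup's element id; each value is its in-view position.
const targets = {
  flat:        { x: 0, y: 1.6, z: -2 },
  curved:      { x: 0, y: 1.6, z: 0 },
  curvedModel: { x: 0, y: 1.6, z: 0 },
};

// Compute a position for every target: the active one at its home
// position, all others thrown below the ground.
function toggleTargetsTo(activeId, targets) {
  const positions = {};
  for (const id of Object.keys(targets)) {
    const home = targets[id];
    positions[id] = id === activeId ? { ...home } : { ...home, y: HIDDEN_Y };
  }
  // In the browser, apply each position to the matching A-Frame entity.
  if (typeof document !== 'undefined') {
    for (const id of Object.keys(positions)) {
      const el = document.getElementById(id);
      if (el) el.setAttribute('position', positions[id]);
    }
  }
  return positions;
}
```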

User Testing

While testing these prototypes isn’t my primary focus, I think it would be a drastic oversight to not include a few user tests. This experiment had three users: two women who live in a rural area (around forty years old) and a woman in her mid-20s who lives in a semi-urban area.


Each section in this case study provided challenges and user testing is no different. The first challenge was deciding on how to capture the session in a way that (1) allows me to take few notes and (2) provides these case studies with interesting photos, videos, screen captures, and audio. To accomplish this, I used a series of tools — not all of which worked every time.

My MacBook Pro is the MVP here. It uses:

  • QuickTime to record spoken audio of the session.
  • The file system has all of the HTML and JavaScript. This makes it really easy to access the experiment files from my iPhone 5 and easy to update anything if need be.
  • Reflector 2 to mirror and capture a screen recording of the iPhone. This part is most prone to failure, working 50% of the time. When it works, I can see what my user sees.

The head-mounted display uses:

  • A Google Cardboard with my iPhone 5 inside. It should be noted that my iPhone has been getting buggy over the last year.
  • The phone uses the mobile Safari browser to fetch and render files off of my MacBook Pro over Wi-Fi. Wi-Fi has also been a weak cog.


Users want to push buttons, but don’t yet have the intuition for how. Users like interactivity. As a researcher, it’s your duty to listen to their feedback and keep them on task. My users wanted to see more things in the store and click all the buttons. They also expressed a bit of frustration, not knowing how they might push a button in VR.

Is this the new way to shop? Maybe. Especially for rural areas — and two of my users happen to live in a rural area — shopping for clothes requires a long drive to a larger area or shopping online. Being able to view clothes in some form of reality would be really useful.

Better for users and designers if you could put your own body image in the clothing you were looking at. This was an aside from two of my users. They talked about how, if you could scan your body into VR, you would be able to try things on. With a scan, more user data can be collected so that designers can get detailed feedback about what works and what doesn’t for particular groups of users.

Curved is better than flat — mostly. The curved representation works best for viewing each part of the UI, but it was too close to the user and made them turn to see the whole interface.


I am going to research a product selection tool made for the web, translated into virtual reality, so that I can start to understand the critical differences and needs between flat screens and volumetric interfaces.

Starting with a product presentation designed for the flat web, I created a virtual reality version. I did this to learn how to better prototype in VR while also performing a study in contrast (design techniques for the flat web versus VR).

Throughout my process, I found both design and development best practices. Of course, these are best practices for the context in which they were found, but it is a great beginning to my overall journey into user experience design for virtual reality.

Additionally, I created a process to run and capture user tests. While it isn’t my focus, I got feedback from three users on this project. My two rural users appreciated the potential for VR to help them shop without driving to far-off, larger cities.

Experiment 7, Product Selection & Customization:

Source Code:

For more user experience design for virtual reality information, follow me and the Humane Virtuality collection.