Virtual Reality: Six tools designers can use in their VR worlds

How interaction patterns, usability testing and a graduation project got us started on VR.

Hike One | Digital Product Design
11 min read · Apr 9, 2018

At Hike One, we always keep a close eye on the latest technology. So naturally, we are looking into augmented reality (AR) and virtual reality (VR). As a (then) graduate student, I set out to research interaction patterns in virtual reality for Hike One. In this post I will share the patterns I researched and six tools I came up with that could improve VR experiences.

Screen capture from ‘The Lab’ on the HTC Vive

A small introduction

In virtual reality, we are moving from two-dimensional flat screens and interfaces to three-dimensional, immersive worlds. But how do you design for these worlds? Are there certain principles a designer should follow?

Although virtual reality seems like a recent invention, it has actually been around for quite some time. The earliest examples date all the way back to 1968. However, with recent technological developments, it has become much easier for companies to bring virtual reality to the masses. These days I can just download an app on my smartphone, get a cheap cardboard head mount, and immerse myself in a virtual world. Anywhere. Anytime.

Google Cardboard: a cardboard box with lenses in which you can use your smartphone to see a 3D virtual world. The goggles you put on your head are also known as ‘head-mounted displays’ (HMDs).

Besides becoming more available, virtual reality has also overcome some technical difficulties. It is now possible to show much more detail, renderings have vastly improved, and frame rates are high enough to (mostly) prevent motion sickness.

Developing for VR

Developments aren’t just happening on the hardware side, but also on the software side. Both Unity and Unreal provide kits for VR that allow you to develop while wearing a head-mounted display (HMD). Almost any interaction can be built. For 2D interfaces we have identified certain interaction patterns that help us quickly make good designs that people find easy to use. Hike One wanted to know whether similar interaction patterns can be found for VR, and what (some of) those patterns are.

The research setup

I tested 26 VR applications across different devices: ten on the HTC Vive, ten on Google Cardboard and six on PlayStation VR. I looked at the menus, and at how objects can be selected, translated and manipulated. I checked the controls and ergonomics, and I looked at the 2D interfaces, 3D objects, locomotion, animation and audio.

Diving into VR apps to find out more about their interaction patterns. Screen recording of ‘Gnomes & Goblins’ on the HTC Vive.

I combined my findings with guidelines I found on blogs, papers and design guides from sources like HTC, Google Cardboard, Leap Motion and Oculus. In total, I distinguished around 150 patterns, and I think I barely scratched the surface.

Example of one of the many patterns identified

150 patterns, 4 categories

The patterns covered almost anything and were divided into four categories:

  • General guidelines
  • Interaction actions
  • Core VR elements
  • Guidelines for supporting elements

In general, it seems that many elements are copied from interfaces we already know: flat 2D menu interfaces as found on tablets, and 3D worlds as found in games. This is not very surprising, considering Gibson’s theory of affordances, in which the knowledge of how to interact with something is held both in the object and in the person. Simply said: if we want to know how something works, parts of it should already be familiar to us. I think this is the reason VR still uses a lot of common interaction patterns. I expect that VR-specific interaction patterns will not only need time to be developed, but also time to become familiar to the public.

Some examples of VR interaction patterns

However, that doesn’t mean new patterns haven’t been created yet. Let’s start with Google Cardboard. Its interaction is primarily based on the direction you’re looking in. This has led to a “pointing-laser” being emitted from your eyes that is used for all kinds of things: hover states, moving objects, teleporting yourself, and selection. Selection is achieved with a so-called “fuse timer”: look long enough at something and a timer will fill. A full timer is registered as a button press.
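To make the pattern concrete, here is a minimal Unity C# sketch of such a fuse timer, written from the description above rather than from any official SDK; the class, field and tag names (GazeFuseSelector, fuseDuration, “Selectable”) are my own. A ray is cast from the HMD camera every frame, and a “press” fires once the gaze has rested on a target long enough.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Hypothetical sketch of gaze-based "fuse timer" selection: looking at a
// tagged object long enough counts as a button press.
public class GazeFuseSelector : MonoBehaviour
{
    public Camera hmdCamera;           // camera of the head-mounted display
    public float fuseDuration = 2f;    // seconds of gaze needed to "press"
    public UnityEvent onFuseComplete;  // fired when the timer fills

    private Transform currentTarget;
    private float gazeTime;

    void Update()
    {
        Ray gaze = new Ray(hmdCamera.transform.position, hmdCamera.transform.forward);

        if (Physics.Raycast(gaze, out RaycastHit hit) && hit.transform.CompareTag("Selectable"))
        {
            // Restart the timer whenever the gaze moves to a different object.
            if (hit.transform != currentTarget)
            {
                currentTarget = hit.transform;
                gazeTime = 0f;
            }

            gazeTime += Time.deltaTime;
            if (gazeTime >= fuseDuration)
            {
                onFuseComplete.Invoke();  // a full timer registers as a press
                gazeTime = 0f;
            }
        }
        else
        {
            currentTarget = null;
            gazeTime = 0f;
        }
    }
}
```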

Other setups have more possibilities for interaction. The HTC Vive’s controllers not only have multiple buttons and two touchpads; they are also tracked in 3D space. In many cases, the “pointing-laser” is emitted from the controllers instead of your eyes, and activation is done with the triggers on the controllers instead of a fuse timer.

‘Paintbrush and palette’ menu

One common pattern is the use of a menu around one hand, while the other controller is used to apply the tool. This can be as simple as one controller used as a selector (mouse) for the menu options while the other controller hosts the menu, or one controller as a paintbrush while the other holds a complicated palette. Another example would be one controller as a gun, while the other controller contains your armoury, with guns, grenades and ammunition.

Tilt Brush, by Google, uses a rotating menu around one controller, while the other controller is used to select and apply settings.
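As an illustration of this two-handed pattern, here is a minimal Unity C# sketch, again with invented names (HandMenu, menuHand, pointerHand) and not taken from Tilt Brush or any SDK: one tracked controller hosts a world-space menu, while a ray from the other controller hovers its options (assuming the options carry colliders).

```csharp
using UnityEngine;

// Hypothetical sketch of the 'paintbrush and palette' pattern: a menu follows
// one hand, the other hand points at it.
public class HandMenu : MonoBehaviour
{
    public Transform menuHand;     // controller hosting the menu (the "palette")
    public Transform pointerHand;  // controller used to point and select (the "brush")
    public Transform menuCanvas;   // world-space canvas with the menu options

    void LateUpdate()
    {
        // Keep the menu floating just above the off hand, following its rotation.
        menuCanvas.position = menuHand.position + menuHand.up * 0.15f;
        menuCanvas.rotation = menuHand.rotation;

        // Cast a ray from the other controller to hover the menu options.
        Ray pointer = new Ray(pointerHand.position, pointerHand.forward);
        if (Physics.Raycast(pointer, out RaycastHit hit) &&
            hit.transform.IsChildOf(menuCanvas))
        {
            Debug.Log($"Hovering menu option: {hit.transform.name}");
        }
    }
}
```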

3D worlds

Designing 3D worlds is also very interesting. Many 2D interfaces already have overlapping elements that use shadows or lighting to create an idea of depth. In VR, however, we can actually place objects closer to or farther away from a user. One obvious challenge of VR, then, is deciding on the best location for objects. Mike Alger has covered this subject and talks about a foreground, middle ground and background: data is presented up front, close to the user; interaction is done in the middle, at arm’s length; and the scene is set in the background. He has also created mappings of where to place objects based on our field of vision and the reach of our arms.

Mike Alger on his ideas of designing for VR.
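As a rough sketch of this layering, the Unity C# snippet below places objects in front of the user at different depths. The distances and object names are illustrative placeholders of my own, not Alger’s measured recommendations.

```csharp
using UnityEngine;

// Hypothetical sketch of front / middle / background placement. The distances
// are placeholders, not Mike Alger's exact recommendations.
public class ZonePlacement : MonoBehaviour
{
    public Transform head;        // the user's head (HMD)
    public Transform dataPanel;   // information, presented up front
    public Transform toolbelt;    // interactive elements, at arm's length
    public Transform backdrop;    // the scene, far away

    void Start()
    {
        PlaceInFront(dataPanel, 0.5f);  // front: data, close to the user
        PlaceInFront(toolbelt, 0.7f);   // middle: interaction at arm's length
        PlaceInFront(backdrop, 15f);    // background: the scene
    }

    // Position an object straight ahead of the user, oriented so its forward
    // axis points away from the viewer (readable for world-space UI).
    void PlaceInFront(Transform obj, float distance)
    {
        obj.position = head.position + head.forward * distance;
        obj.rotation = Quaternion.LookRotation(obj.position - head.position);
    }
}
```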

Clues for interaction

3D worlds can obviously be very detailed. Unlike our own world, however, not everything is interactable. This has created many patterns whose goal is to signal to a user what he or she can interact with. Examples are highlighted objects, objects that move, characters that stare in a certain direction, sounds coming from objects, but also controllers that are highlighted or start vibrating.

‘IKEA VR Experience’ uses highlights on both the object and the controller to let the user know that objects can be interacted with.
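A typical highlight cue can be sketched in a few lines of Unity C#. This hypothetical version uses the emission channel of Unity’s Standard shader to light up whatever the controller’s ray is pointing at; a haptic pulse on the controller would be added through whichever VR SDK is in use.

```csharp
using UnityEngine;

// Hypothetical sketch of an interaction cue: objects light up while the
// controller's ray hovers over them.
public class HoverHighlight : MonoBehaviour
{
    public Transform controller;  // tracked controller emitting the ray
    public Color highlightColor = Color.yellow;

    private Renderer lastHovered;

    void Update()
    {
        Renderer hovered = null;
        if (Physics.Raycast(controller.position, controller.forward, out RaycastHit hit))
            hovered = hit.transform.GetComponent<Renderer>();

        if (hovered == lastHovered) return;  // nothing changed this frame

        if (lastHovered != null)  // un-highlight the previous target
            lastHovered.material.DisableKeyword("_EMISSION");

        if (hovered != null)      // highlight the new target
        {
            hovered.material.EnableKeyword("_EMISSION");
            hovered.material.SetColor("_EmissionColor", highlightColor);
        }
        lastHovered = hovered;
    }
}
```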

Diving deeper

What pattern should be researched more extensively?

To allow more in-depth research, I focused on the fields in which VR seems successful. One would expect the successful VR applications to be on high-end hardware, but VR might be more successful on Google Cardboard because of its huge reach: everyone has a smartphone, while only a few have an Oculus Rift or HTC Vive accompanied by a strong computer. Platform aside, it seems that VR has the most potential for informing and training.

Complex constructions can easily be shown in VR. This example is from ‘The Lab’ on the HTC Vive.

VR makes it possible to scale up education in a way that is safer and richer than any other option. Think of rich MRI scans that doctors can interact with, crane operation simulations that can be given anywhere, or safe fire-drill simulations. But you could also think of virtual shops in which you can try out a new product, or venues in which companies can show products that would be impossible to show otherwise: vehicles, ships, buildings and even city plans. To sum it up: the application fields in which VR is a cheaper, better option are commerce, education, healthcare and scientific visualisation.

‘Am I missing something?’

One crucial element in these applications, though, is that a user must become aware of the featured information. And in a 3D world it can be harder for a user to spot the right information. Users could be looking in the wrong direction or overlooking your important element. For example, YouTube showed that people spend most of their time looking to the front in 360° videos. Therefore, I focused my research on the workings of the visual part of the brain.

The angles of what we can see, and what we see inside HMDs. The majority of what is happening goes unseen by a user in VR.

Steering attention

A great source that helped me along the way was the book “Zo werkt aandacht” (This Is How Attention Works) by Dutch researcher Stefan van der Stigchel. He compares our vision to a small spotlight scanning the surroundings. We “see” what the spotlight is aimed at, and because the spotlight is small, we can’t take everything in at the same time. Therefore, the brain decides what this spotlight should focus on. The brain does this mostly unconsciously and seems to prioritise for us. The external factors that steer our attention are, in order of importance, motion, colour and shape.

This happens without our conscious input. Internal factors, which work both with and without our conscious intent, are memories, instructions, personal goals, priming, fears and faces, to name a few.

A model based on Stefan van der Stigchel’s book ‘Zo werkt aandacht’.

Games are a good example of these attributes being used to steer attention. Think of a shooter game in which a player has to survive. Often these games start with scaffolding: players are instructed (and primed) how to play the game. In an online match, the player has the goal of being the last man standing. Now think of an enemy sniper: no movement, colours that blend in, no outstanding shape. Still, internal attributes like memories, goals, the face of the sniper and the fear of losing help the player find the enemy sniper.

Can tools help to steer attention?

Drawing on other sources, I looked at ways to steer attention in a specific direction. For 2D interfaces we already know tricks to steer attention. But virtual reality allows objects to be placed outside our field of vision; we need to turn our head to be able to see them. Which tools are most effective at achieving this?

I found it most interesting to look at animation, arrows and converging lines. Animation, because we tend to follow motion in a certain direction. Arrows, because they are an abstract shape that everybody seems to understand, but do they work in 3D? And converging lines, because these occur in spaces as we know them (the end of a hall, the end of a road): are we influenced by them?

To test these concepts I designed a very basic environment in which I asked 13 participants to watch a video on a big virtual screen. Their conscious attention would be on the video. Then, each tool would be introduced without the participants consciously knowing. I used a moving ball at eye level for the motion, an off-centre arrow that would fade in and out, and big organic swirls behind the video that also faded in and out.

The first testing environment, built in Unity.
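The fading tools can be sketched with a simple coroutine. Below is a minimal, hypothetical Unity C# version with names and timings of my own, assuming a material whose shader supports transparency (for example Unity’s Standard shader in Fade mode).

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch of a fading attention cue, like the off-centre arrow:
// the material's alpha is lerped up, held, then lerped back down.
public class FadeCue : MonoBehaviour
{
    public float fadeDuration = 1.5f;  // seconds per fade direction
    public float holdDuration = 3f;    // seconds fully visible

    private Material material;

    void Awake()
    {
        material = GetComponent<Renderer>().material;
    }

    public void Trigger()
    {
        StartCoroutine(FadeInAndOut());
    }

    IEnumerator FadeInAndOut()
    {
        yield return Fade(0f, 1f);                      // fade in
        yield return new WaitForSeconds(holdDuration);  // hold
        yield return Fade(1f, 0f);                      // fade out
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            Color c = material.color;
            c.a = Mathf.Lerp(from, to, t / fadeDuration);
            material.color = c;
            yield return null;  // wait one frame
        }
    }
}
```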

I registered whether a participant noticed the tool and whether they saw the message the tool was pointing at. I also asked them to rate the tools with a questionnaire that featured questions from the AttrakDiff survey.

Bar graph showing the effectiveness of each tool.

The results showed that the moving ball and arrow were most effective. The converging lines were misinterpreted and often went unnoticed. The arrow was seen as professional and clear, but also intrusive and boring. The moving ball was seen as more fitting and fun in the environment.

‘Show me more!’

When I discussed my research with my supervisor from the Delft University of Technology, it dawned on me that these tools direct your attention outside your field of view in order for you to discover other objects. But the same can be achieved when you enlarge your regular field of view. Then the objects themselves can steer your attention towards them, with already existing methods.

The question then becomes: which tool can steer your attention towards an object most effectively? For this I looked at five new tools: lighting cues, a radar, a mirror ball, side-view mirrors and a bird’s-eye view. I decided to drop the lighting cues because they were barely visible when subtle, which made them easy to ignore. The side-view mirrors were also dropped because they were almost the same as the mirror ball, but less practical.
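Of these tools, the radar is the easiest to sketch in code. Below is a minimal, hypothetical Unity C# version, with names of my own invention: the target’s position relative to the user’s head is flattened onto the horizontal plane and drawn as a blip on a circular UI panel.

```csharp
using UnityEngine;

// Hypothetical sketch of the radar tool: a blip shows where the target is,
// relative to the direction the user is facing.
public class RadarBlip : MonoBehaviour
{
    public Transform head;           // the user's head (HMD)
    public Transform target;         // e.g. the white capsule
    public RectTransform blip;       // UI dot inside a circular radar panel
    public float radarRadius = 50f;  // radius of the radar panel, in UI units
    public float worldRange = 30f;   // world distance mapped to the radar's edge

    void Update()
    {
        // Target position in the head's local frame, with height ignored.
        Vector3 local = head.InverseTransformPoint(target.position);
        Vector2 flat = new Vector2(local.x, local.z);

        // Map world distance to radar units, clamped to the radar's edge.
        float radius = Mathf.Min(flat.magnitude / worldRange, 1f) * radarRadius;
        blip.anchoredPosition = flat.normalized * radius;
    }
}
```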

Fourteen participants helped me this time to evaluate the three remaining tools and compare them with the previous ones. Participants were placed inside a small, canyon-like world and asked to find a white capsule placed somewhere around them. I then activated a tool and observed how quickly they found the capsule. Once again, participants were asked to fill in a questionnaire.

The six tools in use inside the testing environment.
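For what it’s worth, measuring “how quickly they found it” can be as simple as the hypothetical sketch below: start a timer when a tool is activated and stop it the moment the user’s gaze ray first hits the capsule.

```csharp
using UnityEngine;

// Hypothetical sketch of measuring time-to-find: the trial ends when the
// user's gaze ray first hits the target's collider.
public class FindTimer : MonoBehaviour
{
    public Camera hmdCamera;        // camera of the head-mounted display
    public Collider targetCapsule;  // the white capsule to be found

    private float startTime;
    private bool running;

    // Call this at the moment a tool is activated.
    public void StartTrial()
    {
        startTime = Time.time;
        running = true;
    }

    void Update()
    {
        if (!running) return;

        Ray gaze = new Ray(hmdCamera.transform.position, hmdCamera.transform.forward);
        if (targetCapsule.Raycast(gaze, out RaycastHit _, 100f))
        {
            Debug.Log($"Capsule found after {Time.time - startTime:F2} seconds");
            running = false;
        }
    }
}
```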

So, what tool is the best?

The bird’s-eye view proved to be the fastest tool, while the mirror ball was the slowest. In hindsight, I realised I should also have tested how fast participants would find the capsule without any tool; as it stands, I can only compare the tools relative to each other.

The bird’s-eye view scared most participants, as they were instantly lifted 25 metres into the air. It did create an easy-to-understand overview of the environment. The mirror ball presented a very distorted view, making it hard for participants to interpret what they saw. It did, however, give a fast and crude indication of what was happening all around. And the radar worked very well at filtering information and showing highlights.

What did I learn?

At the end of my research I found that no tool is an overall winner. Each tool has its own strengths and weaknesses, and its own situation and context in which it fits best. The two experiments, however, did allow me to somewhat refine my ideas for each tool.

I’ll leave you with my six ideas for guiding attention and some notes on the best way to use them:

  • Animation (the moving ball): effective, and seen as fitting and fun; we tend to follow motion.
  • Arrows: effective, professional and clear, but also intrusive and boring.
  • Converging lines: subtle, but easily misinterpreted or missed entirely.
  • Radar: works very well at filtering information and showing highlights.
  • Mirror ball: a fast but crude indication of what is happening all around; the distorted view is hard to interpret.
  • Bird’s-eye view: the fastest, with an easy-to-understand overview, but instantly lifting users into the air can scare them.

Now start creating!

As I previously stated, I feel like I only scratched the surface with this project. Even with the detailing of these six tools, the research still feels a little open-ended. It feels like only the beginning; there is still so much more I want to learn about VR! Luckily there are a lot of developments happening around VR. Which is good, because the only real way to learn more is to create these wonderful new VR experiences.

If you’re excited about VR and its possibilities, let us know! At Hike One, we are always looking for the next challenge. We would love to help you along with our expertise and knowledge of digital interaction.

Arjen Wiersma
Interaction Designer at Hike One
