Designing Virtual Reality Interfaces: Proximity-based interaction
We’ve been working on Virtual Reality projects for a couple of years now and we’ve learned a lot about the way people interact with the environments we built for them.
Quite a few of our applications were built for mobile platforms and have to work without any external hardware controllers, to keep the entry barrier for the target audience as low as possible. This limits the interaction options to the user's voice, posture, and gaze, the last of which is the most widespread practice.
Most existing virtual reality SDKs provide prefabricated interactive elements that can be activated or triggered by gazing at them for a certain amount of time (see GIF below).
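This dwell-based pattern can be sketched as a simple per-frame timer. The following is a minimal illustration, not any particular SDK's API; the names `GazeButton` and `DWELL_TIME` are assumptions for the example:

```python
# Minimal sketch of a dwell-based gaze trigger, assuming a per-frame
# update loop. GazeButton and DWELL_TIME are illustrative names, not
# part of any real SDK.

DWELL_TIME = 1.0  # seconds the gaze must rest on the target

class GazeButton:
    def __init__(self, on_trigger):
        self.on_trigger = on_trigger
        self.gaze_time = 0.0

    def update(self, is_gazed_at, dt):
        """Call once per frame with the current gaze state and delta time."""
        if is_gazed_at:
            self.gaze_time += dt
            if self.gaze_time >= DWELL_TIME:
                self.gaze_time = 0.0
                self.on_trigger()
        else:
            # Looking away resets the timer, so the user must dwell
            # continuously for the full duration.
            self.gaze_time = 0.0
```

The key property (and the source of the waiting the next paragraph complains about) is that nothing happens until the timer fills, no matter how decisive the user is.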
As a frequent user of VR applications, I quickly became annoyed by the amount of time I was wasting just waiting for a button to trigger. Although it's only a second (or even less) per interaction, it adds up, and there's nothing I can do about it. To me, that feels like a loss of control.
When we were working on our action game Glubsch, we experimented with different approaches to navigating a virtual environment and eventually came up with a new method: proximity-based interaction.
We drastically shrank the activation area and surrounded it with a much larger "hover" area. When the user's focus enters the hover area, a crosshair appears, indicating its proximity to and the direction of the activation area. Once the user's gaze hits the activation area, the action is triggered immediately.
This way, users retain full control over the interaction and can navigate and interact as quickly as their skills allow.
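The two-zone idea above can be sketched as a distance check against the target each frame. This is an illustrative model only; the radii, the 2D gaze coordinates, and the callback names are all assumptions for the example:

```python
import math

# Illustrative sketch of proximity-based activation: a small activation
# radius nested inside a larger hover radius. All names and values here
# are assumptions, not the actual Glubsch implementation.

HOVER_RADIUS = 0.5        # large zone: entering it shows the crosshair
ACTIVATION_RADIUS = 0.05  # small zone: entering it fires immediately

def update_gaze(gaze_point, target_center, on_trigger, show_crosshair):
    """gaze_point and target_center are (x, y) points on the UI plane."""
    dx = gaze_point[0] - target_center[0]
    dy = gaze_point[1] - target_center[1]
    distance = math.hypot(dx, dy)

    if distance <= ACTIVATION_RADIUS:
        # No dwell timer: hitting the small area triggers at once.
        on_trigger()
    elif distance <= HOVER_RADIUS:
        # The crosshair conveys how close the gaze is (distance) and
        # which way to move it (angle toward the target).
        direction = math.atan2(-dy, -dx)
        show_crosshair(distance, direction)
```

Because the trigger condition is purely spatial, a fast user pays no fixed time cost, while the hover feedback still guides anyone who misses the small target on the first pass.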
The GIFs shown above are taken from a short video we made to demonstrate the difference between the two methods and to show how we used the new one in one of our projects.
If you enjoyed this story, feel free to leave a comment, give it some love, and share it with whomever it may concern.