Overcoming Gorilla Hands in VR

Why we don’t have the perfect interaction yet

VR offers designers the exciting possibility of creating user interfaces of any size and shape, all around the user. From its ability to communicate emotion as a film medium to its potential for expansive digital environments, many uncertainties and questions still lie down the road. One thing is certain: creating VR content calls for very different design thinking, and requires rethinking the human-computer interaction concepts learned through web and mobile design.

VR provides a great opportunity to develop new learning experiences. Although the majority of developers currently focus on simulation and gaming, there is also growing interest in using VR to replace computer interfaces, and ultimately as a reading interface for navigating both e-books and information on the web.

The challenges in designing VR reading interfaces are twofold. According to a research paper published by the IBM Almaden Research Center, 10-point text set in the Georgia typeface is the easiest to read. However, VR today cannot deliver this: the resolution and pixel density of current head-mounted displays force text to be rendered significantly larger than 10 points, consuming far more screen real estate. Thankfully, this first challenge is purely technical, and will surely be resolved over time as display technology progresses and higher-resolution displays become cheaper and more ubiquitous.
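To make the resolution constraint concrete, here is a rough back-of-the-envelope sketch in Python. The headset figures (1080 horizontal pixels per eye spread across roughly a 100° field of view) and the 20-pixel legibility threshold are illustrative assumptions, not measurements of any particular device:

```python
def pixels_per_degree(pixels, fov_degrees):
    """Approximate angular resolution of a display, assuming pixels
    are spread evenly across the field of view (a simplification)."""
    return pixels / fov_degrees

def min_glyph_angle(target_pixels, ppd):
    """Degrees of visual angle a glyph must span to be rendered
    with target_pixels of vertical resolution."""
    return target_pixels / ppd

# Hypothetical numbers: a current headset (1080 px, ~100° FOV)
# versus a desktop monitor at arm's length (~60 px/degree).
vr_ppd = pixels_per_degree(1080, 100)   # ≈ 10.8 px/degree
desktop_ppd = 60.0

# If a legible glyph needs ~20 px of height, it must subtend
# roughly 1.85° in the headset but only ~0.33° on the desktop,
# so VR text has to be rendered several times larger.
vr_angle = min_glyph_angle(20, vr_ppd)
desktop_angle = min_glyph_angle(20, desktop_ppd)
```

The exact numbers vary by headset, but the ratio is the point: the same glyph must occupy several times more of the user's field of view in VR than on a desktop monitor.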

The second challenge is one of interaction design: designers and engineers must re-imagine an entirely new interaction language for navigating through text. Today, user interfaces in VR mostly rely on hand tracking, in which 3D depth cameras and computer vision locate the user’s hands and fingers to scroll through text (think Minority Report or Iron Man). This method of interaction simply does not work, for reasons explained below. First, let’s look at the history of VR to see why this is the solution designers turned to first.

Without a shadow of a doubt, the largest contributor to this school of thought is special-effects design. Hollywood has conditioned the public to picture virtual reading interfaces as fantastic swirling animations that fly around the user’s head, navigated purely by hand. The biggest culprits are big-budget blockbusters like Iron Man, which showcase futuristic technologies and flashy methods of interaction. Unfortunately, while this kind of user experience looks good on the big screen, these interactions are designed by motion-graphics artists who prioritize visual appeal and give no consideration to usability. Minority Report is another film whose beautiful user interfaces feature hand interactions that violate basic human ergonomics. As a result of these films, both the public and developers hold a poorly founded preconception of how these interfaces should be designed.

The problem with using your hands for every interaction is known as gorilla arm syndrome: reaching up to touch interfaces for long periods tires the arms very quickly. Another problem is the lack of tactile feedback; your hands flail endlessly through empty air with no physical contact. Though gestural interfaces are extremely powerful and have great potential within VR, the idea of waving your hand to navigate through folders is incompatible with good usability.

One school of thought for reading interfaces in VR is to stick to tried and trusted forms of interaction. Take the computer mouse, which stunned the world with its ability to navigate through text in Douglas Engelbart’s famous 1968 “mother of all demos.” As an intermediary between the interactions available now and those that will exist in the future, the mouse can still serve as a stepping stone. At present, hand and gesture tracking is simply not mature enough to be a usable method of interaction. In May 2015, however, Google presented one possible solution called Project Jacquard, which combines the natural feeling of using your hands with tactile feedback. By weaving conductive materials into fabric, Project Jacquard allows clothing designers to turn any wearable surface into a touch interface, similar to the one on your phone’s screen. Jacquard enables natural reading interfaces: beyond simply using your hands, the fabric form lets users scroll through messages and books with a swipe of their fingers.
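To illustrate how a fabric touch surface might drive a reading interface, here is a minimal sketch that maps a swipe, represented as a series of normalized 1-D touch positions, to a scroll offset in text lines. The sampling format and the `lines_per_unit` scaling are hypothetical; a real touch fabric would stream timestamped, noisy events that need filtering:

```python
def swipe_to_scroll(samples, lines_per_unit=10.0):
    """Map a sequence of normalized 1-D touch positions (0.0-1.0),
    as a fabric touch strip might report them, to a scroll offset
    in text lines. Positive means scroll forward, negative back.
    Purely illustrative; no debouncing or noise filtering."""
    if len(samples) < 2:
        return 0                          # a tap, not a swipe
    delta = samples[-1] - samples[0]      # net finger travel
    return round(delta * lines_per_unit)  # lines to scroll

# A forward swipe across half the strip scrolls ~5 lines:
forward = swipe_to_scroll([0.2, 0.35, 0.5, 0.7])
# Reversing the direction scrolls back:
backward = swipe_to_scroll([0.7, 0.2])
```

The design choice worth noting is that the fabric gives the finger a physical surface to rest on, so the same gesture vocabulary avoids the gorilla arm problem that free-air tracking suffers from.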

Fabric touch interfaces like this seem to be the most promising reading input for VR in the short term. Yet other interfaces that exist today outside of VR could also be incorporated for a more efficient reading process. The most compelling interface I have personally used is Spritz, from a company backed by Samsung, which presents a novel method for speed reading. Spritz works on PC or Android and displays text to the reader one word at a time, in a large font, with words shown in rapid succession. Spritz trains your brain to read faster by speeding up the rate at which text is displayed; its methodology is based on research suggesting that by changing the display speed, you can train yourself to read faster and faster.
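The word-at-a-time scheme can be sketched as a simple serial-presentation scheduler. The pause heuristics below (extra time for long words and for clause-ending punctuation) are common conventions in this style of speed reading, not Spritz’s actual proprietary algorithm:

```python
def rsvp_schedule(text, wpm=300):
    """Rapid serial visual presentation: return (word, seconds)
    pairs giving how long each word stays on screen. Long words
    and clause-ending punctuation earn extra display time."""
    base = 60.0 / wpm                 # seconds per word at target pace
    schedule = []
    for word in text.split():
        duration = base
        if len(word) > 8:
            duration += base * 0.5    # extra time for long words
        if word[-1] in ".,;:!?":
            duration += base          # pause at clause boundaries
        schedule.append((word, round(duration, 3)))
    return schedule

# At 300 wpm each word gets 0.2 s; "time." gets 0.4 s for the period.
demo = rsvp_schedule("Spritz displays text one word at a time.", wpm=300)
```

A VR renderer would simply draw each word centered in the headset for its scheduled duration, which is exactly what makes the technique a good fit for low-resolution displays.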

Integrating a Spritz-like environment within VR would solve both of the challenges described above. First, by displaying only one word at a time, text can be rendered in a much larger font, because no other words need to fit on screen; even the headsets available today have high enough resolution for this kind of reading. Second, Spritz scrolls through text word by word automatically, requiring no user prompts and only minimal interaction.

Now armed with knowledge of the advantages and disadvantages of various reading interfaces, as well as an understanding of the basic design principles behind a good interface, we can explore interaction systems that embody and combine these technologies. And of course, designers and engineers from various disciplines will need to work together to create the perfect reading interface.

Connected Lab works with the world’s most ambitious companies to deliver the best connected experiences across multiple platforms, including mobile, web, smart TVs, and VR/AR. Our clients come to us for our transformative approach to software development, rooted in Extreme Programming and Design Thinking, which we call the ‘Connected Method’.

Stay connected by signing up for our mailing list here, and thanks for reading.