First Thoughts on Room-Scale VR

Over the weekend, I had my first experience with room-scale VR using an HTC Vive developer kit. During this initial run, I played Waterbears VR, Job Simulator, and Fantastic Contraption. Overall, I was really impressed by how far the technology had come and how successfully it convinced me that I was in another place. Still, I wanted to document the things that didn’t work or felt uncomfortable, not to criticize the hardware or the designers of the experiences, but to expose these moments of friction and eventually learn whether (and how) they change over time and with experience.

HTC Vive base station

In VR, I am reduced to a gaze and two floating controller-hands.

In two of my experiences, my hands aren’t represented in-game, but the controllers they’re gripping are. I appreciate the honesty of this interface, but interacting with a 3D environment through only these controllers feels like playing Edward Fortyhands with Wiimotes, or wearing feature-rich mittens — functional, but still fundamentally clumsy on the first run-through. This maladroitness is exacerbated by the lack of tactile feedback, which we rely on to learn how to adjust our motions and inhabit our tools.

The rest of my body is simply absent in-game, and I’m surprised by the ease with which I forget about it. VR only reminds me of my meatbag form accidentally, like when the long cord tethering my headset to the computer brushes against my back as I turn and move. That said, I found “myself” clipping through a table at one point, and it felt a little sickening even though there were no visual indicators of the collision.

While my body is dematerialized in the game, it’s actually expanded in real life with all the clunky gear to wear and hold. On multiple occasions, I clack my controllers against my helmet, forgetting where my controller-hands end and my helmet-head begins. To accommodate the controllers’ limitations, the interface abstracts interactions to the point-click-drag model we’re familiar with on flat screens, but there are still real physical challenges to overcome in executing those gestures here.

Across the games, my least favorite interaction combined all of these problems: it required me to pick up an in-game helmet and put it on my own head. Physically, the gesture felt like trying to put a shirt on using chopsticks; cognitively, the dissonance between the proprioceptive maneuver and my absent body made my skin crawl.

Getting off the ride is a disorienting process without practice, too. The controller-hands had to be put down before the helmet could be removed, but where? On the floor, or on the furniture I couldn’t see? When I eventually returned to real-reality, I suddenly registered how achy my back felt. The main culprit was the combination of the heavy headset with all of the bending over I didn’t even realize I was doing, but it also didn’t help that my hands were tensed and gripping the controllers the whole time. If we are crafting longer experiences in VR, will we also have to figure out how to teach users how to be mindful of bodies they can’t see?

Shortly afterwards, I walked down the street and noticed my eyes catching on unlikely places. All the browns of rust and dirt and bark that were normally too drab to be noticed suddenly popped vividly, probably because they were so absent amongst the bright tones of the VR world. My gaze also dragged pleasurably along gritty textures, drinking in the high-resolution randomness it had acutely missed: the sediment in the sidewalk, the brick facades crumbling, the tiny clusters of spines on evergreen bushes.

I paused at an intersection and looked down, appreciating the simple view of my feet standing solidly on the pockmarked sidewalk.

Christina Xu is an ethnographer and enabler based in New York who likes to think about how people use technology, especially in social contexts.