Hands and Mixed Reality

Our hands are the most natural hardware we possess.

As we innovate, the time is coming to ditch physical hardware such as keyboards, mice, and the other input devices that have long served as extensions of our bodies. For interaction in virtual reality (VR), controllers are a natural but temporary bridge between the present and the future. As we continue to develop and refine VR, and more importantly mixed reality (MR), we should focus our efforts on hands, not on more hardware controllers. The good news is that companies like Leap Motion are on it.

In 2011, I read an essay by Bret Victor titled “A Brief Rant on the Future of Interaction Design.” In his essay, he wrote about the importance of our hands and how they tend to get ignored in many projections of the future. Although the popular emergence of VR has shifted the paradigm slightly, his argument still applies.

“Our hands feel things, and our hands manipulate things. Why aim for anything less than a dynamic medium that we can see, feel, and manipulate?” — Bret Victor

Current technology limits the way we can feel objects in VR, but plenty of research and development is going into this arena. Disney, for example, has had breakthroughs with haptic holograms, and products like Gloveone are gloves that let you touch and feel in VR. We are getting ever closer to Star Trek’s holodeck: simulations you can touch and feel. So when companies like Oculus and HTC keep shipping controllers instead of tracking our hands, it is a bit disappointing and, in the end, not that innovative.

A scene from Star Trek: The Next Generation showing a high-fidelity “simulation” of a wooded area.

How we interact and communicate with technology in VR/MR without a hardware device will be paramount to its success. If we can master that communication and get technology to understand us, no matter who we are and what mistakes we make as flawed beings, then we will truly witness something magical.

“Let’s have computers serve us.” — Golden Krishna

For my Graphic Design MFA thesis at the Maryland Institute College of Art, I worked under the amazing minds of Ellen Lupton, Jennifer Cole Phillips, and Jason Gottlieb. During my time there, I developed a conceptual prototype for a VR/MR technology called ora. As I got deeper into the project, I realized that the user experience and visual design of interfaces play a secondary role to a more fundamental unsolved problem: How will we speak to the computer?

Traditional input methods, whether a keyboard and mouse or a touchscreen, no longer apply in the realm of VR/MR. So what do we do? I would argue that voice communication is less than ideal. Imagine being in a public situation: you would not necessarily want people to know what you are doing at any given moment. Think about how underutilized Siri is. We need systems that are more discreet.

I took a step back from my original thesis goal of interface design for VR/MR and started obsessing over one question: “How will we speak to the computer?”

Growing up, I had a friend whose sister was deaf. To communicate with her, I had to learn American Sign Language, a language built from gestures. That memory resurfaced as I started fidgeting with gestures I could reliably remember for basic computer tasks, like opening and closing an interface. But the gestures couldn’t just come from my own head.

Final iteration of gestures for closing an interface and opening a new one. The closing gesture mimics the action of closing a book or laptop and sliding it away. Opening is a simple gesture of sliding an interface into your view.
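
To make the gestures concrete, here is a minimal sketch of how they might be recognized from hand-tracking data. Everything here is an assumption for illustration: the PalmSample shape, the coordinate conventions, and the thresholds are hypothetical, not the API of Leap Motion or any real tracker.

```typescript
// A minimal sketch of recognizing the "close" and "open" gestures described
// above from a short window of tracked palm positions. The PalmSample shape,
// coordinate conventions, and thresholds are illustrative assumptions.

type Vec3 = { x: number; y: number; z: number };

interface PalmSample {
  position: Vec3; // palm center in meters; +z points away from the body
  timeMs: number; // sample timestamp
}

type Gesture = "close" | "open" | "none";

function classifyGesture(samples: PalmSample[]): Gesture {
  if (samples.length < 2) return "none";

  const first = samples[0].position;
  const last = samples[samples.length - 1].position;
  const dx = last.x - first.x; // lateral motion
  const dy = last.y - first.y; // vertical motion
  const dz = last.z - first.z; // motion away from the body

  // "Close" mimics shutting a book and sliding it away:
  // the palm drops and pushes outward.
  if (dy < -0.10 && dz > 0.10) return "close";

  // "Open" slides an interface into view:
  // mostly lateral motion with little vertical change.
  if (Math.abs(dx) > 0.15 && Math.abs(dy) < 0.05) return "open";

  return "none";
}
```

Even a toy recognizer like this shows why the gestures can’t come from one designer’s head: every threshold encodes an assumption about how far and how fast people actually move.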

As anticipated, creating a new language is neither a simple nor a quick task. Much like user experience design, you have to be on the outside looking in. I started observing gestural interactions in daily life, on television, and even in video games. All of my observations brought me back to Bret Victor’s article and the affordances of everyday objects.

What if the gestural interactions were as simple as the affordances themselves? In the real world, a button affords pushing, a fridge door affords pulling, and a lid on a jar affords turning. Why wouldn’t the same go for the way we interact with objects and interfaces in a VR/MR space?
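
To sketch the idea, imagine each virtual object declaring the affordance it was designed around and responding only to the matching hand motion. The names and thresholds below are hypothetical, not part of any shipping VR framework.

```typescript
// A sketch of affordance-driven interaction: each object responds only to
// the motion its real-world counterpart affords. All names and thresholds
// here are illustrative assumptions.

type Vec3 = { x: number; y: number; z: number };

type Affordance = "push" | "pull" | "turn";

interface HandMotion {
  translation: Vec3;    // net hand movement relative to the object, meters
  twistRadians: number; // wrist rotation about the object's axis
}

function matchesAffordance(affordance: Affordance, m: HandMotion): boolean {
  switch (affordance) {
    case "push": // a button affords pushing: press inward
      return m.translation.z < -0.02;
    case "pull": // a fridge door affords pulling: draw outward
      return m.translation.z > 0.02;
    case "turn": // a jar lid affords turning: twist the wrist
      return Math.abs(m.twistRadians) > 0.35; // roughly 20 degrees
  }
}
```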

I’ll be the first to admit that many of the gestures I developed are imperfect. For example, one of my initial concepts for turning on your VR/MR system was to check your pulse, much as a doctor or nurse would. Is it the best gesture? Probably not. What’s great about any new technology is that we get to be pioneers, refining and perfecting our ideas over time.

An interface I developed that affords turning and pushing.

Additional Commentary

The gestural system I developed is deliberately over-dramatized so that the interactions read clearly on screen. I regret to inform you that the future will not be like Minority Report. Why, you might ask? Because humans are lazy by nature. Gestural communication with computers will almost certainly be minimized into micro-gestures. A micro-gesture, by definition, is a small, almost indiscernible movement of part of the body, especially a hand or the head, used to express an idea or meaning. Ahh, there’s the discreet communication system we’ve been searching for this whole time.
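
In code, the difference between a theatrical gesture and a micro-gesture could be as small as the magnitude a recognizer accepts. The figures below are illustrative assumptions, not measurements.

```typescript
// A sketch of the micro-gesture idea: same intent, far smaller motion.
// Only the magnitude band changes; the shape of the gesture stays the same.
// The centimeter figures are illustrative assumptions.

type Vec3 = { x: number; y: number; z: number };

function motionMagnitude(d: Vec3): number {
  return Math.hypot(d.x, d.y, d.z);
}

// Net fingertip displacement over a short window, in meters.
function isMicroGesture(displacement: Vec3): boolean {
  const m = motionMagnitude(displacement);
  // A theatrical, on-screen gesture might sweep 10-15 cm;
  // a discreet micro-gesture might travel under 1.5 cm.
  return m > 0.003 && m < 0.015;
}
```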


A few people who inspire me:

Jody Medich, Mike Alger, Bret Victor, Josh Carpenter