Do we really need our hands in Virtual Reality?

Virtual Reality provides us with dream-like worlds, where we can have experiences that we can’t have in the physical world. Using our physical bodies as input devices might limit those experiences.
Our hands are fundamentally how we interact with the physical world. Being able to do the same in Virtual Reality is currently one of the industry’s biggest challenges. Every time you reach out to grab an object in VR, and you don’t see your virtual hands doing the same action, the illusion is broken.
“You want to see your hands when you go into virtual reality,” — John Carmack
“As long as an input solution isn’t there, people will put on the headset and say ‘where are my hands?’” — Palmer Luckey
Once we jump into a virtual world, our first impulse is to see whether or not we exist in that world, and we raise our hands to validate that. Our second impulse is to find out if we can interact with the virtual world and influence it, so we try to use our hands just as we do in the physical world.
But do we really need to see and use our hands in the virtual world, the same way we see them and use them in the physical world?
Before answering this question, let’s look at how the Virtual Reality industry has been trying to solve this problem:
Oculus, HTC and other hardware manufacturers are launching positional tracking controllers that track your hand movement, and are fitted with buttons and joysticks that help you perform different actions. To grab a ball in VR, you reach out with your hand holding the controller and press a button.

Leap Motion, which can be mounted on an Oculus headset, uses infrared to track where your hands are in the physical world, and shows them in the virtual world. However, the illusion of VR still breaks down once you grab an object with your virtual hands, and feel your physical hands grabbing nothing.

Companies like Virtuix and Cyberith are working on full body motion trackers with a treadmill-like design to track how you walk, run, duck and jump. You need an extra $700 for the setup, and extra space in your office or living room.

GloveOne and TeslaSuit create wearables that track full hands and body motion, and provide haptic feedback based on what you’re interacting with in VR.

And finally, companies like the VOID are mapping a large physical space into the virtual world, where we can freely walk and shoot each other.

All these solutions have one thing in common: they are mirroring an action taken in the physical world to interact with the virtual world. And since the virtual object we’re interacting with doesn’t exist in the physical world, they are also creating physical feedback devices that make us believe it does.
What if the best virtual reality input solution doesn’t mirror our bodies from the physical world, but rather extends them with a brand-new existence in the virtual world: virtual bodies that are controlled solely by our minds?
What if every time you think about reaching out and grabbing that virtual apple, your virtual hand reaches out and grabs it, without your physical hand having to move at all?
Let’s explore this for a bit.
When you see an apple in the physical world and think about eating it, your brain forms an intent, sends a signal to your arm to move toward the apple, and to your fingers to grab it, lift it up, and move it toward your mouth.

The intent is to move the object. The action is to move the arm and fingers toward it, grab it, and lift it up. The result is that the object starts moving.
Right now, we are solving VR input at the action level (moving the physical hand) rather than the intent level (thinking about moving the hand). It’s a retrofit approach to VR input, because it tries to use our physical bodies as controllers in a space they don’t exist in, instead of rethinking from scratch what it means to have a virtual body in a virtual world.
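To make that distinction concrete, here’s a minimal Python sketch of the two levels. Every class and function in it is hypothetical, standing in for whatever a real engine would provide; the point is only where the decision gets made.

```python
# A conceptual sketch, not a real engine API: every name here is
# hypothetical, just to make the intent/action/result split concrete.

from dataclasses import dataclass, field


@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0


@dataclass
class Apple:
    position: Vec3 = field(default_factory=Vec3)
    held: bool = False


@dataclass
class TrackedController:
    position: Vec3 = field(default_factory=Vec3)
    grip_pressed: bool = False


def near(a: Vec3, b: Vec3, radius: float = 0.1) -> bool:
    # Simple distance check (meters).
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2 <= radius ** 2


# Action level: the virtual hand mirrors the physical hand every frame,
# and the result (apple.held) only follows from completing the action.
def action_level_update(controller: TrackedController, apple: Apple) -> None:
    virtual_hand_position = controller.position  # mirror the physical body
    if controller.grip_pressed and near(virtual_hand_position, apple.position):
        apple.held = True


# Intent level: a recognizer (gaze, EEG, ...) emits the intent directly,
# and we jump straight from intent to result, skipping the action entirely.
def intent_level_update(detected_intent: str, apple: Apple) -> None:
    if detected_intent == "grab_apple":
        apple.held = True
```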

In the Matrix, Neo doesn’t get strapped to a treadmill with haptic gloves in his hands, moving his arms in the air while he’s running around the matrix punching agents.
That wasn’t Neo. That was Johnny. And Johnny wasn’t as cool!

Instead, Neo has his brain plugged into the matrix, while lying on a recliner in the Nebuchadnezzar. He doesn’t enter the matrix with his body, but with his mind. And before entering the matrix for the first time, he goes through virtual training simulations to train his mind to use his new virtual body (his residual self image), and to adapt it to the rules of that new world. He ultimately learns that, in the matrix, he is limited only by what he thinks he can do, not by what he can already do in the real world.

Inception provides a similar mode of operation in the dream world, and so does Avatar.

These movies suggest that experiencing and interacting with a virtual world is all in the mind.
So the question is: if we have a VR input device that recognizes our intent, and triggers the required action in the virtual world without our having to perform a similar action in the physical world, would that device train/fool our minds enough to believe that we have virtual bodies that exist independently of, and are controlled separately from, our physical bodies?
In other words, if you can walk with your virtual body on a virtual beach and reach out with your virtual hands to grab a virtual coconut, just by thinking about it, and without having to move a muscle in your physical body, would your brain start to believe that you have a brand new existence, one that is independent, for the most part, from your physical existence?

An even better question is: if we can quickly and accurately communicate our thoughts and intent to the virtual world, do we need virtual hands or virtual bodies at all? Can we skip directly from the intent (thinking about grabbing the apple) to the result (the apple moving towards you) without having to go through the action (reaching out to grab the apple and move it with your virtual hand)? And can our minds adapt to that new mode of Jedi-mind-like operation?

Since we tend to look at an object before interacting with it, a popular mode of operation in VR is to detect that intent through gaze-and-hold. A reticle that follows the gaze serves as a selection pointer, and fixing our gaze for 1–3 seconds confirms the selection/action.
In this mode, the action (moving virtual arms to interact with a virtual object) is skipped, and the result (teleporting to a new location, or lifting and moving an object) is performed directly.
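A rough sketch of how such a dwell timer might work is below; the class and its inputs are hypothetical, not taken from any particular SDK, but this is the essence of gaze-and-hold selection.

```python
# A minimal sketch of a gaze-and-hold (dwell) selector. All names are
# hypothetical; a real implementation would feed in the headset's
# per-frame gaze ray instead of these placeholder arguments.

class GazeDwellSelector:
    def __init__(self, dwell_seconds: float = 2.0):
        self.dwell_seconds = dwell_seconds  # typically tuned between 1 and 3 s
        self.current_target = None
        self.elapsed = 0.0

    def update(self, gazed_target, dt: float):
        """Call once per frame with the object under the reticle (or None)
        and the frame time in seconds. Returns the target once confirmed."""
        if gazed_target is not self.current_target:
            # Gaze moved to a new target (or away): restart the dwell timer.
            self.current_target = gazed_target
            self.elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0  # reset so we don't re-trigger every frame
            return gazed_target  # dwell complete: intent confirmed
        return None
```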

This works well in carefully designed virtual environments, but it falls short when you need to perform multiple actions at once, and you can’t perform a series of actions quickly.
Is there a better solution?
Wearable EEG devices (a.k.a. brain activity trackers), most notably the Emotiv Insight, provide a glimpse into a future where devices are controlled by the mind’s intent, rather than the body’s action. Think about adjusting the thermostat, and it’s done. Think about getting a cup of water, and your mini-drone-assistant will fly to the kitchen and get it for you!

The first time you wear a brain tracker and gaze at the cube on your screen to move it up with your thoughts, you experience a moment just as magical as the first time you put on a VR headset, look around, and see a new world that you’ve been instantly teleported to.
The combination of a brain tracker and a headset might be a whole new game for Virtual Reality, and intent might just be the best way to interact with it.
Instead of strapping on your VR goggles, a haptic suit, and a pair of gloves, and jumping on a treadmill, you simply put on a pair of VR goggles fitted with EEG sensors and haptic-feedback simulators, and use your virtual telekinesis powers to move around the virtual environment and interact with it, without having to lift a finger in real life.
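Stripped to its skeleton, that setup is just a loop that reads a classified intent and applies it to the scene. Here’s a hedged Python sketch of that loop; none of these functions are the real Emotiv or Oculus APIs, they’re placeholders for whatever an actual integration would expose.

```python
# A hedged sketch of the EEG-to-VR loop. read_mental_command() and
# apply_to_scene() are hypothetical stand-ins, not a real SDK.

import random
import time


def read_mental_command():
    """Placeholder for an EEG classifier's output: a (command, confidence)
    pair, e.g. a user-trained 'push' or 'pull' mental command."""
    return random.choice([("push", 0.8), ("pull", 0.6), (None, 0.0)])


def apply_to_scene(command):
    """Placeholder for the VR side: act on the currently gazed-at object."""
    print(f"virtual telekinesis: {command} the selected object")


CONFIDENCE_THRESHOLD = 0.7  # ignore weak classifications


def main_loop(frames: int = 300):
    for _ in range(frames):
        command, confidence = read_mental_command()
        # EEG signals are noisy, so thresholding on the classifier's
        # confidence is what keeps objects from jittering around.
        if command is not None and confidence >= CONFIDENCE_THRESHOLD:
            apply_to_scene(command)
        time.sleep(1 / 60)  # roughly once per rendered frame


if __name__ == "__main__":
    main_loop()
```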
There are currently more questions than answers, and there is a lot of experimentation ahead to find out if this solution is possible and if our brains and bodies are willing to accept it.
Over the next couple of months, I’ll be running a series of experiments to interface the Emotiv Insight with the Oculus Rift, and to investigate further the premise of mind-controlled virtual environments.
I’ll be sharing the results here on Medium.