Eye Tracking in VR
Eye Tracking in Virtual Reality fascinates me. Lately, I’ve been wondering how you could develop an interface that is completely driven by eye tracking.
Why even contemplate this? We have hands, right? Well… some people don’t. Or some people can’t use their hands. Paralysis, ALS, amputation, broken bones. Many things can prevent people from using their hands as a means of control in VR.
I postulate a fully hands-free and movement-free VR interface, relying exclusively on eye tracking.
Initially, it seems pretty weird. How could you manipulate an avatar in a virtual world using only your eyes? Stopping, going, interacting, looking around… there’s a lot to consider in developing an interface like this.
Control in VR is still a bit like the Wild West, full of experimentation and ideas, with no real ‘gold standard’ of control beyond the Vive’s (and soon Oculus’s) dual-controller setup.
I’m not proposing that an eye tracking interface in VR could totally replicate the immersion of using your hands in VR, but I think it could come pretty close for those who don’t have the necessary motor capabilities to experience VR in this way.

Looking Around
Looking around is probably the most basic thing to do in virtual reality. The first thing you experience when you go into VR (before you hold up your hands to see if you can see them) is plain sight: just looking at the world in front of you.
Any given HMD presents a limited field of view onto the virtual space. If you cannot move your head, how could you use your eyes alone to get a 360-degree view?

Imagine dividing the edges of the view into ‘look-zones’, giving us a basic up-down-left-right input field. I would suggest that when the user’s pupil enters one of these zones, the camera pans in that direction.
This brings up some questions. Should the “look-zones” actually be visible to the user? Or should they be invisible?
The center of the view must have enough neutral space that you can rest your eyes and observe the world or interact with something without the camera constantly moving. It would obviously take a lot of testing to find that sweet spot; a rough sketch of the idea follows.
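As a minimal, engine-agnostic sketch of how this might work, here is some Python that maps a gaze point to camera pan rates, with a central dead zone for resting the eyes. The coordinate convention, names, and thresholds are all my own assumptions and would need tuning against a real eye tracker.

```python
# A minimal sketch. Assumes the eye tracker reports a gaze point in
# normalized view coordinates: (0, 0) at the center of the HMD's view,
# +/-1 at the edges. All names and numbers here are hypothetical.

DEAD_ZONE = 0.6   # gaze inside this band rests the camera
PAN_SPEED = 90.0  # maximum pan rate in degrees per second

def pan_rates(gaze_x: float, gaze_y: float) -> tuple[float, float]:
    """Map a gaze point to (yaw, pitch) pan rates in degrees/second."""
    def axis_rate(v: float) -> float:
        overshoot = abs(v) - DEAD_ZONE
        if overshoot <= 0:
            return 0.0  # eyes resting in the neutral zone: no panning
        # Ramp from 0 at the zone border up to PAN_SPEED at the rim, so
        # the camera pans faster the closer the gaze sits to the edge.
        return PAN_SPEED * (overshoot / (1.0 - DEAD_ZONE)) * (1.0 if v > 0 else -1.0)
    return axis_rate(gaze_x), axis_rate(gaze_y)

# Per frame: yaw, pitch = pan_rates(gx, gy), then rotate the camera
# by (yaw * dt, pitch * dt).
```

Ramping the pan rate with how deep the gaze sits in a zone (rather than a hard on/off pan) would probably feel less jarring, but that’s exactly the kind of thing the testing would have to settle.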
You could even imagine a sort of ‘quick-turn’ executed by ‘swiping’ your gaze from left to right, or right to left. This action could allow the user to quickly execute a 180-degree turn.
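That quick-turn could be detected with a simple two-edge check: arm when the gaze touches one side of the view, fire if it reaches the opposite side fast enough. A hypothetical sketch, using the same normalized gaze coordinates as above:

```python
# Hypothetical quick-turn detector. EDGE and MAX_SWEEP_TIME are
# made-up tuning values.

EDGE = 0.8             # |gaze_x| beyond this counts as "at the edge"
MAX_SWEEP_TIME = 0.25  # seconds allowed for a full left<->right sweep

class QuickTurnDetector:
    def __init__(self) -> None:
        self._armed_side = 0   # -1 (left) or +1 (right) once armed
        self._armed_at = 0.0

    def update(self, gaze_x: float, now: float) -> bool:
        """Feed the gaze x and a timestamp each frame; True = do a 180."""
        side = -1 if gaze_x < -EDGE else (1 if gaze_x > EDGE else 0)
        if side == 0:
            return False  # mid-view frames don't disarm an in-progress sweep
        if self._armed_side == -side and now - self._armed_at <= MAX_SWEEP_TIME:
            self._armed_side = 0  # consume the gesture so it fires once
            return True
        self._armed_side, self._armed_at = side, now
        return False
```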
Movement
Next: forward and backward movement. The user needs a simple way to start and stop moving that doesn’t interfere with the established looking mechanism.
There are a couple of ways to approach this, to my mind. The first thing I thought of was visual buttons: say, a green circle to initiate the ‘walk’ action, and a red circle to initiate a ‘stop’ action. Users could interact with these buttons in the same way that current eye tracking interfaces work (focusing the gaze on a specific button for an established dwell time to activate it).
Activating the go button would begin a walk sequence. Maybe additional ‘presses’ of the walk button could increase the user’s speed? Or perhaps the user would choose a single speed fitting the specific VR experience. And obviously, ‘pressing’ the stop button would halt movement.
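Here’s a small sketch of that dwell-to-press mechanic, with repeated walk presses stepping up the speed. The dwell time, speed steps, and function names are all hypothetical:

```python
# Sketch of dwell-to-activate buttons. DWELL_TIME and SPEEDS are
# hypothetical values; a real interface would tune them per experience.

DWELL_TIME = 0.8               # seconds of sustained gaze per "press"
SPEEDS = [0.0, 1.0, 2.0, 3.0]  # walk speeds in m/s; index 0 is stopped

class DwellButton:
    def __init__(self, dwell_time: float = DWELL_TIME) -> None:
        self.dwell_time = dwell_time
        self._held_for = 0.0

    def update(self, gaze_on_button: bool, dt: float) -> bool:
        """Call once per frame; returns True the moment a dwell completes."""
        if not gaze_on_button:
            self._held_for = 0.0  # looking away resets the timer
            return False
        self._held_for += dt
        if self._held_for >= self.dwell_time:
            self._held_for = 0.0  # keep staring and it counts as another press
            return True
        return False

walk_button, stop_button = DwellButton(), DwellButton()
speed_index = 0

def movement_speed(gaze_on_walk: bool, gaze_on_stop: bool, dt: float) -> float:
    """Per-frame update; extra walk presses step the speed up."""
    global speed_index
    if walk_button.update(gaze_on_walk, dt):
        speed_index = min(speed_index + 1, len(SPEEDS) - 1)
    if stop_button.update(gaze_on_stop, dt):
        speed_index = 0
    return SPEEDS[speed_index]
```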
This button scheme would give the user a tank-like (or maybe classic Resident Evil) style of moving around: activate movement, then look left or right to make your avatar change his or her heading toward a different direction.
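Tying the two ideas together, one frame of that tank-style locomotion might look like the following, combining the look-zone yaw rate from the earlier sketch with the button-selected walk speed (again, purely illustrative):

```python
import math

# Illustrative only: one frame of tank-style locomotion.

def step_avatar(x: float, z: float, heading_deg: float,
                yaw_rate: float, speed: float, dt: float):
    """Advance a ground-plane avatar; returns the new (x, z, heading_deg)."""
    heading_deg += yaw_rate * dt           # looking left/right steers the body
    heading = math.radians(heading_deg)
    x += math.sin(heading) * speed * dt    # walk along the current facing
    z += math.cos(heading) * speed * dt
    return x, z, heading_deg
```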
Alternative ‘swipe’-based movement
Thinking a little deeper about movement, I would imagine that a range of possible actions could be executed by ‘swiping’ your gaze. Just like we swipe on our phones with our fingers, swiping the gaze in certain patterns could create a more fluid style of movement.
For example, say you wanted to start walking forward. You could ‘flick’ your eyes from the bottom to the top of the HMD to initiate a walk. And you could repeat this behavior to increase your speed.
You could slow down by performing the reverse eye swipe. Or you could stop by performing the same up-down swipe that slows you down, but holding your gaze a fraction of a second longer at the bottom of the HMD to stop your avatar completely.
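This flick-and-hold scheme could be a single small state machine over the gaze’s vertical position. A sketch, reusing the same normalized coordinates as before (gaze_y of +1 at the top of the view, -1 at the bottom); every threshold here is a guess that would need play-testing:

```python
# Sketch of 'swipe'-based movement: an upward gaze flick speeds the
# avatar up, a downward flick slows it down, and lingering at the
# bottom edge stops it outright. All thresholds are hypothetical.

EDGE = 0.8             # |gaze_y| beyond this counts as the top/bottom edge
MAX_FLICK_TIME = 0.25  # max seconds for an edge-to-edge flick
STOP_HOLD_TIME = 0.4   # linger this long at the bottom for a full stop
MAX_SPEED = 3          # number of walk-speed steps

class SwipeMovement:
    def __init__(self) -> None:
        self.speed_level = 0    # 0 = stopped .. MAX_SPEED = fastest
        self._edge_side = 0     # -1 bottom, +1 top, 0 = no edge armed yet
        self._edge_since = 0.0  # when the gaze last entered that edge

    def update(self, gaze_y: float, now: float) -> int:
        """Feed the gaze's vertical position each frame; returns speed level."""
        side = 1 if gaze_y > EDGE else (-1 if gaze_y < -EDGE else 0)
        if side != 0 and side != self._edge_side:
            if self._edge_side != 0 and now - self._edge_since <= MAX_FLICK_TIME:
                # Fast sweep between edges: an up-flick accelerates one
                # step, a down-flick brakes one step.
                self.speed_level = max(0, min(MAX_SPEED, self.speed_level + side))
            self._edge_side, self._edge_since = side, now
        elif side == -1 and now - self._edge_since >= STOP_HOLD_TIME:
            self.speed_level = 0  # gaze lingered at the bottom: full stop
        return self.speed_level
```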
I think there could be so many possible uses for eye-swipe movements (maybe even combined with blinks?) to give the user the ability to execute many different moves without needing to rely on a visual button layered over the virtual environment.
Conclusion
I feel that it is definitely possible to create compelling VR experiences using eye tracking as the exclusive means of control. For people who do not have the ability to control their bodies, this could be an amazing tool to escape, explore, learn, and interact with others. It will take a lot of experimentation and ideas to create an interface such as this, but I think it’s a task worth taking on.
Beyond that, eye tracking is not something typically considered for people with full use of their hands. But I feel that as VR and AR become more ubiquitous (especially AR interfaces, such as Magic Leap), people will be interested in an eye tracking interface, most likely in conjunction with a hand-based interface.
Ultimately, it’s going to depend on the application. Controls would of course change from application to application, but defining some best practices for how this behavior could work seems like a good first step in getting this technology off the ground. HMDs like the FOVE already support eye tracking, and it seems likely that companies like Oculus and HTC will adopt this tech as they improve upon their current models.