VR, John Locke, and the Value of Neuroscience

Every HMD maker should care about neuroscience research, for it is critical to the success of VR.

Shahid Karim Mallick
3 min read · Jun 21, 2016

We currently enter virtual and augmented spaces through head-mounted displays (HMDs). These devices project or beam light into our retinas to make us see a digital image that seems real even though it isn't: an apparition that is at once real and not real.

Binaural audio casts sound into our ears from all directions, or at least it seems that way.

Soon, haptic gloves will send signals into our tactile receptors, the nerve endings in our skin, to make us feel an object that seems distinctly not digital, but perhaps could be.

Other inputs will follow suit. Stimulation of olfactory nerves will make us smell scents that might as well be real. Stimulation of gustatory nerves will make us taste flavors that seem familiar even though they are foreign.

And it won’t stop at the five senses. Galvanic stimulators will send electrical signals into our vestibular system to make us feel motion that isn’t happening. But let’s face it: does it matter whether it’s “real” if it feels just the same?

These devices, which I’ll collectively refer to as augmentation tools, stimulate our senses to make us aware of a virtual environment, just as the real world stimulates our senses to make us aware of our natural environment (a process otherwise known as perception).

If it looks like an apple, smells like an apple, and feels like an apple…it must be an apple. Right?

Augmentation tools are essentially trying to send information into our brains about things that aren’t really there (what counts as “real” is still up for debate). They attempt to trick the senses in order to make the intangible seem real. After all, we only know what is real through what our senses tell us, the core of John Locke’s Causal Theory of Perception.

The Role of Neuroscience

Perception research in neuroscience is explicitly concerned with how our brains process and integrate sensory information to build understanding. Studying how our senses take in information is crucial because, with immersive VR and AR, we are essentially trying to replicate, or reverse engineer, that process.

However, we’re trying to shrink it down, to fake it — instead of creating an apple, we want to convince our senses that an apple is there. Therefore, we can’t just study the senses, but need to study what fools them.

Our brains can be fooled very easily, but we have to find where those margins exist and how to manipulate them. Andrea Hawksley at eleVR writes,

“The thing about perceptual illusions is that even though they seem baffling, as though our brains were making crucial mistakes, they actually tell us really important things about how we perceive the world. And, generally, what they tell us is that our brain is taking what actually ought to be insufficient information and processing it based on some fairly reasonable assumptions about what our world is really like to come up with a good understanding of what is probably actually happening.”

(a) A half-eaten apple in the real world (or a half-rendered apple in the virtual world, however you want to think of it).
(b) The senses see only the half of the apple facing them, perfectly intact (the visible half); the bitten half is hidden from view (the invisible half).
(c) The brain processes the information: “I see half an apple, and from what I know about apples, they are symmetrical. There is no reason to believe the invisible half is deformed or different, so that must be a whole apple.”

Perceptual tricks and illusions are the shortcuts we can use to fool our senses and bridge the gap between real and virtual (“not-real?”). For example:

Haptic Retargeting:

The user perceives the same physical block as three different virtual blocks because of slight shifts in the virtual scene; his hand doesn’t notice that it’s reaching for the same spot each time.
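The core of this trick can be sketched in a few lines. The following is a minimal, hypothetical illustration (all names are my own, not from any published system): as the physical hand travels from its starting point toward the one real prop, an offset between the prop and the desired virtual target is blended in, so the warp is imperceptible at the start of the reach and complete on contact.

```python
from math import dist

def clamp01(x):
    """Clamp a value into the range [0, 1]."""
    return max(0.0, min(1.0, x))

def retarget_hand(real_hand, start, real_prop, virtual_target):
    """Return the rendered (virtual) hand position for a physical hand position.

    The offset (virtual_target - real_prop) is blended in proportionally to
    how far the hand has traveled from `start` toward the real prop, so the
    physical hand always lands on the single real prop while the virtual
    hand appears to touch the virtual target.
    """
    total = dist(start, real_prop)
    alpha = clamp01(dist(start, real_hand) / total) if total else 1.0
    offset = tuple(v - p for v, p in zip(virtual_target, real_prop))
    return tuple(h + alpha * o for h, o in zip(real_hand, offset))

# One real block, two different virtual targets: both reaches end on the prop.
start = (0.0, 0.0, 0.0)
prop = (1.0, 0.0, 0.0)          # the single physical block
target = (1.0, 0.5, 0.0)        # where the virtual block is drawn

print(retarget_hand(start, start, prop, target))   # reach begins: no warp
print(retarget_hand(prop, start, prop, target))    # reach ends: full warp
```

Because the warp accumulates gradually over the whole reach, each frame’s shift stays below what the proprioceptive system can detect, which is what lets one prop stand in for many virtual objects.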

Redirected Walking:

The user does not notice that he’s walking in a curved line because vision dominates our other senses (seeing is believing).
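A common way to implement this is with a curvature gain: inject a tiny rotation into the virtual scene on every step, which the user unconsciously compensates for by turning, so a straight virtual path becomes a real-world arc. Here is a minimal sketch, with an assumed per-step gain (the function and parameter names are illustrative, not from any particular toolkit):

```python
import math

def redirect_step(real_pos, real_heading, step_len, gain_deg):
    """Advance one physical step while injecting a small scene rotation.

    real_pos: (x, y) position in the physical room, in meters.
    real_heading: current walking direction, in radians.
    step_len: length of one step, in meters.
    gain_deg: rotation injected per step; the user turns by this much to
              keep their *virtual* path straight, bending the real path.
    """
    heading = real_heading + math.radians(gain_deg)
    x, y = real_pos
    return (x + step_len * math.cos(heading),
            y + step_len * math.sin(heading)), heading

# Simulate 90 one-meter steps with a 1-degree-per-step gain: the walker
# believes they walked straight, but has turned a quarter circle.
pos, heading = (0.0, 0.0), 0.0
for _ in range(90):
    pos, heading = redirect_step(pos, heading, 1.0, 1.0)

print(math.degrees(heading))  # net real-world turn after 90 "straight" steps
```

Keeping the injected rotation small enough per unit distance is exactly the kind of perceptual-threshold question neuroscience can answer; exceed the threshold and the illusion breaks.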

Additional examples: Invisibility

Researchers are slowly beginning to see the need for interdisciplinary solutions. However, there is still astonishingly little collaboration in academia between EECS and neuroscience researchers on VR/AR, natural interfaces, and the future of personal computing. There should be much more overlap as these specialized fields converge on this fascinating nexus. As we are confronted with greater perceptual challenges (e.g. vergence-accommodation conflict), the role and value of neuroscience will become clearer.

And it will work both ways. As VR and AR tech improve, they will increasingly be used in perception research to drive more interesting questions and answers, yielding a positive feedback loop of profound perceptual and personal computing progress.


Shahid Karim Mallick

I build natural interfaces (see my latest work at smallick.com). Studied neuroscience @BrownUniversity. Product @NeoSensory, previously @CalaHealth