Video for VR is Wrong.

You can put away the GoPros now.

I thought this was as obvious as the sun, but apparently it’s not. 
So I’ll lay it down here:

Video, like any imagery captured with a camera, does not work in VR.
Yes, even stereoscopic 360º spherical video.

Why? Because the camera was fixed at one point in space, and the viewer’s head is not. So although the field of view in the viewer’s HMD (head-mounted display) will respond to the head’s rotational movement, it will not respond to lateral (translational) movement.

And guess what: unless you are only rolling your eyes around in their sockets like a possessed corpse (movement which VR HMDs don’t register anyway — here’s looking at you, Fove), even just tilting and turning your head translates your eyes through Cartesian space.
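To make that concrete, here is a minimal numerical sketch (my own illustration; the eye offset of roughly 10 cm forward and 8 cm up from the neck pivot is an assumed ballpark figure, not anatomical data) of how a pure head rotation still drags the eyes through space:

```python
import numpy as np

def yaw(deg):
    """Rotation matrix for turning the head left/right by `deg` degrees."""
    r = np.radians(deg)
    return np.array([[ np.cos(r), 0.0, np.sin(r)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(r), 0.0, np.cos(r)]])

neck_pivot = np.array([0.0, 0.0, 0.0])    # the head pivots about this point
eye_offset = np.array([0.0, 0.08, 0.10])  # assumed: eyes ~8 cm up, ~10 cm forward

eye_before = neck_pivot + eye_offset
eye_after  = neck_pivot + yaw(45) @ eye_offset

print("eye translation:", np.linalg.norm(eye_after - eye_before), "m")
# Prints roughly 0.077 m: real translation produced by a "pure" rotation
# of the head, i.e. parallax the fixed camera never recorded.
```

Turn your head 45º and, under these assumptions, your eyes sweep through about eight centimetres of space. The fixed 360º camera captured no imagery for those eight centimetres, so the video simply cannot respond.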

When the imagery you see reacts to some but not all of your head’s movement, the result is nausea. The term for it is “visual-vestibular mismatch.”

Therefore, until multi-camera video photogrammetry gets really good, or until light-field capture à la Lytro gets real…pre-recorded video doesn’t work in VR. Period. We can all stop trying now.

But wait.

There’s another reason why video is wrong for VR: it’s not interactive. It can’t be — it’s pre-recorded. This is a dealbreaker (see “Movies in VR: why they’ll never work, and what will instead”).

Furthermore, once full-body motion capture goes mainstream, showing an avatar for the user’s body will basically be a requirement. The VR experience is always a little janky when you are a floating head in space, or when you look down and see a body doing things that don’t match what you’re actually doing. In the future, not giving the user a physically faithful avatar will be a deliberate artistic liberty, rather than the default it is now.

So what does that leave us?

Real-time rendering solves all these problems. The “camera” is free to travel anywhere and exactly match the viewer’s head movement. Nausea no more! Further, the VR world can be “physically” interactive with the user’s avatar, and more generally interactive with user input or run-time simulation output. Other benefits to immersion and comfort include one’s own (avatar’s) shadow cast into the virtual environment, as well as dynamic focus/depth-of-field based on ocular convergence (once we have eyeball tracking in headsets).
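As a rough sketch (mine, not lifted from any particular engine; `head_rotation` and `head_position` stand in for whatever the HMD’s tracking reports each frame), the difference comes down to how the view matrix consumes the head pose:

```python
import numpy as np

def video_view(head_rotation, head_position):
    """View matrix for spherical video: only the rotation is honoured.

    The translation is silently discarded, because the footage has exactly
    one fixed viewpoint; there is nothing to show for a sideways step.
    """
    view = np.eye(4)
    view[:3, :3] = head_rotation.T
    return view

def realtime_view(head_rotation, head_position):
    """View matrix for real-time rendering: the full 6-DoF pose is used.

    Lateral and vertical head movement now produce correct parallax,
    and the scene can also react to the user at runtime.
    """
    view = np.eye(4)
    view[:3, :3] = head_rotation.T
    view[:3, 3] = -head_rotation.T @ head_position
    return view
```

The only difference is that final translation column, and that is exactly the piece a fixed camera can never supply.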

Good News.

Real-time rendering is getting more powerful each passing year — both the engines themselves and the GPUs that run them. Expect to see real-time photorealism within the decade. Once we get there, we won’t be limited to video-gamey graphics or Pixar-style animations. We can performance-capture real actors, map the performances onto high-resolution body scans, pipe them through an AI “bot” engine, and voilà: you too can be marooned on an island with an unshaven Tom Hanks, rendered with all the visual fidelity and expressiveness of Gollum in The Lord of the Rings or the Na’vi in Avatar.

Think about it.

And while you’re thinking about it, please put away the GoPros. 
It’s getting ridiculous.


