A VR-Enabled (Near) Future
Virtual reality is at an interesting spot. There are tons of promises, but not much has been delivered yet. Ever since I bought the Oculus DK2 in 2014, I have yet to see a truly compelling full VR experience beyond a few demos. While Facebook’s Oculus Rift, HTC Vive, Microsoft’s HoloLens, and Google’s Cardboard / Daydream are battling it out from a hardware / content-platform perspective, today’s VR experiences are still largely underwhelming for the everyday consumer. We know the big platforms are targeting gamers, we know hardware will improve and untethered HMDs will get good, and we know that VR and AR are just two different starting points that will likely merge at some point in the future. But what else do we need to get to the next step?
Haptic Hardware Innovation
Anyone who has tried the new Oculus or Vive will tell you that having hands (through controllers) in VR makes a huge difference. Gaming aside, having hands enables you to become much more proactive in VR. An interesting hack here is to make the VR version of a piece of hardware look and function exactly like its physical counterpart, overcoming the visual disconnect completely. We are only a few years away from being able to render our surroundings completely in VR, and even from augmenting them to create an enhanced version of our surroundings. However, v1 here will not be smooth. Haptic feedback and distributed sensors can help a great deal to bridge the gap. We need gloves, shoes (socks), voice interfaces, cheap semi-generic objects for picking up, etc.
360 Video

360 video is a pretty near-term opportunity for VR, both on the camera side and on the video processing / production side. The demand is large, since the vast majority of casual VR consumers will be on-boarded through 360 video. Storytellers like NYTVR are putting out some great content, and Facebook’s Surround 360 has the potential to make a big impact. The logical next step is to start introducing “virtual elements” into real-world videos; in fact, this is where we may see the first signs of VR and AR merging. Google has shown some amazing progress here with Project Tango. Capture technologies like light-field capture may also play an interesting role, although we are still quite far from 360-degree light-field cameras.
Rendering / Motion Sickness
3D rendering is hard and resource-hungry. While it’s built on fundamental computer graphics concepts, there are also TONS of case-specific hacks (hair, rain, snow, etc.). The sense of “presence” comes easily in grand, overwhelming experiences, but it is a lot harder to sustain in intricate, detailed ones. That’s one of the reasons the Whale demo works so well while the surgery demo feels more like a joke. Even with the dramatic decrease in GPU cost, we are still pretty far away from real-time, photo-realistic 3D rendering. Therefore, making the content “interesting” may be a better way to go than making it “realistic” in the short term. Until significant progress is made here, “good enough” VR experiences will have constraints (no lateral movement, objects have to be at least 3m away, etc.).
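To put the resource hunger in numbers, here is a back-of-the-envelope frame-budget calculation. The 90 Hz refresh rate and 1080×1200 per-eye panel resolution are the published specs of the consumer Rift and Vive; everything else is simple arithmetic:

```python
# Rough VR frame-budget math: at 90 Hz, every frame must be rendered in
# about 11 ms, and both eyes need their own view, so the effective pixel
# throughput requirement is enormous.
refresh_hz = 90                  # consumer Rift / Vive refresh rate
frame_budget_ms = 1000 / refresh_hz
eye_resolution = 1080 * 1200     # per-eye panel resolution on Rift / Vive
pixels_per_second = eye_resolution * 2 * refresh_hz

print(f"Frame budget:  {frame_budget_ms:.1f} ms")
print(f"Pixel rate:    {pixels_per_second / 1e6:.0f} M pixels/s")
```

About 11 ms per frame and roughly 233 million pixels per second, before any supersampling — which is why photo-realism at VR frame rates remains out of reach even on high-end GPUs.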
Virtual Worlds

It’s unlikely there will be a single virtual world like the Oasis from Ready Player One. If anything, the “virtual world” will be more like an OS that routes users to many different worlds, where a user’s identity can change within each world. The present hurdle for virtual worlds is still content creation. It’s hard to create worlds: it requires large teams of people with different expertise and lots of time. It’s also hard to predict the consequences once the worlds are “released”. Simulation-based technologies can be very interesting here. Instead of only allowing the “experts” to create worlds, what if anyone could create a world, set up some initial conditions and artificial agents, and watch how the world evolves in accelerated time? Only game makers will use Unity/Unreal, but examples like Minecraft prove that the creativity of the crowd is often greater. Creating tools that lower the barrier for crowd-sourced worlds sounds like a much faster way to create content.
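The “set up initial conditions, then let the world evolve” idea is old and well studied. A toy sketch of it — not a proposal for any particular engine — is Conway’s Game of Life: the author supplies nothing but a seed pattern and simple local rules, and structure emerges on its own over time:

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each nearby cell has.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbours, or 2 live neighbours and was already alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider" seed: five cells whose local rules produce a
# pattern that travels across the grid forever.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    world = step(world)
# After 4 generations the glider has moved one cell diagonally.
```

A crowd-sourced world-building tool would obviously be far richer than this, but the division of labor is the same: the creator specifies rules and a seed, and the simulation — run in accelerated time — generates the rest.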
Applications

To say you are building a VR company in 2016 is like saying you are building a mobile company in 2007. There are a few large platforms that are all likely to exist in the long term, and there are still many killer applications yet to be discovered. To make VR prevalent in consumers’ everyday lives, we are still missing the no-brainer use cases that email and maps were for smartphones. Enterprise use can also be a huge driver: aside from the obvious 3D design tasks, companies like Envelop VR are exploring VR experiences for office workers. We are not far from a world where the average household owns a few commodity VR headsets, and it’s up to the platforms and application developers to show people the way to a VR-enabled future.
VR is a dream machine. It has the ability to re-create the human experience, and the ability to make our wildest dreams a reality. It’s the closest thing we have to direct brain manipulation; in fact, with some hardware advancement it could BECOME the basic mechanism for direct brain manipulation. The cynical view points directly to a Matrix-style world where culture and society are ruined by VR. But with good development and good incentives, we can also envision a world where VR dramatically improves human creativity and quality of life. I am fully convinced that when VR is fully deployed, it will have a greater impact on us than the Internet, mobile, or any platform that came before it.