Innovating Immersion in VR
Virtual Reality has a long way to go in delivering on its potential as a game-changing computing interface. Undoubtedly, there are many examples of exciting projects coming from creatives all over the world (Neat Corp’s Budget Cuts and Google’s Tilt Brush are great examples). However, the toolsets, techniques and concepts content developers have at their disposal are still limited. As a result, the vast majority of VR content falls short in its immersive capabilities, and many early adopters still lack a “killer app” they can point to as a reason to purchase.
Immersion in VR is a product of the tools, techniques and concepts developers use to engage their audience. It is a core problem that countless organizations and individuals are working to solve, and its absence is a critical hindrance to mass consumer adoption of VR. Below, I’ve outlined three areas of immersive technology where early stage companies are focused on driving innovation, and have highlighted interesting companies within each:
Bringing a Sense of Touch: Haptic Feedback
Haptic feedback, often referred to simply as “haptics,” is the use of touch in content design as a means of communicating with an audience. Since the beginning of VR technology, haptics has been a frequent area of exploration in commercial applications. Morton Heilig, an early pioneer in VR, incorporated a haptic device in the form of a vibrating seat in the famous “Sensorama,” which was first prototyped in 1962. Heilig wanted to create the “cinema of the future,” but the Sensorama’s commercial success was limited by a range of prohibitive factors.
Research has consistently shown that touch is a crucial factor in creating a sense of immersion in VR experiences. A recent paper titled “The Wobbly Table” (a collaboration between researchers from Stanford’s Virtual Human Interaction Lab and the University of Central Florida) demonstrated that haptic feedback was a statistically significant factor in users’ ability to experience empathy and allocate attention in VR experiences. Similar results appear across studies from researchers such as Gallace & Spence, Cagatay Basdogan, and J.D. Fisher.
The findings are intuitive, but their implications are significant. If content developers want to create truly immersive experiences, haptics will need to become part of their toolbox. Below are three companies with distinct approaches to making this a reality:
- Dexta Robotics - Dexta’s device is a robotic exoskeleton that fits over a user’s hand. It tracks movement, and each finger has its own pressure sensor, allowing the device to apply varying resistance to the user’s hand and fingers depending on the object being interacted with.
- NullSpaceVR - NullSpace has built a body suit and gloves with 32 independently activated vibration pads on the chest, abdomen, shoulders, arms and hands. Developers can draw on a library of 117 effects, giving users a full upper-body haptic feedback system.
- Emerge - Emerge uses focused ultrasonic waves to create a sense of touch that can be felt in mid-air within a “cone of interaction” above the device. Its algorithms can simulate the contour of a digital object and model precise interactions.
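These companies differ in hardware, but they share a common software problem: translating a virtual contact event into a signal a device can render. As a rough illustration only (not based on any of these companies’ actual SDKs; the function and parameters are hypothetical), a haptic driver might map how deeply a virtual hand penetrates an object’s surface to a normalized vibration amplitude:

```python
def vibration_amplitude(penetration_depth_mm, stiffness=0.2, max_amplitude=1.0):
    """Map penetration depth of a virtual hand into an object's surface
    to a normalized vibration amplitude (0.0 = off, 1.0 = full strength).

    A hypothetical sketch: real haptic SDKs expose far richer effect
    libraries, but the core idea of scaling feedback to contact depth
    is common.
    """
    if penetration_depth_mm <= 0:
        return 0.0  # no contact, no feedback
    # Deeper contact -> stronger vibration, clamped to the device maximum
    return min(max_amplitude, stiffness * penetration_depth_mm)
```

A per-finger device like an exoskeleton glove would run a mapping like this once per finger per frame, while a vest-style suit would run it per pad.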
Adding an Extra Layer of Depth: Volumetric Capture
If you audited the content available across VR platforms today, computer-generated games would dominate the landscape. However, there’s a growing field many are calling “Cinematic VR”: content that leverages both live-action and CG assets. Typically, these experiences drop a user into the middle of a scene in a fixed position, allowing them to rotate their head but not move freely around.
These experiences restrict viewers from moving through 3D space because traditional capture methods (known as “spherical capture”) film objects from a single perspective. Volumetric capture is different in that it uses specialized equipment to capture objects from multiple perspectives and transforms the footage into a fully “volumetric” asset that can be placed within CG environments. The resulting experiences allow audiences to move freely within a computer-generated environment while fully interacting with a live-action visual asset.
While there are various approaches to creating this type of content, the process typically follows three stages. First, images are captured using an array of multiple cameras and stored as a set of cross-sectional images (we’ll call this the “volumetric dataset”). Quality can vary dramatically depending on factors such as the number of cameras, camera position, lighting, and degree of image overlap. Second, software transforms the volumetric dataset through a variety of proprietary methodologies, typically involving computing/caching surface geometry and voxelization (depth). Third, the dataset is converted into a format that can be used by a 3D engine (Unity, Unreal, etc.), where a developer can insert the asset into a CG environment.
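To make the second stage a little more concrete, here is a minimal sketch of one piece of it: voxelization, i.e. quantizing captured 3D points into discrete grid cells. Real pipelines are far more sophisticated (surface reconstruction, texturing, compression), and the function below is my own illustration rather than any company’s actual method:

```python
def voxelize(points, voxel_size):
    """Quantize a cloud of (x, y, z) points into a sparse occupancy grid.

    Each point is assigned to the grid cell containing it; the result maps
    integer cell coordinates to the number of points that fell inside.
    """
    grid = {}
    for x, y, z in points:
        cell = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[cell] = grid.get(cell, 0) + 1
    return grid

# Two nearby points collapse into one cell; a distant point gets its own
samples = [(0.1, 0.2, 0.3), (0.15, 0.22, 0.31), (1.2, 0.0, 0.0)]
grid = voxelize(samples, 0.5)  # {(0, 0, 0): 2, (2, 0, 0): 1}
```

The per-cell counts can later be thresholded to decide which voxels are “occupied,” which is one simple route from raw captures toward a surface a 3D engine can render.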
Early stage companies focused on this problem take a range of approaches. They frequently resemble production studios, given that the capture environment needs to be closely managed. However, the core technology, and thus the core value, comes from their approach to building software that manipulates the captured content; the end goal is to be camera-agnostic. Some examples of companies in this space include:
- 8i - 8i transforms HD video from multiple cameras into a fully volumetric recording of a human that viewers can move around in VR/AR. The company is focused on creating high-fidelity experiences, which results in slower turnaround times and higher production costs.
- DepthKit - DepthKit is focused on creating accessible volumetric video. The company currently sells a depth sensor that can be attached to common cinema cameras, and provides a suite of tools that allow anyone to capture, edit, and publish volumetric experiences.
- Uncorporeal - Uncorporeal is focused on creating high-fidelity volumetric content and on enabling all content creators to do the same. One of its co-founders was also involved in founding 8i.
Making Sound Immersive: Spatial Audio
In traditional media, audio is a subtle but essential tool for creating compelling experiences; in VR this is even more true. To create a fully immersive experience, images and sound need to match what a user would expect in real life. This includes a variety of features that typically go unnoticed, such as echoes, variances in volume, and reflections of sound waves. Furthermore, in a fully immersive environment, audiences have the latitude to stray from the narrative the creator intended. Audio cues can be used to subtly guide a viewer down a path in the absence of linear direction.
Audio is a tool that drives both realism and direction in VR.
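At its simplest, a spatial audio engine approximates two of the cues mentioned above: volume falls off with distance, and a sound’s horizontal offset shifts it between the listener’s ears. The toy sketch below illustrates the idea; production engines (HRTF rendering, reflections, occlusion) are vastly more elaborate, and every name and formula here is my own simplification:

```python
import math

def spatialize(source_pos, listener_pos, base_gain=1.0):
    """Very rough mono-to-stereo spatialization: attenuate gain with
    inverse distance and pan left/right by the source's x-offset.

    Positions are (x, y, z) with +x to the listener's right.
    Returns (left_gain, right_gain).
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)

    gain = base_gain / max(dist, 1.0)                 # inverse-distance falloff
    pan = max(-1.0, min(1.0, dx / max(dist, 1e-9)))   # -1 = hard left, +1 = hard right

    left = gain * (1.0 - pan) / 2.0
    right = gain * (1.0 + pan) / 2.0
    return left, right

# A source directly ahead is heard equally in both ears;
# a source off to the right is heard only on the right.
centered = spatialize((0.0, 0.0, 2.0), (0.0, 0.0, 0.0))
offset = spatialize((2.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

Even this crude model shows why spatial audio doubles as a directing tool: moving a sound source through the scene smoothly shifts where the audience hears it, and thus where they look.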
Big VR players have anticipated this need and shown their desire to advance this field. There are early stage companies emerging to tackle this issue, but most of the activity has come from the larger players. Below are some recent highlights:
- Facebook recently acquired a small studio, Two Big Ears, and now offers their “spatial workstation” for free
- Google released a new version of their Cardboard development kit for Android and Unity in order to support Spatial Audio
- Nvidia is also getting in on the party: it recently released a VR audio SDK that uses an innovative geometric acoustic rendering engine
VR is still an emerging field, and major innovations still need to be made before it becomes a mass-market consumer product. Killer content will ultimately be the driving factor for most consumers adopting this technology. Significant progress has been made in just the past year across all aspects of the VR landscape, and major strides in immersive techniques are being made daily. It is an area I am watching closely, and I suspect many other entrepreneurs and investors are doing the same.
Thoughts, suggestions or comments in relation to this article are welcome — feel free to reach me @kangpandabear here or on Twitter.