Going Green: Mixed Reality Capture at IrisVR
This post was originally published at blog.irisvr.com on Sept. 5, 2017
Why Mixed Reality Capture?
Virtual reality can feel impossible to explain to someone who has never experienced it. But once anyone tries VR for themselves, the excitement is undeniable. One of our challenges at IrisVR is communicating what VR feels like to people before they even put a headset on. This is where mixed reality capture comes in.
Mixed reality capture merges live footage with the virtual world to better convey the in-VR experience. With no small amount of equipment (green screens, lots of wires, cameras, computers, lights, extra coffee, and an Oculus Rift), we are now able to illustrate what it’s like for someone to step into VR.
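At its core, the "merge" is a chroma key: pixels in the live footage that match the green screen are replaced with the corresponding pixels from the virtual render. A minimal sketch in Python/NumPy illustrates the idea (the function name, key color, and threshold are placeholder values for illustration, not our actual pipeline):

```python
import numpy as np

def chroma_key_composite(live, virtual, key=(0, 255, 0), threshold=100.0):
    """Replace green-screen pixels in `live` with pixels from `virtual`.

    live, virtual: (H, W, 3) uint8 RGB frames of the same size.
    key: approximate green-screen color.
    threshold: per-pixel Euclidean RGB distance below which a pixel
    counts as "screen" and gets replaced.
    """
    diff = live.astype(np.float32) - np.array(key, dtype=np.float32)
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel color distance to the key
    mask = dist < threshold                    # True wherever the green screen shows
    out = live.copy()
    out[mask] = virtual[mask]                  # paste in the virtual render there
    return out
```

Real tooling does considerably more (soft edges, spill suppression, lighting-aware keying), but this is the basic operation behind the footage.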
This is extremely powerful for talking about virtual reality in relation to the Architecture, Engineering and Construction industries — where the scale, structure, and details of a space are so closely linked to the human experience. VR is uniquely qualified to communicate those elements in person; mixed reality capture helps us show that.
How Did We Do It?
Setting up mixed reality capture is not a simple process, as it requires a fair amount of technical knowledge and VR hardware. We followed Oculus’ useful guide to set up mixed reality capture within Unity, the game engine used in Prospect.
As a first step, accurate camera calibration is required to align the real-world camera with the virtual camera’s position, direction, and focal length. Oculus’ support for mixed reality includes a camera calibration tool that helps with this crucial configuration step. For the sake of simplicity we used a static camera, which gives the virtual camera a single fixed view. In the future we hope to set up a dynamic camera that will allow us to move the camera around the scene and capture even more exciting footage of what’s possible in Prospect.
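The calibration tool handles this for you, but the focal-length half of the problem is simple to illustrate: under a pinhole camera model, a physical lens’s focal length and sensor size imply a field of view, and the virtual camera’s FOV must be set to match it or the two images won’t line up. A sketch of that conversion (the function name and parameters are our own illustration, not part of the Oculus tool’s API):

```python
import math

def vertical_fov_degrees(focal_length_mm, sensor_height_mm):
    """Vertical field of view implied by a lens, via the pinhole camera model.

    The virtual camera's vertical FOV should be set to this value so that
    its frustum matches what the real camera sees.
    """
    return math.degrees(2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm)))
```

For example, a 50 mm lens on a full-frame sensor (24 mm tall) works out to roughly a 27° vertical FOV. Matching position and direction is the harder part, which is exactly what the calibration step solves.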
Configuring the mixed reality capture environment required careful placement of three Oculus sensors in order to get the most accurate calibration of the external camera. The room-scale VR setup with the Oculus Rift also allowed us to capture the user’s whole body within the space.
With a healthy amount of wire untangling, choreography, and green screen lighting, we were ready to go. Improved support from Unity and Oculus, along with refinements to our own workflow, will help streamline this process.
What Did We Learn?
The model matters. The best models for recording are not too cramped and not too big. A model needs to be spacious enough that nothing blocks the camera and that the space behind the user has depth and visual complexity.
Move slowly and deliberately. The tracking felt smooth in VR, but aligning the video footage and the virtual footage is a tricky task. The slower and smoother the movement, the easier it is to align and the more pleasant it is to look at.
Front, middle, back. Virtual reality is so powerful for AEC because of depth and scale. The best way to showcase that power is to have something in front of the user and a large space behind them. This gave us a foreground, middle ground, and deep background to draw the eye into the screen.
Keep it simple. The most magical moments came from straightforward actions. Rotating the model, walking around, pointing, and cutting a section created the strongest moments.
Communicating new and exciting features remains one of our highest priorities. The mixed reality capture process helps us get closer to highlighting the unforgettable feeling of being in VR. We’re looking forward to showcasing new updates, tools, and models!
Let us know if you have any projects you’d like to see featured in mixed reality or if you have questions about the process. We’re always happy to help!