Production

NYC Media Lab
Exploring Future Reality
Dec 15, 2015 · 7 min read

There are three approaches to creating content for a virtual reality world: you can animate a 3-D environment, record video, or take a hybrid approach by creating a 3-D environment with photos and video incorporated.

For each stage of the virtual reality production process there are different companies working on the devices and software. While not comprehensive, the list below is an attempt to highlight the different types of cameras and devices gaining traction within each segment.

Cameras

Cameras that record in 360 degrees can capture either time (360 video) or space (a 3-D environment). In both cases, the camera is placed on a tripod. Camera arrays usually have six or more cameras pointing horizontally and one pointing up. Audio recorders are also typically incorporated into the camera rig.

GoPro and 360Heros
GoPro cameras are frequently used in camera arrays for capturing 360 video. The GoPro's wide-angle lens provides the image overlap needed to stitch the video together in post-production, and the camera's relatively low price helps keep costs down when six or more cameras are required for a full array.

The GoPro Odyssey.

There is no standard way to use GoPros to create 360 video content. Producers often use 3-D printed holders to arrange GoPro cameras in a ring. Some companies are beginning to sell pre-made camera arrays.

GoPro recently announced its own array of cameras, called Odyssey, that will work with Google’s forthcoming Jump end-to-end video processing system. The Odyssey consists of 16 cameras in an array designed to produce stereoscopic video and ease the syncing process. The Odyssey is currently only available through an application process.

Matterport
The Matterport camera captures a space and turns it into a virtual reality environment. The result is a 3-D recreation of a room that a viewer can explore. The Matterport camera has been adopted by architectural firms that want to capture spaces for their clients, but it has also been used by media companies such as The Associated Press and the Detroit Free Press.

The Matterport Pro 3D Camera.

The Matterport camera captures 360 photos from a variety of locations throughout a space. Those 360 images are then stitched together into one large 3-D image file. Capture is controlled from an iPad, which also performs initial processing of the 360 images. The .obj 3-D files and textures can be downloaded from Matterport for use in other environments such as Unity.
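Those exports are standard 3-D assets, so they are not locked to one tool. As a rough sketch of how a downloaded scan might be reused, the snippet below loads a hypothetical .obj export and its material file into a browser scene with three.js; the file names are placeholders, and the report's examples import into Unity rather than the browser.

```typescript
// Rough sketch: load a hypothetical Matterport export (.obj plus .mtl) into a three.js
// scene. File names are placeholders; the report describes importing these files into
// Unity, but the same assets can be used in a browser-based viewer as well.
import * as THREE from 'three';
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader.js';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

const scene = new THREE.Scene();

new MTLLoader().load('room-scan.mtl', (materials) => {
  materials.preload(); // resolve texture references before the geometry loads
  new OBJLoader()
    .setMaterials(materials)
    .load('room-scan.obj', (room) => {
      scene.add(room); // the reconstructed room is now an explorable 3-D object
    });
});
```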

Jaunt
Jaunt has worked with media companies such as Elle, ABC, and The North Face. In June 2015, Jaunt released the NEO camera system, which captures depth information along with image information. With depth information, a viewer can not only change the viewing angle but also move forward, backward, and side to side within the virtual space. The NEO system is aimed at creating high-quality, cinematic VR footage.

The Jaunt NEO camera system.

Limits on capturing 3-D environments
Most commercial 360 cameras cannot capture depth while they capture video. This significantly limits the kind of experience that can be created with recorded media compared to media that is rendered graphically.

Recorded content is limited to 360-degree video from a single perspective, or to static images that users can explore. Given these limits, some have questioned how long a user can stay engaged with 360 video.

Syncing and Stitching

A 360 camera is not one lens and one sensor, but instead an array of cameras working together to record the scene from multiple points of view. Once the different angles from each camera are recorded, they must be brought together into one image in post-production through a process called syncing and stitching.

First, the video from each camera in the array must be put in sync with the others, often using a sound, flash, or light on set as a cue. Once the video from each camera is playing back at the same time, the images can be processed to find the overlap and combined into one complete shot. This can be done on a server or on a local computer.
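The syncing step can be as simple as lining up a shared audio cue. The sketch below is a simplified illustration of that idea in TypeScript, not any vendor's actual implementation: it slides one camera's audio track against another's and keeps the offset where the clap lines up.

```typescript
// Simplified illustration of audio-based syncing: find the offset (in samples) at which
// two cameras' audio tracks best line up, e.g. around a clap recorded on set.
function findOffsetSamples(refAudio: number[], otherAudio: number[], maxLag: number): number {
  let bestLag = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let score = 0;
    for (let i = 0; i < refAudio.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < otherAudio.length) score += refAudio[i] * otherAudio[j];
    }
    if (score > bestScore) {
      bestScore = score;
      bestLag = lag;
    }
  }
  return bestLag; // positive means the cue appears later in the other track
}

// Toy tracks: the same clap appears three samples later in camera B than in camera A.
const camA = [0, 0, 1, 0.5, 0, 0, 0, 0, 0, 0];
const camB = [0, 0, 0, 0, 0, 1, 0.5, 0, 0, 0];
const offsetSamples = findOffsetSamples(camA, camB, 5); // 3

// At a real sample rate and frame rate, the sample offset becomes a frame offset the
// editor can use to trim each clip before stitching.
const offsetFrames = Math.round((offsetSamples / 48000) * 30);
console.log(offsetSamples, offsetFrames);
```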

Kolor and VideoStitch are the two most widely used stitching tools. Photoshop is also used to touch up 3-D stitching. Many producers say that stitching software still needs improvement.

The Google Jump workflow allows for processing in the cloud with the Jump Assembler. The details have not been specified pending the camera’s release in late 2015 or early 2016.

Matterport offers an end-to-end capture and processing pipeline and charges for hosting the finished 3-D environment. As of August 2015, plans range from $49 to $149 per month, depending on the number of models hosted and how many users are granted access to the back end of the content. Matterport offers discounts for yearly subscriptions.

These services present a few trade-offs. Because the processing is matched to the camera, it can be tuned to the specific quirks of that camera array. On the other hand, a smaller team may be working on the software, and it may not be updated as frequently as stitching software that is open source or supports a wider range of cameras. Developers may also have less incentive to make the processed 3-D files compatible with other editing software, although at this point content from Matterport can be downloaded for use in a Unity environment.

Rendered

The other way to create content is through animation. Animated 3-D environments can be a cost-effective way to create immersive virtual reality experiences in which the user moves through a space as events unfold around them.

Gannett used Unity to create “Harvest of Change” with The Des Moines Register, in which users learn about the changing economics of farming by exploring and engaging with a century-old farm. The National Press Foundation recognized the production team as the first-ever recipients of the ‘Best Use of Technology in Journalism’ award.

Visual effects artists or 3-D artists may be enlisted to finish producing this type of experience to make it look realistic. However, up until those finishing touches, most of the experience can be developed and prototyped using low-fidelity graphics that don’t require a 3-D artist.

Game Engines — Unity and Unreal

Unity and Unreal are two game development platforms that are also used to create virtual reality environments. Working with Unity can be more complicated for web-focused teams because it is not built on web development languages. Some companies are adapting to this challenge, such as Razorfish, which has been successfully training front-end developers to work in Unity.

Unreal is regarded as having the highest quality graphics and lighting effects, while Unity content runs more smoothly on less powerful hardware.

As the audience for virtual reality grows, so will the pool of developers creating virtual reality content. The more developers working in these environments, the more improvements and new libraries will be added to Unity and Unreal. The tools will become more efficient, easier to use, and richer in features, making content cheaper and easier to develop; in turn, producers and developers won’t need specialized skills to participate.

Digital news outlet Fusion partnered with graphic journalist and virtual reality producer Dan Archer of Empathetic Media to create Ferguson Firsthand, a rendered experience based on the shooting of Michael Brown in Ferguson, MO. Archer recreated the street where Darren Wilson shot Michael Brown in 3-D, allowing the audience to explore the event through eight different eyewitness perspectives.

Viewers can explore the scene of Ferguson Firsthand by navigating with keystrokes and cursor movement. Colored beacons indicate the locations of eyewitness perspectives.

Graphics Libraries — WebGL and three.js

WebGL is a graphics API that allows 3-D graphics to be rendered in web browsers without extra plug-ins. No applications. No downloads. A user simply goes to a web page and the 3-D environment loads, ready to explore. Projects made in Unity 5 can be exported to run in WebGL.
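For a sense of how little is involved, the sketch below sets up a minimal explorable scene with three.js (which renders through WebGL): a page, a camera, and one spinning object. It is a generic starter scene, not code from any of the projects described here.

```typescript
// Minimal three.js scene rendered through WebGL: no plug-ins, no downloads, just a page.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(cube);

function animate() {
  requestAnimationFrame(animate);
  cube.rotation.y += 0.01; // a slow spin so the viewer can tell the scene is live 3-D
  renderer.render(scene, camera);
}
animate();
```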


With the three.js JavaScript library, developers can create browser-friendly immersive 3-D environments.

Roger Kenny, Senior Engineer at Dow Jones, used this library in conjunction with the D3.js data visualization library to create an interactive 3-D rendering of the 2001 NASDAQ stock bubble for The Wall Street Journal.

Kenny was experimenting with the three.js JavaScript library and D3 when he realized that he could map a dataset in three dimensions without much difficulty. After discovering the workflow, Kenny produced a 3-D chart of the Dow Jones average and refined the color scheme. Although finishing the textures for the environment and finalizing the design took a few weeks, the project was much less time- and resource-intensive than other types of computer graphics or 360 video. “We’re trying to make it as compelling as possible with the lowest-common-denominator hardware,” said Kenny.
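The core of that workflow is simply mapping data values to positions in a 3-D scene. The sketch below shows the idea with made-up index values and hand-rolled linear scales standing in for D3.js; it is an illustration of the approach, not the Journal's code.

```typescript
// Illustration of mapping a time series into 3-D space as a line the camera can ride
// along. The (year, value) pairs are made up; the real piece used D3.js scales on
// actual Nasdaq data.
import * as THREE from 'three';

const series: Array<[number, number]> = [
  [1994, 750], [1998, 1800], [2000, 4700], [2002, 1300], [2007, 2700], [2015, 5000],
];

const years = series.map(([year]) => year);
const values = series.map(([, value]) => value);

// Simple linear scales (the role D3.js plays in the real project): data units to scene units.
const scaleX = (year: number) =>
  ((year - Math.min(...years)) / (Math.max(...years) - Math.min(...years))) * 20;
const scaleY = (value: number) => (value / Math.max(...values)) * 5;

// One point per data value; the resulting line can be added to a scene and rendered
// exactly as in the earlier minimal-scene sketch.
const points = series.map(([year, value]) => new THREE.Vector3(scaleX(year), scaleY(value), 0));
const curve = new THREE.Line(
  new THREE.BufferGeometry().setFromPoints(points),
  new THREE.LineBasicMaterial({ color: 0x00aaff })
);
console.log(curve.geometry.attributes.position.count); // 6 points along the curve
```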

Experience a virtual reality guided tour of 21 years of the Nasdaq here.

The finished piece was deployed for Google Cardboard as well as mobile and desktop browsers, and was met with an enthusiastic audience. “People all over the world were tweeting videos of themselves riding the Nasdaq curve.”

Exploring Future Reality is a report brought to you by NYC Media Lab. Download a PDF of the full report here.
Continue to part six of this ten part series.
Return to the Exploring Future Reality report homepage.
