Virtual Production is Changing the Game for Creators

RLab
7 min read · Jul 30, 2020


RLab Technical Director Todd Bryant on why game engines are the future of content creation & remote collaboration

RLab Technical Director Todd Bryant is an award-winning creative technologist whose work includes motion capture, projection mapping and spatial media experiences for projects like Childish Gambino’s Pharos II, Neurospeculative Afrofeminism, Giant, and many others.

As an educator, Todd has developed comprehensive technical mixed reality and motion capture curricula for NYU and RLab. His next course at RLab, a 2-week intensive on Virtual Production, will run online from August 15–18.

RLab interviewed Todd about why demand for virtual production skills is growing across industries, and what content creators need to know about these techniques and the underlying technologies.

How would you describe the evolution of virtual production, and where we’re at right now?

Modern virtual production has been around for 15 years, particularly for previsualization within the movie studios. When James Cameron was working on Avatar and Spielberg was working on The Adventures of Tintin, they wanted to see what their motion capture avatars looked like in real time instead of taking stabs in the dark. So they invented virtual cameras as these little video taps into their virtual worlds, which gave them the ability to previs, in a very minimal environment, what the performance would look like. This data would be applied to the full-resolution characters only after a lengthy process of data cleaning. It took out a lot of guesswork and helped them iterate before they got to post-production. And these tools gave them more control, while also sparking these a-ha moments where they could improvise.

So virtual production really started off as a monitoring tool. What’s happened since is a proliferation of amazing graphics cards, CPUs, and GPUs — all the different computing components have improved. And the software and game engine technology has become so widespread that there’s now a convergence of technologies that has brought us to a place where it isn’t just about previs anymore. You can actually jump across the uncanny valley and see what’s called “final pixel,” at least where environments and special effects are concerned.

Some of the best known examples of virtual production in action are The Lion King and The Mandalorian. What techniques were used in these big budget productions?

There are many different flavors of virtual production, and many reasons why The Lion King and The Mandalorian stand out. With The Lion King, they used virtual reality for virtual production previs. This was the first time there were enough tools in place within the software — Unity in this case — to harness a virtual production tool set that mirrored an actual physical production set. They were able to set up virtual dollies and cranes and drone shots, and all those things you might do in the real world, inside the software. They also leveraged the networking capacity of the game engines as a real-time collaborative multi-user tool. There are pictures of cinematographer Caleb Deschanel, director Jon Favreau, and visual effects supervisor Rob Legato sitting around in Vive headsets and pointing their VR controllers at each other. What’s remarkable is that they’re using something that’s considered consumer technology to previs a Hollywood blockbuster.

The Lion King creators used consumer technology including the Unity game engine and HTC Vive headsets to previs the film in virtual reality. (Image credit: MPC, Technicolor.)

This created a renewed interest in enterprise use cases for virtual reality in media and entertainment. Not only can you have amazing experiences in virtual reality, you can use it to create content — to understand scale and depth and move around in a spatial environment in real-time. As Dan O’Sullivan at the Interactive Telecommunications Program at NYU Tisch says, “to create VR in VR is the height of VR.”

What makes The Mandalorian noteworthy, beyond the technology of the LED wall, is that it achieved final pixel through a game engine rendering in real time. Hollywood has been using in-camera techniques for decades — for instance, rear-projecting pre-recorded driving footage behind an actor in a stationary car to make it seem like they’re at a remote location even though they’re in a studio, like Cary Grant driving in a Hitchcock movie. But The Mandalorian gave the production team the flexibility to work in a real-time game engine with an LED wall that both immersed the actor, enhancing their performance, and gave realistic lighting to the foreground elements. In that case, virtual production was not just a previs tool, but the final product. It showed us that virtual production can create the final product that goes to color correction, that gets broadcast out live.
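To make the camera-tracking idea concrete, here is a minimal sketch of the off-axis projection an LED volume relies on: the tracked camera position and the wall’s corner positions determine the rendering frustum, so the background on the wall lines up with the foreground from the camera’s point of view. This is not The Mandalorian’s actual pipeline (which runs on Unreal Engine tooling such as nDisplay); it follows the generalized perspective projection described by Robert Kooima, and the wall corners and camera position below are made-up example values.

```python
import numpy as np

def led_wall_projection(pa, pb, pc, pe, near=0.1, far=1000.0):
    """Off-axis perspective projection for a flat LED wall.

    pa, pb, pc: lower-left, lower-right, upper-left wall corners (world space, metres)
    pe:         tracked camera position (world space, metres)
    Returns a 4x4 projection @ view matrix (column-vector convention).
    """
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))

    # Orthonormal basis of the wall: right, up, and the normal facing the camera.
    vr = pb - pa; vr /= np.linalg.norm(vr)
    vu = pc - pa; vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)

    # Vectors from the camera to the wall corners, and camera-to-wall distance.
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)

    # Frustum extents projected onto the near plane.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard asymmetric frustum matrix.
    P = np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

    # Rotate the wall into the XY plane, then move the camera to the origin.
    R = np.identity(4)
    R[0, :3], R[1, :3], R[2, :3] = vr, vu, vn
    T = np.identity(4)
    T[:3, 3] = -pe
    return P @ R @ T

# Example: a 6 m x 3 m wall in the XY plane, camera tracked about 4 m in front of it.
M = led_wall_projection(pa=(-3, 0, 0), pb=(3, 0, 0), pc=(-3, 3, 0), pe=(0.5, 1.7, 4.0))
print(M.shape)  # (4, 4) -- recomputed every frame as the tracked camera moves
```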

What can creators working with smaller budgets learn from these big-budget production techniques?

What happened with The Lion King was that you had a rush to produce this virtual production tool set for making narrative content, which was then released in the Unreal Engine for everyone to use. Now we can all work from home in VR headsets setting up dolly shots. Everyone can own that previs process, and everyone can collaborate and iterate using readily available consumer technology.

Previs is inherently iterative, but what if actual production and previs are now part of the same iterative process? The game engines became such a useful tool because they also give you the final render in real time. Virtual production creates an agile working environment where you can spin a dial to change the scene to morning or move 5,000 meters. You don’t have to reset the scene, go to a new location, or wait for a certain time of day — all of that is now at your fingertips.
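As a toy illustration of what “spin a dial and change the scene to morning” boils down to, the dial can simply drive the rotation of the scene’s key light. This is a made-up mapping for the sketch, not any particular engine’s sun model:

```python
import math

def sun_rotation(hour, sunrise=6.0, sunset=18.0):
    """Toy mapping from a time-of-day dial to a directional light's rotation.

    Returns (elevation_deg, azimuth_deg): elevation peaks at solar noon, and the
    azimuth sweeps from east (90 deg) at sunrise to west (270 deg) at sunset.
    A real scene would drive the engine's sun/sky system instead.
    """
    t = (hour - sunrise) / (sunset - sunrise)  # 0 at sunrise, 1 at sunset
    t = min(max(t, 0.0), 1.0)
    elevation = math.sin(t * math.pi) * 60.0   # up to 60 degrees above the horizon
    azimuth = 90.0 + t * 180.0
    return elevation, azimuth

print(sun_rotation(7.5))   # early morning: low sun in the east, long shadows
print(sun_rotation(12.0))  # noon: sun at its highest
```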

What are some of the different use cases you’ve seen for virtual production outside of filmmaking?

People have been drawn to virtual production as a solution to socially distanced content production during COVID-19, whether it’s in television, film, advertising, or other digital content. Virtual productions can be like multiplayer games, designed to be played over a distance, where everyone can be in their own space. If you want to do a live stream event, you can have a skeleton crew in a studio and everyone else can be remote. I worked on a livestream for the Snap Partner Summit back in June, where the speakers from Snapchat were captured in front of a green screen with a minimal camera crew inside the studio, so everyone felt safe. I was able to monitor and control the Unreal environment remotely from my laptop in another state, on the other side of the country.

The Snap Partner Summit on June 11 involved a real-time composited previsualization shoot that allowed speakers to see themselves inside a virtual environment while presenting. (Image credit: Snap)
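Remote operation like the monitoring described above can be wired up in several ways. As one hedged sketch (not the actual Snap or Unreal setup), an operator’s laptop could send OSC messages to a listener running alongside the engine. It assumes the python-osc package and an OSC receiver on the engine side mapped to these message addresses; the IP, port, and addresses are placeholders.

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Placeholder address/port for the machine running the engine; assumes an
# OSC listener there is mapped to these (made-up) message addresses.
client = SimpleUDPClient("203.0.113.50", 8000)

client.send_message("/vp/sky/time_of_day", 7.5)        # dial the scene to early morning
client.send_message("/vp/cameraA/focal_length", 35.0)  # adjust the virtual lens
client.send_message("/vp/take/record", 1)              # roll a take from across the country
```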

Game engines have also been moving into enterprise use cases — spaces where you traditionally wouldn’t find game developers, but where real-time software just seems to make a lot of sense. For instance, because of their ray-traced lighting and visual fidelity, game engines are moving quickly into architecture and design, where you can put on a VR headset, walk into your future space, and change the colors of surfaces and move furniture around in real time. We’re also seeing a lot of use cases that involve people designing things together in any kind of manufacturing or engineering environment — particularly in automotive.

How will virtual production play into the different roles in the production process, and who needs to learn these new techniques?

For larger production teams, the facilitator of the engine — the creative technologist — is increasingly going to become a central part of the core broadcast or filmmaking team. This role is not going to replace the cinematographer, producer, or director, but it will help with onboarding into the game engines. As long as you have somebody who can guide you and fix a piece of software if it breaks, then everyone can be on their merry way using their specific tools. And every tool that you’re used to using in the physical world has a counterpart inside the virtual space — a camera is still a camera; it still has an aperture and a field of view in the virtual space. Once your physical tools are connected to their virtual counterparts, the twist of a camera knob will move the software slider for you.
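As a small hedged sketch of that knob-to-slider idea, mapping a physical control onto its virtual counterpart is essentially a normalization and remap. The raw values and parameter ranges here are illustrative, not any specific camera or control protocol:

```python
def knob_to_param(raw, raw_min, raw_max, param_min, param_max):
    """Map a raw hardware control reading onto a virtual camera parameter range."""
    t = (raw - raw_min) / (raw_max - raw_min)
    t = min(max(t, 0.0), 1.0)                     # clamp in case the encoder overshoots
    return param_min + t * (param_max - param_min)

# A 10-bit zoom rocker driving a virtual 24-70 mm lens:
focal_length_mm = knob_to_param(512, 0, 1023, 24.0, 70.0)

# The same idea for an iris ring driving the virtual aperture (f-stop):
f_stop = knob_to_param(300, 0, 1023, 1.4, 22.0)

print(round(focal_length_mm, 1), round(f_stop, 2))
```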

For someone like a DIY independent filmmaker, learning an entire engine can be quite daunting, but the engines are building their own frameworks on top of their tool sets to make them easier to use. I was able to teach a class on filmmaking in Unreal last spring without using any code whatsoever. It helped that Unreal has tools built into the system that mirror software interfaces filmmakers are used to, like Avid or Premiere.

What’s the bare minimum studio equipment setup that you would need to support a virtual production pipeline?

If you’re an independent filmmaker, all you need is a computer that can run a game engine, and that’s it. Learn the software and set your assets in motion.

For larger scale productions, you first have to pick your flavor of virtual production, because the tool sets diverge from there. You can go completely digital, with a motion capture avatar or a volumetric performance in a digital space; you can put a real actor into a digital space using a green screen or an LED wall; or you can composite digital assets into a real space.

Are there any new tools you’re especially excited to use?

One exciting thing about the Virtual Production online intensive I’m teaching at RLab is that we will be working with RADiCAL MOTiON, a New York-based company that provides motion capture directly from your webcam.

RADiCAL Motion uses AI to detect and reconstruct human motion in 3D — eliminating the need for special suits, sensors, cameras or other hardware. All you need is a camera. (Video credit: RADiCAL Motion)

Their current Gen3 AI extracts motion capture in real time. We’re working with RADiCAL right now to develop a streaming plugin for Unreal that will be unveiled in August, just in time for this Virtual Production class. This will allow us to stream each other’s performances directly to all the participants in a remote multi-user session, to record to the timeline in the engine or to use as a live performance.
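As a rough illustration of the streaming idea only (this is not RADiCAL’s plugin or protocol, and the peer addresses and joint names are placeholders), fanning solved mocap frames out to remote session participants can be as simple as broadcasting timestamped joint data:

```python
import json
import socket
import time

# Placeholder participant addresses; in practice these would come from the session.
PEERS = [("192.0.2.10", 9100), ("198.51.100.7", 9100)]
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast_frame(frame_id, joints):
    """Send one solved frame to every participant.

    joints: {'hips': (x, y, z), 'head': (x, y, z), ...} positions in metres.
    """
    packet = json.dumps({"frame": frame_id, "t": time.time(), "joints": joints}).encode()
    for peer in PEERS:
        sock.sendto(packet, peer)

broadcast_frame(0, {"hips": (0.0, 1.0, 0.0), "head": (0.0, 1.7, 0.05)})
```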

To learn more or register for RLab’s upcoming online class Virtual Production: Remote Multi-User Collaboration & Content Creation in the Unreal Engine, visit RLab.nyc.

RLab in the Brooklyn Navy Yard is New York City’s hub for VR, AR and spatial computing.