Journalism 360: Postproduction headaches

Lindsey Miller · Published in Journalism 360 · Jun 26, 2017

In the virtual reality workflow, the postproduction stage can be full of frustrations. After you’ve invested all that time developing stories and planning shoots, it’s not unusual to import your footage and find that a camera didn’t fire or there’s an unavoidable stitch line in the worst possible place.

Journalism 360 sat down with some virtual reality experts to talk about common postproduction headaches. This panel, moderated by Sarah Hill, CEO of StoryUP, included editor Kaite Mullin from The New York Times’ The Daily 360, training and development manager Nick Whitaker from the Google News Lab, 360 filmmaker and journalist Melissa Bosworth of Tiny World Productions, immersive content creator and educator Sarah Jones from Coventry University, and director of Journalism 360 Laura Hertzfeld.

VIDEO: THE BASICS

360 heatmap. Courtesy of Retinad and StoryUP VR.

Melissa Bosworth shares that her biggest postproduction headache is “getting the timing right.” “It can take so long to render out the stitches,” she says, “and that’s the first step you have to do before you can even start editing.” Her solution is to export at a lower resolution initially so she can see how her stitches look. This allows her to refine the stitching before committing time to exporting at the desired resolution.
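
Bosworth doesn’t name the tool she uses for that preview pass, so the following is only a minimal sketch of the idea, assuming ffmpeg is installed and driven from Python: knock the stitched equirectangular master down to a small, fast-encoding proxy so the stitch lines can be checked before committing to a full-resolution render. The file names and settings are placeholders.

```python
import subprocess

def render_preview(stitched_master: str, preview_out: str, width: int = 1920):
    """Render a low-resolution proxy of a stitched equirectangular master.

    Hypothetical helper: assumes ffmpeg is on the PATH and the master is
    already stitched. Equirectangular video is 2:1, so height = width / 2.
    """
    cmd = [
        "ffmpeg", "-y",
        "-i", stitched_master,
        # Fast, low-quality settings: this proxy is only for checking stitches.
        "-vf", f"scale={width}:{width // 2}",
        "-c:v", "libx264", "-preset", "ultrafast", "-crf", "28",
        "-an",  # no audio needed for a stitch check
        preview_out,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    render_preview("stitched_master.mp4", "stitch_preview.mp4")
```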

Bosworth has also moved away from rigs like the Freedom360 in favor of the Omni, which comes with software that automatically imports her footage.

VIDEO: TITLES IN 360 VIDEO

Another common VR postproduction headache is adding text to 360 projects. Bosworth says that the built-in captions on platforms like YouTube, Facebook and Vimeo don’t always display properly. To make sure the text appears in every direction a viewer might look, Bosworth has to burn the captions onto her original video, on four faces of a cube.

Bosworth is not alone in this struggle. Both Sarah Jones and Sarah Hill burn in titles on a cube. Hill mentions that there are many software products that help with this. “Mettle is one. Dashwood [360VR Toolbox] is another. But who knows? Maybe someday it will actually be baked into YouTube or something like that,” Hill says. Liquid Cinema is another tool that eliminates the need to burn in captions altogether.
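
For a rough sense of what “burning in” looks like outside an editor, here is a minimal sketch using ffmpeg’s drawtext filter, not the Mettle, Dashwood or Premiere workflows the panelists actually use. It stamps the same title at four evenly spaced positions along the horizon of an equirectangular frame, an approximation of the cube-face approach; all file names are placeholders.

```python
import subprocess

def burn_title_four_directions(src: str, dst: str, title: str):
    """Stamp the same title at four horizontal positions on an equirectangular
    video so it is readable in every viewing direction.

    Rough approximation of the cube-face approach described above; assumes
    ffmpeg with drawtext support. Keep the title free of quotes and commas
    for this simple sketch; add fontfile=/path/to/font.ttf on systems
    without fontconfig.
    """
    # Centers of four equal horizontal quadrants, placed at the horizon
    # (vertical center), where equirectangular distortion is mildest.
    positions = [f"main_w*{n}/8" for n in (1, 3, 5, 7)]
    drawtexts = ",".join(
        f"drawtext=text='{title}':x={x}-text_w/2:y=main_h/2-text_h/2:"
        f"fontsize=48:fontcolor=white:box=1:boxcolor=black@0.5"
        for x in positions
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", drawtexts,
         "-c:v", "libx264", "-c:a", "copy", dst],
        check=True,
    )

if __name__ == "__main__":
    burn_title_four_directions("input_360.mp4", "titled_360.mp4", "Journalism 360")
```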

VIDEO: LARGE FILE SIZES

A woman in Ecuador speaks to a 360 camera. Screen capture courtesy of Kyle Perry of StoryUP VR.

Kaite Mullin says one of the biggest struggles for editors at The Daily 360 is working with large files and numerous layers of text and graphics. “We use Premiere from Adobe,” she says. “It kind of becomes too much for the program to handle, so we’ve been having some issues with crashing and with glitching in our exports.” One solution that Mullin has found helpful is exporting a lengthy piece with lots of graphics as a GIF or TIFF sequence and then reimporting the sequence and overlaying the audio.
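
Mullin’s team does this round trip inside Premiere. Purely as an illustration of the idea, assuming ffmpeg is available, the same two steps look roughly like the sketch below: dump the rendered timeline to an image sequence, then rebuild the video from those frames and lay the audio mix back over it. File names and the frame rate are placeholders.

```python
import subprocess

FPS = 30  # assumed project frame rate

def export_tiff_sequence(rendered_timeline: str, pattern: str = "frame_%06d.tif"):
    """Dump every frame of the rendered timeline to a TIFF sequence."""
    subprocess.run(["ffmpeg", "-y", "-i", rendered_timeline, pattern], check=True)

def rebuild_with_audio(pattern: str, audio_mix: str, out: str):
    """Reassemble the image sequence into a video and overlay the audio mix."""
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", str(FPS), "-i", pattern,  # image sequence in
        "-i", audio_mix,                         # separate audio mix
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest",
        out,
    ], check=True)

if __name__ == "__main__":
    export_tiff_sequence("heavy_graphics_cut.mp4")
    rebuild_with_audio("frame_%06d.tif", "audio_mix.wav", "final_360.mp4")
```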

Jones agrees that massive file size is a major headache. Another big problem is managing a multitude of files. “I’ve got all these different files, all these different feeds from so many different cameras, and then the stitch files. And then I don’t like the stitch, so I’ve done it again,” she says. Jones emphasizes that “having a clear file management system is key.”

VIDEO: EXPORTING, RESOLUTION AND FRAME RATE

Exporting can also be a source of frustration. From getting the frame rate and resolution right to making sure the project is sized correctly for a specific platform, there’s a lot to consider. Jones uses Adobe Premiere Pro to edit her projects. “I can just tick my ‘Video is VR’ box, and I don’t have to inject any metadata,” she says. Jones has always used 4096 x 2048 resolution. Hill’s team shoots at 60 FPS but renders out at 30 FPS because user devices handle it better. Bosworth started out shooting at 60 FPS “in hopes of future-proofing” her footage. However, she hasn’t had much success uploading 60 FPS to YouTube, so now she’s shooting at 30 FPS — “for the time being.”
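
The panelists set all of this in Premiere’s export dialog. As a command-line sketch of the same conform step, assuming ffmpeg is installed, the snippet below takes a 60 FPS master down to the 4096 x 2048, 30 FPS delivery settings mentioned above. Note that it writes no 360 metadata; outside Premiere that still has to be injected separately (for example, with Google’s Spatial Media Metadata Injector). File names are placeholders.

```python
import subprocess

def conform_for_upload(src: str, dst: str, width: int = 4096, fps: int = 30):
    """Conform a 360 master to a target resolution and frame rate.

    Sketch only: the panelists do this in Premiere's export settings.
    Assumes ffmpeg on the PATH; equirectangular video is 2:1, so the
    height is half the width. 360 metadata is NOT injected here.
    """
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", f"fps={fps},scale={width}:{width // 2}",
        "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
        "-c:a", "aac",
        dst,
    ], check=True)

if __name__ == "__main__":
    # e.g. take a 60 FPS stitch down to 30 FPS, 4096 x 2048 for upload
    conform_for_upload("master_60fps.mp4", "upload_4096x2048_30fps.mp4")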

VIDEO: DATA MANAGEMENT

Courtesy of StoryUP VR.

A big frustration of shooting in VR is not being able to see what’s been captured, and that headache carries over into postproduction when an editor has to select footage to stitch. Our VR experts share how they manage their footage to save time.

Mullin shares that footage is organized by hand at The New York Times. Since editors often work with footage that they did not shoot, detailed notes are imperative to help them identify the best shots and avoid having to stitch together every piece of footage. The New York Times also has a clear file naming system so that multiple editors working on a project can easily find the footage they need.
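
The Times’ exact convention isn’t spelled out in the panel, so the snippet below is only a hypothetical illustration of the principle: copy raw clips into a predictable project/shoot/camera layout, rename them consistently, and keep a log of the original file names so nothing gets lost. Every path and name in it is made up.

```python
import csv
import shutil
from pathlib import Path

def organize_shoot(raw_dir: str, project: str, shoot: str, camera: str,
                   dest_root: str = "footage"):
    """Copy raw clips into a PROJECT/SHOOT/CAMERA layout with consistent names.

    Hypothetical naming scheme, for illustration only; the panelists describe
    the principle ("a clear file management system"), not this exact layout.
    """
    dest = Path(dest_root) / project / shoot / camera
    dest.mkdir(parents=True, exist_ok=True)
    with open(dest / "rename_log.csv", "w", newline="") as log:
        writer = csv.writer(log)
        writer.writerow(["original", "renamed"])
        # GoPro-style cards use uppercase .MP4; adjust the pattern as needed.
        for i, clip in enumerate(sorted(Path(raw_dir).glob("*.MP4")), start=1):
            new_name = f"{project}_{shoot}_{camera}_{i:03d}{clip.suffix.lower()}"
            shutil.copy2(clip, dest / new_name)
            writer.writerow([clip.name, new_name])

if __name__ == "__main__":
    organize_shoot("SD_CARD/DCIM/100GOPRO", "ecuador", "2017-06-12", "cam1")
```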

Hill color-coded her files by hand when using a Mac. However, after her studio switched some workstations to PCs because of their higher horsepower, she made an unfortunate discovery. “Those color codes don’t translate, and then you have a bunch of random files,” says Hill. “You can’t put Humpty Dumpty together again.”

Bosworth shoots with the Omni, which imports the files for her, but she also struggles with organizing her files by hand. “I often have like eight different folders, and three of them are labeled ‘fail.’ Like something went wrong with the rig,” she says.

VIDEO: TRANSITIONS

One significant editing challenge is how to transition from shot to shot. It’s important to make sure viewers can comprehend their location and not feel like they’re jumping from place to place.

Jones uses long fades to black or white for artistic pieces. “I don’t like to do anything that breaks the feeling of presence. The hard cuts don’t always work,” she says. The mood of a piece influences the type of transition she chooses.

Conversely, Mullin aims for invisible rather than artistic transitions. She says, “Particularly in 360, where you’re really trying to immerse the audience in the piece, a transition that you don’t notice is the best one.” To achieve this, she utilizes hard cuts or crossfades. When cutting to a different area in the same space, she lines up an object that is in both shots to give the viewer a point of reference. These “match cuts” prevent the viewer from becoming disoriented.

Nick Whitaker, who sees himself more as a consumer of VR than a producer, agrees with Mullin that the viewer shouldn’t see or feel transitions. “The more seamless, the more frictionless that experience is, I think the better the overall experience will be,” says Whitaker.

Bosworth finds hard cuts challenging because viewers need time to reorient themselves. She prefers dips to black and crossfades.
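
All of these transitions are built in the panelists’ editing software. As a rough sketch of the same effects from the command line, assuming ffmpeg 4.3 or newer (for its xfade filter), two clips of matching resolution and frame rate can be joined with a crossfade or a dip to black; a hard cut, of course, needs no filter at all. File names and durations are placeholders, and audio is left out for brevity.

```python
import subprocess

def join_with_transition(clip_a: str, clip_b: str, out: str,
                         clip_a_duration: float, kind: str = "fadeblack",
                         duration: float = 1.0):
    """Join two clips with a crossfade ("fade") or a dip to black ("fadeblack").

    Sketch only: assumes ffmpeg 4.3+ with the xfade filter, and that both
    clips share resolution and frame rate. The transition starts `duration`
    seconds before the end of the first clip. Audio is omitted for brevity.
    """
    offset = clip_a_duration - duration
    filt = (f"[0:v][1:v]xfade=transition={kind}:"
            f"duration={duration}:offset={offset}[v]")
    subprocess.run([
        "ffmpeg", "-y", "-i", clip_a, "-i", clip_b,
        "-filter_complex", filt, "-map", "[v]",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out,
    ], check=True)

if __name__ == "__main__":
    # 12-second first shot, one-second dip to black into the second shot
    join_with_transition("shot1.mp4", "shot2.mp4", "joined.mp4", clip_a_duration=12.0)
```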

VIDEO: WHO’S DOING IT WELL?

Courtesy of Kyle Perry of StoryUP VR.

As with any craft, the key to improving your skills is to watch good content created by other people. Our experts identify a few VR storytellers they’re keeping their eyes on.

Hill mentions Dylan Roberts from Freelance Society, who shoots compelling news stories in dangerous places, such as Iraq. She also praises Socrates Lozano from Scripps, who reports on natural disasters by creating narration-less pieces that utilize natural sound.

Whitaker recommends checking out the Google News Lab YouTube channel, where he has curated a playlist of quality VR content. Many of these pieces are from smaller local newsrooms that have been able to experiment with VR techniques. For example, KQED from San Francisco has been “cranking out some really great local stories using some pretty basic gear.” Whitaker also applauds the creators of the “Threshold” podcast series for “thinking about what podcasts can be like in VR and . . . bringing that audio-based aesthetic to this experience.” Ultimately, Whitaker looks for three key factors when watching VR content: “Does it tell a local story that no one else can tell? Does it sound good? Does it keep me engaged?”

Among the larger newsrooms, Laura Hertzfeld singles out Euronews and RYOT. Like Whitaker, she suggests that VR journalists look at storytelling techniques used by audio producers, which can be applied to 360 videos.

Jones loves that quick 360 news pieces are placing viewers at the heart of a story. She’s excited about getting easy-to-use 360 cameras in the hands of more journalists. Bosworth shares similar sentiments. “I’m so excited about these local and breaking news initiatives, and that it’s becoming more democratized and accessible,” she says. Yet Bosworth also admires National Geographic’s polished, cinematic 360 experiences. “They’re figuring out the languages of how to guide the viewer in a way that is really impressive to me. I think their stuff is really beautiful.”

The New York Times’ The Daily 360 is constantly churning out stories, with production time ranging from a few weeks to just a couple of hours. Mullin reassures VR journalists that “the turnaround is definitely possible” and recommends focusing on the immersive quality of a story. “We’ve found some of our most successful quick turnarounds are things like the Chicago Cubs piece, where it’s a single shot of people in Chicago finding out that the Cubs won the World Series.”

While virtual reality has many obstacles and headaches, the field is full of creative innovators who are constantly finding solutions. As Jones accurately puts it: “Whatever the challenges are at the moment, they won’t be the challenges in six months. There will be something new then. . . . But that’s the joy of emerging technology, as we adopt and we try and we fail and we learn and we play.”

VIDEO: Watch the full hangout discussion about postproduction headaches.
