My adventures in the land of VR

Published in journalism360 · Feb 9, 2017

By Bob Sacha

This post was written to capture what I learned on my journey into 360 video, thanks to generous grants from the Tow-Knight Center for Entrepreneurial Journalism and from the City University of New York (CUNY) Strategic Investment Initiative to the CUNY Graduate School of Journalism, where I’m a Tow Professor.

Look up, look down, look all around: a still from my VR short story “Silo City.” (Photo by Bob Sacha)

Introduction

I wasn’t a big believer at the start of the VR revolution. I received several demonstrations of the Oculus Rift in 2015, and while the technology was cool, the headset was heavy and the animation was very basic. The big change for me, and I think the gateway to a wider audience, arrived when VR came to the smartphone via Google Cardboard.

I remember the exact moment when I saw the possibility in the medium. I found myself in a refugee camp kitchen as men baked bread in the Vrse production for the United Nations, “Clouds Over Sidra.”

Being inside that room a world away, able to look around and take everything in as if I were there, I was hooked. Other scenes in the film also fascinated me: being in the classroom with the subjects, seeing students look directly into the camera as they walked across the playground toward the school, being in a computer game room as a kid slipped by on the way to his seat.

At 8:45 minutes, the film was not short, but I watched it to the end, pausing only to pick myself up off the floor—after I fell over a cabinet in my office. I’d been immersed in the story, walking around holding a Google Cardboard and my iPhone, with headphones attached to my ears.

So I learned my first lesson about 360 video: Best to sit in a swivel chair to avoid personal injury while twirling in circles viewing VR.

“Clouds Over Sidra” was a hit for the United Nations. Wired magazine wrote: “The UN showed the film at fund-raising events, claiming that it helped raise $3.8 billion from donors, and launched a virtual-reality division.”

Since viewing “Clouds Over Sidra,” I’ve seen virtual reality as an exciting and wildly innovative way to tell a story. No one knows if it’s the next big thing in visual journalism, the future of the news business or its savior. But given the speed of change in the way we communicate, who wants to be left behind if it does turn out to be those things? Already, it allows viewers to be immersed in another world and to become part of the story. It’s been called the ultimate empathy machine, and at the same time it’s audience-aware storytelling. And it has the potential to make money for journalists and news organizations because advertisers want it.

Google is betting big, distributing 5 million Cardboard headsets (1.3 million of those have been distributed by The New York Times). Funding for VR in 2016 is estimated at $2.6 billion. The game engine maker Unity, one of the two most popular VR software companies, is now valued at $1.5 billion. VR is predicted to generate $150 billion in revenue by 2020, disrupting mobile. For Facebook’s Mark Zuckerberg, “This is a good candidate to be the next major computing platform. It’s worthy of a lot of investment over a long period.” China is investing big, and the rapid success of Pokemon Go points to more investment in augmented reality.

The audience for VR is expanding rapidly. On the low end, VR is accessible by anyone with a smartphone (more than 2 billion people worldwide). On the high end, VR offers an advanced ride to fans who can afford a specially designed headset hooked to a fast computer.

Clearly, VR is everything. With that type of buildup, who wouldn’t want to be involved?

A J+ VR workshop for journalists, artists and storytellers at the City University of New York Graduate School of Journalism, led by Marcelle Hopkins (white shirt). (Photo by Bob Sacha)

And what if you walked a bunch of seasoned visual journalists through the process of shooting and stitching VR video (see photo above of CUNY J+ workshop with Marcelle Hopkins), then gave them headsets to watch what they had made, and they all exclaimed, “OMFG, this is amazing!” What would that tell you about the medium’s potential?

On the other hand…

Videographers are still figuring out how to create VR. The technology is basic: Most of the camera rigs are hacked, either custom made by VR production companies or cobbled together from a plastic 3D-printed holder and 2, 4, 6, 8, 12 or 24 small action cameras of your choice (such as the GoPro), which require pushing all the right buttons to turn on, then a twist and a clap to sync. The cameras run out of battery quickly or overheat after less than 30 minutes.

Once you’ve got the shot, downloading and organizing that data is a struggle as you try to keep each camera and each shot organized. (Did anyone mention that your video file might actually be split into two parts by the cameras?)

Next, you’ll need a powerful computer and special software that stitches together all those moving videos into a crazy 360 moving quilt. Or, if you’re creating animation, you’ll need to understand the even more complicated world of game engines and have an even more powerful computer.

Yes, there’s a simple all-in-one solution that produces decent still photos and crummy low-res video. Or there’s another one that solves all the problems, but only if you have a certain type of phone and a certain brand of computer. The perfect all-in-one solution will always be just over the horizon.

The complications don’t end there. There’s the distribution issue: Once you produce a VR video, how do you get it to an audience prepared to experience it? What headsets do the viewers need? And what about standards and ethics?

So why did we decide to jump in now? Why not wait?

Still, after considering all the problems with VR and the prospect of feeling like an idiot instead of a reasonably competent videographer, I decided I needed to get into the VR game. I resolved to start experimenting with the tools at hand so I could capture VR video, learn how to use the software and make a lot of mistakes.

I understood that visual storytelling in VR or 360 video is a radically different form of storytelling from anything we’ve done before — starting with the fact that to capture cinematic reality, you need to turn on the camera and disappear, to run away so you are not in the shot.

But I believed and still believe that the key to successful VR isn’t to pull off that vanishing act with smooth moves or slick software, or even to solve the more profound problem of getting viewers to look where you want them to inside a shot even though the technology allows them to look the other way (more on this later).

The key instead is to tell an effective story. And while the tools will certainly change, exploring and understanding how to tell an effective story in VR will not change. I figured that I could pull it off — and that it would be instructive to describe my experience trying even if I failed miserably.

BTW, what is this thing called VR?

Before I share my first attempt at VR, maybe I should try to define VR as I understand it.

Virtual reality, according to the dictionary definition, is the use of computer technology to create a simulated visual and audio environment that the viewer can interact with. VR immerses the viewer in that virtual world through a “box” attached to the face that blocks out the “real” world.

There are two main types of VR video:

1. Cinematic virtual reality is live action captured with one or more 360-degree cameras. It offers or simulates a 2D or 3D view of real space, but only the space surrounding wherever the camera is positioned. As a result, viewers can’t choose to move through the space on their own; they are rooted to the camera’s position, whether it’s fixed or moving. Using a Google Cardboard viewer and a smartphone or a dedicated headset like the Oculus Rift or the HTC Vive, the viewer is immersed in the action and the story.

2. Animation, or a computer-generated replication of reality, is usually rendered in a game engine such as Unity or Unreal. Often referred to as “true VR,” it lets the viewer move through the story, interact with it and influence its outcome. Using a dedicated headset like the Oculus Rift or the HTC Vive, the viewer is immersed in the action and the story.

There is a fierce semantic argument about these terms: Some people say that 360 video is not VR, that in order to have an interaction you need to be able to physically move through a story. I don’t agree.

If you strap on a VR headset or hold a Google Cardboard, put on a pair of headphones and watch something as powerful as Lynette Wallworth’s “Collisions,” you will be transported and really feel like you’ve been in another world. The story, about an indigenous elder in the West Australian desert who lived largely untouched by Western culture until he found himself in the middle of an atomic test in 1950, is predominantly a 360 video with a few animation overlays. Yet it’s an amazing immersive story that makes you feel wonder, awe and other emotions, and that’s VR for me. So I’ll be referring to 360 video as VR in this report.

The advantage of 360 or cinematic video is that it captures real-world events as they existed in the moment they were recorded. The disadvantage is that the viewer’s interaction is limited to being able to look in every direction (360 degrees). Because viewers are rooted to the camera position, they are unable to choose the path by which they move physically through the story.

The advantage of animation is that anything can be created in the computer and the viewer can choose to move through virtual space in a nearly limitless way. (Examples include Nonny de la Peña’s “Hunger,” which uses captured audio and simple graphics built in Unity, and Dan Archer’s “Ferguson Firsthand,” which uses eyewitness reports and graphics created in 3ds Max and combined in Unity; both films are viewable in the Oculus Rift.) Animation also allows the viewer to interact with this computer-generated world. The disadvantage is that, no matter what the source material, the imagery must be created by the computer from a blank canvas. At this time, the images range from basic animations to crude forms that represent reality. Like everything else, that will change.

Methodology

I decided to focus on 360 video, even though the interaction is currently limited to the viewer turning their head 360 degrees, because

1) the skill set required for 360 video is somewhat aligned to that of traditional filmmaking, something I knew well and had taught extensively,

2) I found it easier to see 360 video in a purely journalistic sense since it captures reality, and

3) I knew people who were already experimenting with 360 video in the field of journalism who could answer a string of my dumb questions.

I decided not to focus on animation, even though it holds more potential for viewer interaction, because

1) at this early stage, projects done using animation have simple or crude visuals that remove me (and many others) from immersion in the scene,

2) the idea of “recreations” sets off warning bells for many journalists, and I didn’t want to be mired in that discussion (even though I personally believe that recreations can often lead to a deeper understanding of a story),

3) I had no experience with animation and knew no one in the field, and

4) game engine software is complicated; mastering it would require a steep learning curve.

And the journalism?

For me, the possibility of making someone feel a story is key to creating powerful journalism. Yes, there’s news and that’s important, but I receive a lot of basic news on my phone via push notification, so by the time I check in on a website, I already have the basic facts. But push notifications don’t give me a deeper understanding of the story or the emotion of why I should care about it. One of the things that drew me to video as visual journalism is the possibility of telling longer stories that create empathy and understanding by focusing on characters and emotions and movement. There’s a possibility of increasing that empathy, increasing the viewer’s immersion in a story, with this form of visual storytelling. It’s also important to understand what kind of story would work best in a 360 video.

Journalism is always exploring new forms of storytelling, and it must adapt to survive.

Being able to create journalistic VR video means I can teach the skill to my journalism students, guiding them on how to think and shoot in VR. Helping students create VR stories, test best practices, develop technical workarounds and explore ethical considerations means that students who leave the CUNY J-School are trained as leaders in this emerging field.

At present, many news organizations create VR experiences by partnering with high-level, expensive VR production companies that often have no journalists on staff, which means the news organizations have to send their own journalists to work on the story. There’s a need for trained journalists who have skills in VR storytelling. Journalism graduates trained in VR are more valuable in the job market and allow news organizations to bring visual storytelling back in-house.

Finally, advertisers and marketers are intensely focused on VR because of its transporting potential (think travel, real estate) and they need to put their VR ads in places where there’s VR editorial content. So journalists are going to have to create smart VR content to attract viewers, and also to work with those high-paying advertisers.

Findings

The best way to understand something is to do it, and so I dove into capturing VR video in and around New York City. I’ll list my experiences in the order in which I had them as I narrowed my scope to the best equipment and practices.

The plan was to experiment with capturing 360 video scenes on several different cameras and learn to stitch the footage on the computer. We’d share that knowledge with students and involve them at all stages.

Of course the equipment will change, but the lessons in what makes VR work will not. As I write this, however, all of this equipment is available for purchase.

A walk down Eighth Avenue

Don’t watch this on a full stomach.

I started by using the VSN Mobile V.360 HD camera. While it didn’t quite capture a complete 360 sphere, it was easy to set up and paired with an iPhone app that allowed me to preview the image and change the camera settings quickly and easily. The camera was triggered with a small remote.

After testing the camera at home, my idea was to take a walk down Eighth Avenue and capture people on the street at rush hour.

Here’s my first test, shown as a panorama, shot on the move as I walked along the busy New York City thoroughfare.

But here it is in scary 360 format on YouTube.

https://www.youtube.com/watch?v=5Yd6TlkSX3g&feature=youtu.be

I learned two big VR lessons from my first experiment:

  1. If you’re holding the camera, you’ll be in the video, and unless there’s a good reason for you to be in the video, you’ll look awkward, to say the least.
  2. If you shoot 360 video while walking, the footage will make the viewer nauseated.

Later I learned I was not alone: When we first gave students 360 video cameras to experiment with, they chose to shoot while walking as their first test, almost without exception.

And like me, they moved on to making more interesting things after that.

Advantages: Easy to use; no stitching required as the camera captures everything in a single image.

Disadvantages: Not full 360, as the camera cuts off the top and bottom of the sphere; video quality is not great.

Monster in a box

Or trying to run before learning to walk.

I was super excited by the next camera test: the Freedom360 rig, which required 12 GoPro cameras arranged side by side, two looking in each direction in stereo pairs (like our eyes), to give us 3D 360 video. We carefully numbered all the cameras, numbered the cards that went into them and paired the cameras wirelessly with a remote trigger that would make them all start recording with a single push of a button.

360Heros 12-camera stereo rig for 360 video. (Photo by Bob Sacha)

Or so we thought. The first tests were a disaster as not all of the cameras were running, or some of them were set incorrectly, destroying the possibility of stitching the footage together into a sphere. Each camera records just a section of the scene, so stitching the footage from multiple cameras is like creating a moving quilt from many distinct pieces of video—and all the pieces have to be in the appropriate place for the whole to make sense. If one of the cameras doesn’t start or is set wrong, it’s like trying to create a quilt with a bunch of pieces missing in the middle.

We were eventually able to make a test shot in the school’s newsroom, only to realize that the stitching software could not yet stitch stereo pairs (this feature was subsequently added to the software).

Later I asked Marcelle Hopkins, who directed and produced (alongside Evan Wexler and Benedict Moran) the second video that inspired me—“On the Brink of Famine,” a 360 documentary about the crisis in South Sudan (funded by the Brown Institute for “Frontline”)—if she’d ever done 3D 360 video. She smiled and said wryly that she was still working to master 2D 360 video. This news, coming from one of the best in the 360/VR visual journalism field, was sobering. (See my Q&A with Hopkins below.)

Our experiment with the 12-camera rig was a disaster (thanks to my hubris and insufficient research), but it led to perhaps my greatest lesson: Learning what to shoot in 360 video to tell a story is more important than mastering the technology, because the technology is going to change. Feeling overwhelmed by the equipment changed the way I thought. I felt defeated, but I realized the technology was going to advance and get simpler soon. If I focused on mastering the technology of this complicated form but didn’t think enough about what I was capturing, I’d be lost when the equipment was simplified; all my equipment knowledge would be unnecessary. So I decided to shift focus and concentrate on what to shoot. The forms of effective 360 video storytelling — which are being invented — will work no matter which device you use.

Advantage: 3D video captures the depth that we perceive in the real world.

Disadvantages: Large rig hopelessly complicates shooting, organizing and stitching, and postproduction requires a lot of storage space and computer power.

The whole world in your hand

Or how come everyone doesn’t make it this easy?

Ricocheting to the other end of the spectrum in size, weight and money, I dove into the Ricoh Theta S. I had first seen what this camera could do in a great 360 story by Al Jazeera, “Hajj 360,” which used the first generation of the camera to give people an incredible first-person look at the Hajj in Mecca, Saudi Arabia. The second generation of the camera is just as affordable ($360) and offers much better video quality; best of all, the footage can be stitched using Ricoh software on your phone or laptop. The Theta S shoots both still photos and video and even transmits a preview to your phone while in still mode.

This is the perfect tool to start exploring 360 video—a kind of digital sketchbook. The video resolution is low; the video image is not very clear and is barely usable for the web (although that did not seem to bother Al Jazeera, which goes to prove that content trumps video quality). However, the Theta S is the best entry point to test out ideas and shots in 360 video and quickly see what works and what does not.

Putting this into practice at the CUNY J-School, I worked with Matt MacVey to create a series of VR Jams for students. We passed out Theta S cameras so that students could shoot and experiment, then return to the classroom, assemble the shots, view them and discuss what worked and didn’t work — all in a three-hour period.

As usual, the students came back with interesting insights and 360 video shots, and we captured these on our Tumblr page.

Above, CUNY J-School student Alden Nusser takes the Theta S for a spin, literally, by attaching the 360 camera to a revolving door in midtown Manhattan. His takeaway? “Situate it on a precipice, where two or three environments meet.” (Photo by Alden Nusser, CUNY class of 2015)

CUNY J-School student Barbara Marcolini (white shirt, with phone, above right) experimented with placing the Theta S at different heights. “It’s a fly on the wall. Pretend that the camera is a little fly. You can experiment with scale.” (Photo by Barbara Marcolini, CUNY class of 2016)

Barbara wondered: “A bird among birds? What would a bird see?”

CUNY J-School student Kara Chin (above) worked with wireless audio and the iZugar Z2X on a moving monopod for a tour of 190 Bowery. Read her impressions on VRCUNYJ.tumblr.com. (Photo by Kara Chin, CUNY Class of 2016)

Advantages: Small, light and cheap; comes with stitching software; shoots video or stills; has an image preview for your smartphone; transmits still image directly to phone.

Disadvantages: Lenses scratch easily just from sliding the camera into its case; video quality is low; footage must be hosted on Ricoh’s website but can be embedded.

Sweet and simple

A clever hack, handmade in Hong Kong.

Looking for higher-quality video, we knew we’d have to return to the GoPro and its higher-resolution capture. We were also willing to do a bit of work in the computer. So we moved from the 12-camera GoPro rig to a custom two-camera GoPro rig made by iZugar, a small studio in Hong Kong (thanks to Ray Soto from Gannett for introducing me to these wonders). By adapting two GoPro Hero4 cameras, removing the standard wide-angle lenses and gluing on fisheye lenses that each see slightly more than 180 degrees, iZugar reduced the postproduction problems and increased the odds of success sixfold. The video quality, while not perfect, is much better than the Theta S’s, and students can learn to stitch and assemble the footage since there’s a single seam to stitch. The quality is also good enough for the web.
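To demystify that single seam: every pixel in a finished equirectangular frame corresponds to a direction in space, and stitching is essentially looking up each direction in one of the two fisheye images. Here’s a minimal Python sketch of that lookup for one ideal front-facing lens, assuming an equidistant fisheye model (the function and parameters are my own illustration, not iZugar’s or Kolor’s actual math):

```python
import numpy as np

def equirect_to_fisheye(out_w, out_h, fish_w, fish_h, fov_deg=190.0):
    """For each pixel of an equirectangular output frame, compute the
    (x, y) source coordinates in a front-facing fisheye image.
    Assumes an ideal equidistant lens; pixels beyond the lens's field
    of view come back as NaN. A real stitcher computes two such maps
    (front and back lens) and blends them across the seam."""
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi   # -pi .. pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi       # pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vector for each output pixel (x forward, y left, z up).
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)

    theta = np.arccos(np.clip(x, -1.0, 1.0))  # angle off the lens axis
    phi = np.arctan2(z, y)                    # angle around the axis

    # Equidistant projection: image radius grows linearly with theta.
    max_r = min(fish_w, fish_h) / 2.0
    r = theta / np.radians(fov_deg / 2.0) * max_r

    fx = fish_w / 2.0 + r * np.cos(phi)
    fy = fish_h / 2.0 - r * np.sin(phi)
    outside = theta > np.radians(fov_deg / 2.0)
    fx[outside] = np.nan
    fy[outside] = np.nan
    return fx, fy
```

With one such map per lens, each frame reduces to a pixel remap plus a blend along the overlap, which is part of why two cameras are so much friendlier than twelve.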

CUNY J-School student Guglielmo Mattioli used the iZugar during his summer internship at CityLimits.org to make a series of 360 videos of Jamaica Bay.

http://citylimits.org/2016/08/17/a-video-visit-to-jamaica-bay/

Here’s a 360 video I shot with Matt MacVey at the CUNY J-School when the architects gave a tour of the new space the school is building. We attached a wireless microphone to each of the three architects but picked up a tremendous amount of radio interference, which made the audio unusable. So we ended up using our backup recorder, which had been attached to the monopod.


Bob Sacha (above left) and Matt MacVey hide in the restroom during filming. Since the cameras can see 360 degrees, you need to start the cameras and then run to get out of the shot. (Photos by Matt MacVey and Bob Sacha)

One fun idea was to have Eric, one of the architects, carry the camera over to the window at 2:30 so we could get a better view outside.

https://www.youtube.com/watch?v=KpgQiTWgSLI&feature=youtu.be

Advantages: Simpler shooting with just two cameras; faster turnaround due to simpler postproduction and stitching of just two images; higher video quality than the Theta S.

Disadvantages: Not much coverage overlap with the two cameras, so a subject who is close to the camera and in the seam (the line where the coverage from two lenses meets) could disappear. Color correction in the stitching program not perfect.

The industry standard

And still a nightmare.

This six-camera GoPro rig is the most popular VR rig these days (outside of the custom-built rigs that companies like Vrse/Within, Google and Jaunt use). When it works — that is, when subjects are not too close to the camera — it’s a joy. But with six cameras capturing more visual information, there are more possibilities for error. My first successful shot was a test on the street in front of the CUNY J-School. I learned two things: If you stand a ball of blinking cameras on a stick in the middle of a NYC sidewalk, people just ignore it. And passers-by who get too close to the rig (closer than four feet) trip up the software when it tries to stitch subjects moving from one camera’s field of view to another.

Finally, a VR story that worked! See the film here: https://www.youtube.com/watch?v=BhP2anrGUA0

Finally, I was able to assemble several VR shots together into a small story. I had long been fascinated by Silo City, a group of abandoned grain elevators in my hometown, Buffalo, New York. On a trip there I brought the Freedom360 GoPro six-camera rig and lucked into a poetry reading in one of the silos. The organizers, the Just Buffalo Literary Center, were kind enough to grant me permission to set up my rig inside. I thought through a series of shots, starting outside, then moving into an anteroom and finally capturing a performance by a local American Indian storyteller that included music. I was lucky because the location was visually interesting in every direction, including above.

One of the interesting things about VR is that the viewer can look anywhere in a shot, but when you cut to a new shot, you choose the first thing the viewer sees.

The main choices were actually made in the video editing software when I decided which parts of the longer clips to use and where to set the point of view when the shots changed.
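Mechanically, setting that opening view is simple, because in an equirectangular frame the horizontal axis maps linearly to yaw: rotating the whole scene is just a circular shift of pixel columns. A tiny Python sketch of the idea (my own illustration, not a feature of any particular editor):

```python
import numpy as np

def set_initial_view(frame: np.ndarray, yaw_degrees: float) -> np.ndarray:
    """Rotate an equirectangular frame about the vertical axis so the
    content at `yaw_degrees` sits front and center when the shot starts.
    In this projection, a yaw rotation is a circular shift of columns."""
    height, width = frame.shape[:2]
    shift = int(round(yaw_degrees / 360.0 * width))
    return np.roll(frame, -shift, axis=1)
```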

I felt the story needed some context, and while I’m not a fan of narration, I added some things that people might not know.

What did I learn? Action in the frame helps lead the viewer’s gaze. A subject looking up encourages the viewer to do the same. Following a trio walking through the shot guides the viewer through the scene.

I also learned that I can’t do everything, so I’ve decided to let some minor stitching errors go. I’m rationalizing this by assuming that the next generation of 360 video cameras will feature stitching in the hardware, and these errors will disappear. Part two of my rationalization is that I’d love to move on to learning about sound rather than dive into After Effects.

Software

The wizard here is the software that takes these videos, helps you sync the cameras and then stitches the views from multiple cameras together. When I started, we tested two programs: VideoStitch Studio and Kolor’s Autopano Video Pro (used in conjunction with Autopano Giga). Since then, Autopano Video Pro, made by a French company that was purchased by camera manufacturer GoPro last year, has become the standard among the VR journalists I know. Many functions are automated, and the software keeps getting better and more intuitive with each release. But it’s not perfect yet.

Autopano Video Pro starts by syncing the video using either audio or motion detection. The workflow when shooting is to clap three times, once on each side of the rig, then spin the monopod—akin to casting a magic spell. I’ll often check the rig, turn on the cameras and perform this magic ritual, then carry the rig to the shot location, set it down and walk away.
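Those claps aren’t just ritual: a clap lands as a sharp spike in every camera’s audio track, and software can recover each camera’s time offset by sliding the tracks against each other until the spikes line up, a technique called cross-correlation. A toy Python version of the general idea (my own sketch, not Autopano’s actual code):

```python
import numpy as np
from scipy.signal import correlate

def sync_offset(reference: np.ndarray, other: np.ndarray) -> int:
    """Estimate how many audio samples `other` lags `reference` by
    locating the peak of their cross-correlation; the shared claps
    produce a sharp, unambiguous peak. Divide by the sample rate to
    convert the offset to seconds, then trim each clip to match."""
    corr = correlate(other, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)
```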

To stitch the video from six cameras pointed in different directions, Autopano Video Pro creates a series of still photos and exports them to a separate program, Autopano Giga, which automatically finds common points in each frame, then sends those coordinates back to Autopano Video Pro for rendering and output. The stitched video, a rectangular frame in equirectangular projection, can then be brought into a traditional video editing program like Adobe Premiere (which recently added a VR viewer and some export features).

Viewing in a headset while working is important as the field of view is radically different from the widescreen view.

For distribution, there are dedicated 360 video services like Vrideo and Littlstar, but YouTube and Facebook are the major platforms. At this point, YouTube can support positional audio and allows you to use Google Cardboard on both Android and Apple devices. Facebook has the greater reach and is constantly improving its 360 video player.

Storming the classroom with VR

A CUNY J-School craft class dives into VR. (Photo by Bob Sacha)

One effort that proved to be wise was taking VR into the classroom to introduce it to students. We reached out to all the professors in the CUNY J-School with a proposal for a 30- to 60-minute session or a hands-on demo. We started with a short intro lecture (10 minutes), then a viewing session of a select number of VR stories (we chose “Clouds Over Sidra,” “Vice News VR: Millions March,” The New York Times’ “10 Shots Across the Border” and ABC’s “Inside North Korea”), giving each student a Google Cardboard donated by The New York Times. We followed with small group discussions based on a set of questions: What did you like? What did you not like? Was this story a good match for VR? Then we proceeded to a full class discussion of students’ experiences with the different stories each group watched. We ended with a VR story idea brainstorm.

All the professors teaching Interactive and Visual Craft set aside class time for a session, and many other professors strongly suggested that their students spend the first 30 minutes of their lunch hour in one. We connected 85 percent of the student population to a variety of VR stories in these sessions. Students were overwhelmingly engaged and positive, and their reactions were surprising, insightful and educational, especially to me, in showing how a group of millennials responded to VR. We captured their reactions in a post titled “Storming the Classroom with VR” on our blog, VRCUNYJ.tumblr.com. We knew we were on the right track.

Next steps

So, has it been worth it?

Weeks ago I assisted Marcelle Hopkins in a one-day J+ workshop, Shooting and Stitching 360 Video, attended by working journalists from The Guardian, Time Inc. and Hearst as well as several video freelancers, an artist, a film educator and members of various NGOs. About half the class were experienced videographers. In a single day they shot footage on the streets of NYC using the Freedom360 GoPro six-camera rig and the iZugar GoPro two-camera rig, made a rough stitch of their shots and watched their work in a Samsung Gear VR headset.

When they strapped on the headset to watch the 360 video they had filmed, they were as thrilled as little kids experiencing something for the first time. A medium has real power when it can move a seasoned visual journalist to shout, “Oh, my God, this is so cool!” I’m excited for the chance to bring that experience to a wider audience.

What’s next?

Everyone is excited for the technology to get simpler, and then for it to get cheaper. There are new 360 video products announced every year (coming soon!) that will reduce postproduction effort and time and increase quality by stitching the images internally, eliminating the current workflow of stitching shots by hand. Everyone is hoping for a camera that has the size, ease of use and price of the Theta S but produces higher-quality video.

I’m slowly understanding how to make an effective 360 video shot. Since the camera doesn’t move, there’s no opportunity for the usual close-up, medium shot and wide shot; it all needs to be in a single frame. Next I want to start stringing together a sequence from these 360 video shots to tell a story. The last video above was my first attempt at a short three-shot story.

The next big frontier is 360 sound, or positional audio that mimics the way we perceive audio in the world. Positional audio maps the sound to the video depending on the position of your head, so if someone is speaking on your left, they’ll sound like they’re on your left. And when you move your head to the left, changing your position so that the video subject is in front of you, they’ll sound like they’re in front of you.
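Under the hood, positional audio is often delivered as first-order ambisonics: four channels (W, X, Y, Z) that together encode sound arriving from every direction, which the player re-steers as your head turns. A yaw turn mixes only the two horizontal channels, as in this sketch (my own illustration using the traditional W/X/Y/Z channel naming; sign conventions vary between toolchains):

```python
import numpy as np

def rotate_soundfield_yaw(w, x, y, z, yaw_radians):
    """Counter-rotate a first-order ambisonic sound field when the
    listener turns their head by `yaw_radians`. Only the horizontal
    channels (X front/back, Y left/right) mix; the omnidirectional W
    and vertical Z channels are unchanged by a yaw rotation."""
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    return w, c * x - s * y, s * x + c * y, z
```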

George Lucas has said that sound is 50 percent of a movie, and in VR it might be even more important—a major element that game designers are able to use in the worlds they create. Exploring how sound is recorded and mapped to 360 video is crucial and complicated, and has enormous possibility for storytelling.

What kind of stories work best in 360 video?

In a medium that’s still developing a voice for effectively telling stories, what kind of story works best in 360 is the crucial question. The easy answer is that a good story for VR/360 video is one that takes advantage of the medium’s strength: in part, being able to look 360 degrees around you. I’m beginning to understand that where you point the camera when you start a scene is often more important than what’s behind you. There’s also the possibility of pointing a viewer in an opposite direction from the action and letting them move to discover it, which is something I’ve started to explore. Since the canvas is 360 degrees, I try to put the viewer in a place that creates storytelling in every direction.

Marcelle Hopkins has said her first question is: “Why is this a 360 video / VR story?” Then she follows with: “Could it be better done in a conventional video format?”

I think it helps to consider what 360 degrees might add to a story. One advantage of 360 video is the camera becomes the viewer, creating the possibility for deeper understanding. When a subject looks at the camera while you’re filming in VR, they’re looking directly at the viewer — something that can be unsettling or very powerful.

I do know what doesn’t work. Subtitles are hard to read. Sit-down interviews are pretty boring (see Michelle Obama’s 360 interview with The Verge and what the team had to do to make it interesting), in part because the interviewer has to be in the scene, and there should always be a pretty good reason for anyone to be in a scene.

Moving the camera can be rough, though I have seen a few examples where putting the camera on a slow-moving cart controlled remotely can work, such as RYOT’s “Artist of Skid Row.”

In the end, the best part of this project, and for me the success, is actually capturing 360 video and making a lot of mistakes in the process. There’s a value to guiding students away from a cliff, if only because I fell off it once. And I’m excited to continue exploring.

VR as we know it today might change in the future, but I’m positive that a new form of capturing reality will spring up in its place. And I know that I’ll be well along in my 10,000 hours of thinking of video in a different way. With news organizations likely to run out of funding to hire top-flight VR studios, the students from the CUNY J-School will be well on the road to filling those positions in journalism.

WORKFLOW: A short document with a basic VR workflow overview:

https://docs.google.com/document/d/1uLAuMbTHJw3SiikSqz7Xw2gX0vQ4x1AcpJBflyIbuI0/edit

Finally, read this conversation I had with Marcelle Hopkins, a VR guru who is now the executive producer for 360 News at The New York Times and in charge of The New York Times Daily 360.

__________________

Many, many thanks:

First to Matt MacVey, my fellow CUNY wanderer on this journey of discovery in the land of VR, AR and 360 video.

A huge thanks also to students and fellow teachers at the CUNY J-School, Dean Sarah Bartlett and Associate Dean Andrew Mendelson, Professor Jeff Jarvis, Hal Straus, Professor Travis Fox, Professor Jeremy Caplan and my fellow Tow Professor Miguel Paz; Dan Archer; Marcelle Hopkins and Jenna Pirog at The New York Times; Ray Soto at Gannett; Erica Anderson at Google; Sarah Hill at StoryUP; Professor Robert Hernandez at USC; Professor Ken Harper at Syracuse University; and many more…

This post was originally written in fulfillment of a generous grant from the Tow-Knight Center for Entrepreneurial Journalism at the CUNY Graduate School of Journalism, where I’m a Tow Professor, and from a CUNY Strategic Investment Initiative grant.

Bob Sacha, New York City

  • Let me know your thoughts :)
