360 Filmmaking for Designers

This read is for designers who want to understand what 360 filmmaking can do creatively with tools on the market today, not what might be possible tomorrow with six-figure equipment and an R&D department. We will focus on what is possible right now with your standard-issue MacBook, a 360 camera, and some inexpensive video editing software. We will also look at how to make your films easily accessible to viewers with cheap, or increasingly free, Cardboard headsets. The field is evolving at a blazing pace, and as a result you can get surprisingly far with low cost and a very manageable learning curve!

The tutorial covers what 360 immersive filmmaking is and where it fits within the VR landscape, then moves on to a key creative question: what is it good for anyway? From there we will cover our workflow and the short but critical list of hardware and software tools required. Finally, we will finish with the step-by-step process you can follow to create your own film, along with key tips and gotchas.

What exactly is 360 filmmaking?

We define 360 filmmaking as monoscopic capture that allows the viewer to freely rotate the view around the camera's nodal point. Quite simply, they can't move within the virtual space, but they can certainly look around. This is the most common form of VR content currently available, for example on YouTube.

So how exactly is this kind of 360º view captured? The solution involves using two or more synchronized wide-FOV cameras. The captured media is then “stitched” together into a seamless panorama. We will dig into this a little more deeply later. For now you can visualize the process as similar to what happens when you capture panoramic photos with your smartphone. As you move your phone camera around, more scene information becomes available and gets tacked onto the photo you already have. The difference with 360 video capture is that we are doing this 24 or 30 times a second, for the entire 360º sphere around the camera.

That is a good segue to where 360 filmmaking fits within the overall VR landscape. Without stereoscopy, can we even consider it VR? While 3D games and apps running on high-end headsets such as the Oculus or Vive sit further up the immersive pyramid, they require significantly more effort and cost and are also harder to access. Furthermore, 360 immersive filmmaking poses lots of interesting creative questions and is already a rich sandbox to explore, so why not start there? This is how we can build the foundation for a more immersive future.

What is it good for anyway?

360 filmmaking is certainly not for everything. Camera direction is a powerful tool for storytelling. The composition of a shot and the camera’s movement are important creative choices, certainly not left to chance. So why give that control up to the viewer?

Creatively, 360 filmmaking is still in its infancy, so it’s a fun time to play around and learn, but be ready to make mistakes! Also, it’s best to keep it simple: short films with bold, simple ideas and minimal edits. Here are three promising uses we have discovered through our own experiments and examples from others…

First, introducing new worlds. When characters are taken to new worlds on film, we typically see an establishing shot, trail behind the characters, or switch to their POV. However, it always feels vicarious. What if the viewer could experience these new magical places for the first time the way the characters do? The entire story may not necessarily continue in this format, but opening up a new world in this way can be truly powerful and memorable.

Second, building empathy. Making eye contact with a character in an immersive environment creates a powerful emotional moment. Removing the boundaries of the frame creates an immediate connection that is simply not possible with traditional filmmaking. Naturally, story tilts the scales and makes us suspend disbelief. We have all experienced the magical moment, while reading a book or watching a movie, when we can literally feel the joy or pain of a character. What is unique to VR is that belief is the default state.

Third, capturing context. This is a particularly interesting one for designers, for example in ethnographic videos for design research. In addition to shooting the subject in their kitchen or studio apartment, imagine capturing the rich context in which the dialog is taking place. Or recording user tests in the wild, capturing rich data that may not seem relevant in the moment but is nonetheless available to go back and parse later. With editing, it is even possible to annotate key aspects within the environment.

Our Workflow

If you have made it this far, you are hopefully asking yourself: how do I start tinkering with this thing? Here is what our workflow looks like.

Hardware: Samsung Gear 360, Galaxy S6 or S7, MacBook

Software: Gear 360 app for phone, Android File Transfer app for Mac, Adobe Premiere Pro for Mac

Subscription: YouTube account

This is by no means the only possible workflow. For example, there are several good options for 360 cameras, ranging from the even cheaper LG 360 Cam or Theta 360 to the prosumer Omni from GoPro. Similarly, there are more flexible stitching apps, including the free Gear 360 Action Director from Samsung itself (PC only).

With this workflow, we have tried to strike a balance between cost, quality, and a manageable learning curve, while staying on a Mac, since that is the predominant computer for most designers. Perhaps the biggest hurdle for most designers will be the Samsung smartphone. Unfortunately, the Gear 360 app doesn’t work on the iPhone or even on other Android phones. Rather than letting the Gear 360 shine in its own right, Samsung has decided to tie its fortunes to their flagship smartphones. I do hope they will reconsider. (There is a stitching app available for the Mac, but in my opinion the workflow gets convoluted. Another route is to get Windows running on your Mac and do the stitching in Gear 360 Action Director. Please let me know if you find a better solution.)


Once all the apps are installed, pair the Gear 360 to the phone. And that’s pretty much it. Time for lights, camera, action!


We will use the Gear 360 app on the phone as our camera control and viewfinder. In the manager application, ensure that:

  • Resolution is set to 4K (3840 x 1920)
  • Mode is 360º (both fisheye lenses enabled)

It’s critical to use the highest video resolution, since the viewer will only see about a 110º FOV, which is less than a third of the 4K frame.
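The arithmetic behind this is easy to sketch. A minimal back-of-the-envelope calculation (the numbers below assume a 3840-pixel-wide frame and an approximate 110º headset FOV):

```python
# Why maximum capture resolution matters: with a ~110 degree headset FOV,
# only a slice of the full 4K panorama is visible at any one time.
capture_width = 3840   # 4K equirectangular width in pixels, covering 360 degrees
headset_fov = 110      # approximate horizontal FOV of a Cardboard headset

pixels_per_degree = capture_width / 360
visible_width = pixels_per_degree * headset_fov

print(round(pixels_per_degree, 1))  # 10.7
print(round(visible_width))         # 1173, roughly 720p-class horizontal detail
```

So even a 4K capture leaves the viewer with only about 1,200 horizontal pixels in front of their eyes at any moment, which is why lower capture resolutions look so soft in a headset.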

The Gear 360 ships with a mini tripod for tabletop setups; however, a monopod with a self-supporting base can be pretty handy and keeps the wide FOV clear. Some more tips on framing:

  • The camera should be positioned with a couple of feet of clearance all around to minimize distortion.
  • If you are shooting actors, it’s really important to be able to get to eye level with your tripod.
  • Frame the primary action in the center of the primary lens, since that area will have minimal distortion and no stitching seams. During editing you will be able to re-frame horizontally to pick what is front facing, i.e., what the viewer sees if their head is not turned.
Frame primary action in the front lens to orient the viewer and minimize distortion and stitching seams

For VR shoots, the conventional wisdom is that lighting should be kept as even as possible to minimize seams; however, I have not found this to be a significant issue with the Gear 360.

While empathy is very powerful and would motivate fairly simple staging to encourage eye contact, we are nonetheless in an immersive environment and should take advantage of the space. The challenge is how to do this without making it feel gimmicky.

Another key challenge is how to motivate the viewer to follow the primary action, since we don’t control the camera. This is one of the biggest challenges of storytelling in VR. Sound and light are two ways to lead the viewer’s eyes to naturally follow the action. The Duet short from Glen Keane uses a helical structure to determine when the viewer can switch between storylines staged in two parallel worlds.

When thinking about staging, it’s important to consider whether the viewer will be sitting or standing. Sitting is far more common, and although offices may have swivel chairs, that is unlikely at home or in social settings. For seated experiences there is an optimal frustum, far narrower than the full 360º field of view.

Should we use camera movement? It makes the scene more dynamic, but it also leads to a sense of disembodiment, since your viewer will not actually be moving. Ultimately it depends on what the story calls for; for example, a skydiving video shot with the Gear 360 simply won’t work without movement. Also, be careful about speed: moving very fast can be nauseating for the viewer. The Mr. Robot virtual reality experience has some great examples of moving the camera to convey the state of Elliot’s head. It is also a good example of motivating the viewer to look around with light and sound effects that feel integrated into the experience.


As we discussed before, the scene is captured by two fisheye cameras.

Unstitched image from the two Gear 360 cameras

In order to allow the viewer to look around freely in all directions, we have to stitch these images into an equirectangular projection. This is the same projection used to create world maps.
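To make the projection concrete, here is a minimal sketch of how a viewing direction maps to a pixel in an equirectangular frame. The function name and defaults are our own, not from any stitching tool; longitude and latitude simply map linearly to x and y, exactly as on a world map:

```python
import math

def equirect_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
    """Map a viewing direction to equirectangular pixel coordinates.

    yaw_deg:   -180..180, 0 = front of the camera
    pitch_deg: -90..90,   0 = horizon, +90 = straight up
    """
    # Longitude maps linearly to x and latitude linearly to y, the same
    # linear mapping used by equirectangular world maps.
    x = (yaw_deg + 180) / 360 * width
    y = (90 - pitch_deg) / 180 * height
    return int(x) % width, min(int(y), height - 1)

# The front of the camera at the horizon lands in the center of the frame.
print(equirect_pixel(0, 0))  # (1920, 960)
```

Because the mapping wraps around horizontally, yaw -180º and +180º land on the same column, which is why the left and right edges of a stitched frame must line up seamlessly.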

Image after being stitched on the phone

After pairing the camera to the phone, the very act of saving the movie in the Gear 360 app causes it to be stitched. This process is not built for scale: it can be pretty slow for long movies and cumbersome if you have many clips.

Cardboard FOV is about 110º

On a side note, the viewable-area cropping caused by the panoramic format also explains why VR videos appear low resolution even though we are shooting at 4K. The FOV of Cardboard headsets is about 110º, so only about a third of the image is shown at one time. This, combined with the fact that the phone sits right up against the face with lenses to re-focus the screen, breaks all the assumptions around retina displays. That is why using decent lenses, like those in the Powis headset, is so important.

The Gear 360 app is also capable of trimming the movie, so if that is all you need, you are pretty much done and can upload the movie directly from the app, as long as you have a YouTube account.


We will be editing on a Mac with Adobe Premiere Pro. First, we need to transfer the media over from the phone. On your Samsung phone, set the USB configuration to MTP, then connect it to the computer via a USB cable. In the Android File Transfer app, the movie files (.mp4) can be found under DCIM -> Gear 360. Drag the files into the media folder you will be using for your Premiere project.

Recent versions of Premiere have added some essential enhancements for VR projects. First is the ability to view panoramic media in a special viewer, similar to what you get by dragging around a 360º YouTube video on your phone. Second is the ability to tag the output to identify it as a VR movie. This ensures that YouTube and other streaming services do the right thing when you upload the media.

Any editing operations that manipulate pixels but don’t move them around work as expected; this includes color correction, transitions, filter effects, etc. Any operations that move pixels vertically, for example a vertical offset, will break the panorama. On the other hand, a horizontal offset works and is very useful for setting the default front-facing content area of the movie, i.e., what is visible in the VR viewer with both angles set to 0º. This is handy if you didn’t position the front camera towards the action of your scene.
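One way to see why a horizontal offset is safe is that an equirectangular frame wraps around seamlessly left to right, so re-framing the front is just a wrap-around shift along the width axis. A small NumPy sketch with toy dimensions (the helper name is our own, not a Premiere feature):

```python
import numpy as np

# Toy equirectangular frame: 4 rows (latitude) x 8 columns (longitude).
frame = np.arange(4 * 8).reshape(4, 8)

# Re-framing the "front" of a panorama is a wrap-around shift along the
# width axis; the image stays continuous because the first and last
# columns are adjacent on the sphere.
def set_front(frame, offset_px):
    return np.roll(frame, shift=offset_px, axis=1)

shifted = set_front(frame, 2)

# Rolling back by the same amount recovers the original frame exactly.
print(np.array_equal(np.roll(shifted, -2, axis=1), frame))  # True

# A vertical shift (axis=0) would instead wrap the sky into the floor,
# which is why vertical offsets break the panorama.
```

The same logic explains why color correction and filters are safe: they change pixel values without moving pixels off the sphere's grid.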

Quick dissolves through black between scenes can help keep the viewer oriented. But again, it all depends on what you are doing, and sometimes straight cuts, jump cuts, or dips to white can be used for specific reasons. As with movement, very fast edits can be nauseating. The Fight for Falluja uses a combination of dissolves, cuts, and dips to black in typical documentary style. Since the shots are long, these transitions don’t cause much discomfort.

Keeping text near the equator, central in the image, helps reduce distortion of text and other graphical elements

Titles and any other added flat elements appear distorted because they do not match the equirectangular projection. You can work around this to some extent by keeping the new layers close to the vertical center, i.e., the equator line. In order to add green-screen elements or create 3D layers, ensure that you capture or render them with the same projection mapping as the rest of the 360 footage. There are also plugins that do the necessary pixel math to composite flat elements into 360 footage.
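The distortion grows the further a flat layer sits from the equator. In an equirectangular image, an overlay at latitude φ has to be stretched horizontally by roughly 1/cos(φ) to subtend the same angle it would at the equator, which is why titles hold up best near the vertical center. A rough sketch of that geometry (the helper below is hypothetical, purely illustrative):

```python
import math

# Horizontal stretch factor for a flat overlay placed at a given latitude
# in an equirectangular frame: 1 at the equator, growing rapidly toward
# the poles, where the projection smears a single point across the full
# width of the image.
def horizontal_stretch(latitude_deg):
    return 1 / math.cos(math.radians(latitude_deg))

for lat in (0, 30, 60, 80):
    print(lat, round(horizontal_stretch(lat), 2))
# 0 1.0
# 30 1.15
# 60 2.0
# 80 5.76
```

At 60º above the horizon a title already needs twice its equator width, so any static graphic placed that high will look noticeably warped in the headset.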

When your movie is ready, in the media export window, choose the H.264 format at 4K resolution. Also, ensure that the VR export setting is checked so that all the right metadata is associated with the movie.


Upload your exported movie to YouTube. Since the metadata for VR media is already set, you shouldn’t have to do anything extra. Please note that sometimes there is an additional delay before the movie is converted from flat to VR.

We have assembled a short list of 360 example movies for inspiration as well as a couple of stereoscopic movies just to give a flavor for how they differ. Please send us additional noteworthy examples and we will add them to the list.

Happy immersive filmmaking!

Special thanks to Jason DePerro, my partner in Gear 360 exploration, for his invaluable contributions to this article, including the fantastic illustrations. He created some awesome headsets to give to our fellow One Designers. You can get some too! Check it out.

Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This post is © 2017 Capital One.