Editing and match-cuts in VR

Nick Bicanic
10 min read · Feb 3, 2016

--

It’s all a question of degrees ;)

OK — so a lot of assumptions have been made about editing in VR, most of which I think are wrong. Jessica Brillhart discusses a few at some length in one of her Medium posts.

Many people have said that editing in 360 video is not possible and shouldn’t be done. (FYI — Jessica is NOT one of these people — she both edits and writes about her process — see the above link for detail)

As it happens, the “you can’t edit in VR” argument appears to have only one through-line, and it goes something like this:

“Since there is no frame, a creator doesn’t know what the viewer is looking at — therefore any kind of cut is too dangerous because you could confuse them.”

It doesn’t help, of course, that the tools for cutting 360 video are in their infancy. So far it’s been hard work editing 360 videos efficiently (not to mention the hassles with stitching, etc.)

In any case — I believe fast match-cutting with coverage in VR is absolutely possible — and I decided to prove this by spending a few hours on a quick scene called “Double Trouble” with the able assistance of actors Dominika Juillet and Robert Sander. (Dominika wrote the scene on a napkin and we basically shot the rehearsal. Yes, that’s how we roll — see below for the YouTube clip.)

Any problems with the footage are entirely my fault and mine alone.

A few caveats before I break stuff down.

  1. This was written, shot, quick-stitched and edited in about 3 hours total — yes that’s an excuse but it’s true ;)
  2. Actors were not mic-ed (I was moving fast so I had no time for such niceties)
  3. No fine stitching, no color correct, no sound mix
  4. We had 5 camera positions
  5. Quickstitch was in AVP
  6. Edit was using Premiere Pro (with Skybox Player preview to DK2) and a beta version of Skybox Rotator
  7. Final length was 40 seconds with 6 cuts — seven shots, so just under 6 seconds average shot length — which is shorter than anything I had seen previously

My process was as follows.

We had a short script and a location.

We picked a few camera positions, had a few quick rehearsals and then started rolling. (in the interests of speed again, all our camera positions were static)

Having shot a bunch of 360 stuff before — I knew approximately how the camera rig (KC Lai’s Izugar Z4X) would behave — i.e. which kind of shot would work as a close-up of sorts vs. a more establishing wide shot. I had also played with a lot of OTS (Over The Shoulder) shots — traditionally a shot where you see the shoulder and head (or part thereof) of one performer, as if you are looking…errrm…over their shoulder ;) — and something I call NPOV (Near Point-of-View) shots, where the camera is right in front of a person’s face, allowing multiple performers to interact while still giving the viewer a POV experience on playback.

We shot 2 or 3 takes of each position — quick-stitched our circled takes and then I started to play around in Premiere. Skybox Player was configured to output the Premiere timeline to my DK2 so I could preview any edits in real-time.

It’s worth taking a moment to mention Chris Bobotis and Skybox — without Skybox’s plug-in suite on After Effects and Premiere it would be a damn sight harder to do any kind of work in 360 video (including Nadir removal/patching, general roto work and even basic editing/previewing to Rift DK2) — thanks for that Chris.

Without further ado let’s talk about what happens when you try to make match-cuts.

Here’s the whole clip:

Double Trouble 360 match cut example

Think of that what you will — but lemme break down my thought process. Jessica has a funky animation of concentric rings to show how editing in 360 is like a cipher. That’s an interesting thought — but I think it’s a lot simpler than that.

Let’s consider the very first cut — off the door slam

But before we do that — I would like to bury the notion that people will look everywhere when watching a 360 video. I don’t have formal proof of this, but consider these true statements:

  1. When you enter a new space (e.g. a room) in the real world you don’t look around inspecting every corner of it if something is happening in the middle of the room. You might do this if you are explicitly exploring a place (e.g an Easter Egg Hunt) or if you are paranoid that Big Brother is spying on you but in most cases you are focused on a particular area of interest. Object, Person, Animal etc.
    (note that a special case here is 360 video professionals who are “looking” for stitch lines and/or trying to see everything in this new space — don’t fall for this selection bias — this is NOT how people will watch your content)
  2. As human beings, our necks are not biomechanically built to turn very far comfortably for long periods of time (while our shoulders point forwards)
  3. I often see people whining that spinny chairs are not provided when 360 video is being demoed…errrm…this argument makes sense if we are making installation art — but NOT if we are trying to create a new medium. People sitting at home are not gonna be replacing their friggin’ sofas with spinny chairs. They will simply never see more than a 270° Field of View. (actually most of the time they will only see about 100° but more on this later) That’s not to say the other content is irrelevant. But I just don’t think it’s very relevant.
  4. Case in point — I have shown 180° porn VR videos to a few people on my Rift and no-one has EVER complained that it’s only 180. (in fact many didn’t even notice) — why do we think that is? Because all the interesting stuff is happening in the front.

All that said — no matter how you cut 360 video it will always be possible to “break” the cut if you are a determined viewer. Unless you choreograph every single character and object movement in the scene in every direction across every cut — you will never be able to prevent this.

And guess what — I think that’s totally ok. If someone wants to deliberately destroy their own experience of a story — congratulations they can do it. It would be the cinematic equivalent of talking on a cellphone through a movie.

It is our job as storytellers to guide the viewer. We will do that using object based sound, movement, staging, dialogue, mise-en-scene etc etc — but if we can do that (and I’m confident we can) then cutting is entirely possible.

Anyway — let’s look at the first cut on the door closing.

Cut 1 leading frame — Youtube Field of View

This is the frame just before the cut. The following shot is the frame just after the cut.

Cut 1 trailing frame — YouTube field of view

Let’s define some vocabulary and look at the overall equirectangular video data.

Cut 1 leading frame — Full Equirectangular video

Now let’s overlay the YouTube field of view on this:

Cut 1 leading frame — approximate Youtube field of view overlay

Ok. So now what — well, the default YouTube field of view (using basic arithmetic) is about 103 degrees. However, I’d like to define something called an Arc of Attention — which I’m going to suggest is 25 degrees.

(The number is entirely arbitrary — color/contrast/movement/sound could help increase or decrease this arc — but essentially it’s a section of the visible area of the sphere that the viewer can reasonably pay attention to)

Our job as storytellers in non-interactive 360 video (directors and editors both) is to control/manage this Arc of Attention.

Cut 1 leading frame — field of view and Arc of Attention

Ok so what about after the cut — well here’s what happens:

Cut 1 trailing frame — Arc of Attention

Notice that Dominika is still within the Arc of Attention.
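To make the idea concrete, here is a minimal sketch — my own illustration, not part of any shipping tool — of the match test being described: does the visual anchor stay inside the Arc of Attention across the cut? All yaw values are made up for illustration; only the 25° arc width comes from the piece.

```python
# Illustrative sketch — not from any shipping tool. Checks whether the
# visual anchor (e.g. an actor) lands inside the Arc of Attention after
# a cut. Yaw angles are in degrees; the 25° arc width is the arbitrary
# number suggested above.

def angular_distance(a, b):
    """Shortest angular distance between two yaw angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def cut_matches(anchor_before, anchor_after, arc=25.0):
    """True if the anchor drifts no more than half the arc across the cut."""
    return angular_distance(anchor_before, anchor_after) <= arc / 2

# A matched cut: the anchor barely moves between leading and trailing frames.
print(cut_matches(180.0, 172.0))  # True
# A jarring cut: the anchor jumps far outside the arc.
print(cut_matches(180.0, 250.0))  # False
```

The wraparound in `angular_distance` matters: yaw 350° and yaw 10° are only 20° apart on the sphere, even though the raw numbers differ by 340.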

Now it’s VITAL to note that your shots (unless you are extremely lucky) won’t match straight out of AVP (or whatever stitching program you use). The reason is that when you are stitching, you are unlikely to know key decisions about shot sequence in advance. (In fact it’s highly likely a different person is stitching.) So, for example, the original frames directly out of AVP looked like this:

Cut 1 leading frame — before rotation/offset
Cut 1 trailing frame — pre rotation/offset

You can use a tool built into Adobe Premiere to rotate these clips (the tool is called Offset). However, I prefer to use Skybox Rotator — for two simple reasons. Firstly, Skybox can rotate in all three axes (you can use it to fix/tweak your horizon settings from AVP). Secondly, Skybox Rotator works in degrees, which is how I think — whereas Offset works in pixels.
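For anyone stuck with Offset, the conversion between the two units is simple — an equirectangular frame maps the full 360° of yaw across its width. A quick sketch (the 4096-pixel frame width is just an example, substitute your own):

```python
# Converting between Premiere's Offset (pixels) and Skybox Rotator
# (degrees) for a horizontal (yaw) rotation. An equirectangular frame
# maps 360° of yaw across its full width, so each degree of rotation
# is width / 360 pixels. The 4096-pixel width is an assumed example.

def degrees_to_offset_px(yaw_degrees, frame_width=4096):
    """Horizontal pixel shift equivalent to a yaw rotation in degrees."""
    return yaw_degrees / 360.0 * frame_width

def offset_px_to_degrees(pixels, frame_width=4096):
    """Yaw rotation in degrees equivalent to a horizontal pixel shift."""
    return pixels / frame_width * 360.0

print(degrees_to_offset_px(90))   # 1024.0 — a quarter turn
print(offset_px_to_degrees(512))  # 45.0
```

Note this only covers yaw — the one axis Offset can handle — which is exactly why a three-axis tool is needed for horizon fixes.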

Second cut is straightforward enough:

Cut 2 Leading frame — on left footstep
Cut 2 Trailing frame — on left footstep

Points of note — cut 2 is not quite as well matched as cut 1. The horizontal Arcs of Attention line up, but perhaps I could have rotated the spheres somewhat (vertically) to better frame Dominika on the cut.

Cut 3 gets a little more interesting.

Cut 3 leading frame — hand grabbing curtain

If you’re paying attention you’ll notice that the active area has “moved” in position inside the equirectangular frame. This is because I’m assuming the viewer is following Dominika and turning their head. This is an important assumption — but in this case I don’t feel it’s particularly risky, as the other actor is neither talking nor moving.

Cut 3 trailing frame — Arc of Attention still matches

And now we come to cut 4 — in my opinion this cut doesn’t work and can’t be made to work. I think it’s quite instructive why.

Cut 4 leading frame — viewer is looking way left

The Arc of Attention has moved within the field of view (again, these are just guesses — but more on this later) — but even in a best-case scenario the cut jars somewhat.

Cut 4 trailing frame

Why doesn’t it work? — a number of reasons:

  1. Line cross (traditional kind)
  2. Visual and Psychological Anchor (Dominika) is not in Arc of Attention after cut

It’s possible that, since the viewer has enough context for the room by now, they are not entirely lost. I debated dropping this shot or covering it differently — but I left it in as I thought it was quite instructive precisely because it didn’t work.

aaaand onto the next one.

Cut 5 Leading frame

This one seems fairly clean to me:

Cut 5 Trailing frame — Arc of Attention matches

This shot is my favorite. The camera is directly behind Robert’s shoulder — but from Dominika’s point of view — and her moving approximately 4 feet during the shot nicely transitions her from a wide shot into a close-up (which in turn brings us to the last cut).

Cut 6 Leading frame — nice close-up

And now we reverse.

Cut 6 Trailing frame

Ok phew.

Let me summarise a few things. First of all — I deliberately decided to cut very aggressively. This will not work for every type of story (or, for that matter, every type of shot) — however, my main aim was to prove that if you manage interest and Arcs of Attention correctly, dramatic coverage cuts in 360 video can work. As I said before — one can easily “break” this type of editing — but my point is that I think most normal viewers will not wilfully do this.

One word of warning — the fields of view in DK2, YouTube (native in a PC browser), YouTube through Cardboard and Samsung Gear VR are similar but NOT the same. If you’re skirting quite close to the limits — consider that a 10–15 degree error in your guess of the rotation of the viewer’s head could easily result in a 30 degree difference in the visible edge of frame. In other words, do NOT put the Arc of Attention too close to the edge of your expected field of view.
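If you want a rough sanity check for that margin, here is one way to think about it — the ~103° field of view and 25° arc come from earlier in the piece; the head-yaw error figure and the yaw values are assumptions for illustration:

```python
# Rough sanity check for the margin-of-error warning above: does the
# Arc of Attention stay fully visible even if the guess at the viewer's
# head yaw is off by +/- head_yaw_error degrees? The ~103° field of view
# and 25° arc are from the article; the 15° error is an assumption.

def arc_is_safe(arc_center, expected_head_yaw, fov=103.0,
                arc=25.0, head_yaw_error=15.0):
    """True if the whole arc stays inside the worst-case field of view."""
    # Worst case, the usable half-width of the field of view shrinks by
    # the head-yaw error, and the arc extends arc/2 past its own centre.
    safe_half_width = fov / 2 - head_yaw_error - arc / 2
    d = abs(arc_center - expected_head_yaw) % 360
    return min(d, 360 - d) <= safe_half_width

print(arc_is_safe(180.0, 180.0))  # True — centred, plenty of margin
print(arc_is_safe(215.0, 180.0))  # False — too close to the edge
```

With those numbers the safe zone is only about ±24° around where you expect the viewer to be looking — which is exactly why skirting the edge of frame is asking for trouble.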

Lastly while Jess is one of the few people who is writing eloquently about editing in VR — she is certainly not the only one — here’s another worthwhile piece.

And writing aside, we are certainly not the only ones actually doing this. Mr Lewis Smithingham springs to mind — but I’m sure there are many others.

I like fast cuts — and I think the brain can handle way more rapid juxtaposition than people have tried. Remember that the current average shot length in mainstream motion pictures is somewhere around 2 seconds (often less)

360 video is a different medium — but we can certainly do a lot better than cross-fades, wipes and fades to black.

Get cutting ;)
