Directing actors in scripted narrative 360

Alright. I’m feeling a little rusty because I haven’t posted in a while but I’m going to try and knock out a few posts in rapid succession.

First one is about scripted narrative in 360 video.

TL;DR

  • If you are shooting with a 360 camera, you cannot direct actors without an on-set, real-time 360 preview — which means you need a Teradek Sphere
  • Since emotions are (in my opinion) important, we need actors very close to the camera — which means you need a high-overlap rig, such as the Z4X (or Z4XL) from Izugar

Over the last few months I’ve been busy learning, building, speaking, breaking things (including almost my face ;)), directing and fund-raising. It’s been fascinating to see how fast the industry is moving — there is enthusiasm in Hollywood, high-profile content deals are happening (Baobab, Jaunt part 3, Invisible, Mr. Robot, Suicide Squad etc.), agencies (as in WME, CAA, ICM) are paying attention, Daydream got announced, and of course Sony entered the fray in spectacular fashion with the PSVR. Christmas is going to be, for want of a less generic word, exciting.

Last time I wrote at some length about storytelling I focused on editing — since I felt that it was (and still is) sorely lacking in most 360 video productions. Although I have talked about and refined those ideas a lot (at NAB, CES, VRLA, VRTO and while consulting for Microsoft, Saatchi & Saatchi and others), I put many of the principles into practice a little over a month ago when I directed (and edited) a very fast-paced project for NBC’s hit show Blindspot.

NBC’s Blindspot — “Inside the Scenes”

That piece was initially intended as a scripted narrative piece (but due to timing constraints on the days of shooting we ended up making a highly dynamic inside-the-scenes segment) — see a separate article for “The making of Blindspot”.

In the meantime I’ve been thinking a lot about scripted narrative — and why I believe scripted, photorealistic 360 drama will, for now, grow faster in VR than interactive, game-engine-driven VR worlds. (For my full thoughts on the psychology behind that, see another article entitled “Agency, Speed, Emotion — Pick 2”)

But let’s get to the task at hand.

My aim was to direct a scene with two performers on a small outdoor set.

Please note that my definition of the word DIRECT is quite important here. I specifically mean: do whatever it takes to make the story work better, with a particular focus on blocking/choreography and, of course, performance.

I do not mean just making the cast dance around the camera and hoping for the best. I mean the old-school stuff — look at the monitor, see whether the actor understands the subtlety of emotion/tone that needs to be conveyed, and coax it out of them if they do not.

While it is of course possible to construct a 360 scene in multiple passes (either by shooting with a motion control system or a nodal head) I specifically wanted to experiment with directing from a full 360 rig (with cameras pointing in all directions). This means that I either stand in the shot (committing to the increased post workload of compositing me out of the frame) or I hide.

(A quick primer on the specifics of acting for VR is discussed in part here, in a piece written by Dominika, one of the actresses in this shoot)

If you stay in the shot (e.g. next to the camera) you could theoretically observe the actors’ performance with your own eyes — but I chose to hide, not just because of the extra simplicity in post-production, but also because I specifically wanted to free the actors from having to perform with any kind of crew around, and because I wanted to see exactly what the camera sees (i.e. what the viewer will see).

The equipment load-out was as follows:

Camera gear:

Full set of gear (Z4XL rig + Sphere + Nodal Ninja travelopod) designed to fit in one Pelicase 1510

When assembled, the gear looks like this:

Z4XL on monopod with Sphere on-location wireless setup

(note we also had audio gear — 2x Sennheiser G3 wireless lav + Rx/Tx going into a Zoom H4N)

If you don’t know exactly what the Sphere does, look it up, but suffice to say it makes it possible to see a live-stitched preview of the footage the camera rig is seeing, wirelessly, on an iPad a short distance away, at a high enough resolution that directing a performance is actually possible.

It is hard to overstate exactly how important this is.

Without a Sphere, the only way (today) to get a 360 live-stitched preview of any kind of quality is to use a Nokia Ozo, a Mac tower PC and an Oculus Rift. That configuration is not only cumbersome and (extremely) expensive but also tethered — so it makes quickly moving yourself around on a set quite complicated.

My version of the Sphere configuration (see picture above) is entirely wireless. It’s not the current officially recommended configuration, since Teradek recommends using mains power.

Well, using mains power in a 360 context makes no sense to me unless I am in a highly controlled environment or live-streaming. I want to minimise the rig’s impact — so I designed and built a wireless configuration using off-the-shelf parts.

Wireless 360 monitoring on set with the Teradek Sphere and the Z4XL (and yes actresses Dominika Juillet and Kinga Kierzek are taking selfies while I configure the iPad Pro)

For those who are curious, here’s what’s connected to what:

  • The Z4XL is composed of 4 cameras connected with sync cables, each recording to its own SD card. The post-production workflow is the same as for any multi-camera rig. The sync is not line-level, but it’s within 0.5 frames — so no syncing of clips is required in post.
  • Each camera’s HDMI output goes directly into the Teradek Sphere (which is clipped onto the monopod with a custom-made zip-tie holder) ;) yes, I said zip-tie… you wanna make something smaller — be my guest… I’ll be the first to buy it. It’s a common misconception that the Sphere itself stitches the footage. This is not actually the case — the Sphere just rapidly encodes each camera’s footage into a high-quality local RTMP stream (see the sketch after this list).
  • You then need the Sphere app on the iPad Pro to ingest those streams and stitch them on the fly. For this to work wirelessly, the iPad Pro needs to be connected to the same WiFi router as the Sphere (since the Sphere has no built-in WiFi). I use a small Xiaomi WiFi router, powered from one of the Sphere’s USB ports, for this.
  • Although the cameras are independently powered (each by its own battery), the rest of the system (Sphere + router) is powered from a Sony BP-U60 battery attached to a battery plate — also zip-tied to the monopod.
    FYI, as far as power usage goes, here are some numbers: over a 2-hour period of shooting approximately 15 two-minute takes (the cameras/Sphere/router were never turned off and were broadcasting continuously, but only 30 minutes of actual material was recorded), the iPad Pro went from 100% charge to 65% and the 5800 mAh Sony battery went from 100% to 63%. (Overall I’m surprised/impressed by that; see the back-of-the-envelope runtime estimate after this list.)
  • Although we didn’t do this for any length of time on this set, the iPad can also locally stream a stitched version of the footage to other iPhones connected to the same WiFi (that are running the Sphere app). I did bring an iPod Touch with me to test this out — and it worked — but I was worried about increased battery consumption, since I had never done a multi-hour test before.
  • Audio-wise, both Dominika and Kinga were wired with a lav mic and a Sennheiser G3 transmitter. Yes, we could have hidden the transmitters better but, errrm, we didn’t. Both G3 receivers feed into the Zoom H4N (seen at the bottom of the photo).
    A single white cable runs from the Zoom H4N’s headphone out into the Sphere’s analogue audio input. (This is what provides the audio you monitor on the iPad.)
  • The director (me, in the picture) watches the iPad and listens through headphones, monitoring exactly what the camera sees and hears.
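
To make the RTMP point above a bit more concrete: each camera’s feed leaves the Sphere as an ordinary RTMP stream on the local WiFi, and it is the Sphere app on the iPad that does the real ingest and stitch. Below is a minimal sketch (Python driving the generic ffprobe/ffmpeg tools) of how you could sanity-check those per-camera streams from a laptop on the same router. The host address and stream paths are hypothetical placeholders, not Teradek’s documented URLs.

```python
# Illustrative sanity check of per-camera RTMP streams on the on-set WiFi.
# The Sphere app on the iPad does the real ingest and on-the-fly stitch;
# the host/stream names below are hypothetical placeholders.
import subprocess

SPHERE_HOST = "192.168.1.50"  # hypothetical IP of the Sphere on the local router
CAMERA_STREAMS = [f"rtmp://{SPHERE_HOST}/live/cam{i}" for i in range(1, 5)]  # 4 cameras on the Z4XL

def probe_stream(url: str) -> None:
    """Print codec, resolution and frame rate for one RTMP stream."""
    subprocess.run([
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,avg_frame_rate",
        "-of", "default=noprint_wrappers=1",
        url,
    ], check=True)

def grab_clip(url: str, out_path: str, seconds: int = 10) -> None:
    """Record a short sample of one stream to disk without re-encoding."""
    subprocess.run([
        "ffmpeg", "-y", "-i", url,
        "-t", str(seconds),
        "-c", "copy",
        out_path,
    ], check=True)

if __name__ == "__main__":
    for i, url in enumerate(CAMERA_STREAMS, start=1):
        probe_stream(url)
        grab_clip(url, f"sphere_cam{i}_sample.flv")
```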
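
And since I quoted battery numbers above, here is the back-of-the-envelope runtime estimate they imply, assuming (naively) that the drain stays linear over the whole charge:

```python
# Back-of-the-envelope runtime projection from the numbers quoted above,
# assuming the drain rate stays roughly linear (it won't be, exactly).
SHOOT_HOURS = 2.0

ipad_drain = 100 - 65   # iPad Pro: 35% used over the session
sony_drain = 100 - 63   # 5800 mAh Sony BP-U60: 37% used over the session

for name, drain in [("iPad Pro", ipad_drain), ("Sony BP-U60", sony_drain)]:
    rate = drain / SHOOT_HOURS      # percent per hour
    projected = 100 / rate          # hours from full to empty
    print(f"{name}: {rate:.1f}%/h, roughly {projected:.1f} h projected runtime")

# iPad Pro: 17.5%/h, roughly 5.7 h projected runtime
# Sony BP-U60: 18.5%/h, roughly 5.4 h projected runtime
```

In other words, if the linear assumption roughly holds, this wireless configuration should survive a good half-day of takes before anything needs swapping.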

In principle — how well does this work?

(this is a quick rough stitch with no color correction or audio/video post — yes, I know you can see the stitch line)

Roll sound, roll camera, hide and call action — but, importantly, you can observe the performance from the camera’s perspective the entire time. You can see this in full equirectangular mode (so you see the entire stitched canvas) or in what Teradek calls rectilinear mode (essentially a magic-window preview).
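
For the curious, that rectilinear mode is essentially a pinhole-camera view sampled out of the equirectangular canvas. Here is a rough numpy sketch of the generic projection math involved; this is purely my own illustration (all names and parameters are mine), not Teradek’s implementation:

```python
# Generic equirect -> rectilinear ("magic window") sampling, not Teradek's code.
import numpy as np

def rectilinear_view(equirect, yaw_deg=0.0, pitch_deg=0.0, fov_deg=90.0, out_w=960, out_h=540):
    """Sample a pinhole-camera view out of an equirectangular frame (H x W x 3)."""
    H, W = equirect.shape[:2]
    fov = np.radians(fov_deg)
    f = (out_w / 2) / np.tan(fov / 2)                      # focal length in pixels

    # Pixel grid centred on the optical axis.
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    xv, yv = np.meshgrid(xs, ys)

    # Ray directions in camera space (x right, y down, z forward), rotated by yaw/pitch.
    dirs = np.stack([xv, yv, np.full_like(xv, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (Ry @ Rx).T

    # Convert rays to longitude/latitude, then to equirect pixel coordinates (nearest neighbour).
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])           # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))          # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return equirect[v, u]
```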

You can listen to the audio directly from the on-board camera mics, or, if you use an external recording device like the Zoom that I used, you feed its output into the Sphere — so in effect you are monitoring the entire experience, both audio and video. (If you were using a location sound mixer, their mixing board would output to an additional wireless Tx, the Rx of which would feed into the Sphere.)

You now have some confidence in what you have just recorded, and when the take finishes you walk over to the actors and give them feedback.

And now we can resume our regularly scheduled programming.

Don’t forget that small camera movements make a massive difference to the emotional intensity of a scene. Moving the rig a mere 3 feet can shift a moment from feeling externalised (you are observing an interesting discussion — less intimate, and beat-wise an “establishing” shot) to something very personal (the emotional and visual equivalent of a “close-up”).

That’s it. Now go out there and make some entertaining stuff.

Things to do differently

  • Have an iPad belt clip of some kind, otherwise your hands are tied up holding the fragile thing all the time
  • Bring Bose QC (noise-cancelling) headphones if you are close to set — the Sphere induces a slight lag in the transmission to the iPad (on my setup it was approximately 0.5 seconds). To be clear, the audio you are listening to on the iPad is perfectly synced to the video; however, also hearing the actors’ live audio from across the set is distracting. It’s possible that other Sphere configurations would reduce this lag, but unless the lag is almost zero my point still stands
  • Since we were doing this in somewhat of a hurry, I didn’t construct a monopod mount for the Zoom and the G3 Rx units — the reason you’d want one is, of course, the speed of moving pieces of the rig around on an indie set. (Note that the Zoom F8/F4 come with inline 1/4"-20 mounts, which is quite practical for exactly this reason)
  • Ambisonic mic system on top of the rig — the Z4XL has a 1/4"-20 threaded hole on top, specifically to accept some kind of mount for an ambisonic microphone. A shockmount for a Sennheiser Ambeo would be perfect here — shame they don’t make one yet
  • Teradek should absolutely be making a client for the Galaxy S6/S7 with maximum priority, so we can preview Sphere footage inside a GearVR on set. It would also help with isolation for the director. (The iPad would sit in the DIT’s hands, while the director gets a GearVR and headphones)
  • Additionally, Teradek should have headset presets in their rectilinear views — although it’s great to be able to zoom in to certain segments of the view, being able to reset it to Cardboard/GearVR/Rift/PSVR standard viewport sizes would be extremely beneficial (sort of like action-safe/title-safe markers on a monitor)