Immersive Audio

Audio Production for 360 Degree Video

Orthentix

--

Mixing audio for 360-degree video is emerging as a key skill set in the post-production industry. The skills and knowledge required for this type of mixing build on traditional sound mixing, but they also draw on concepts from other industries, particularly gaming: spatial nodes place sounds in the environment via spatial audio software, creating a surround mix that synchronises with the 360 video and tracks the viewer's head movement, producing an immersive audio experience.

“Crafting a compelling mix generally requires stepping beyond reality to offer an enhanced perspective on the action. The required artistic intent and creative integrity must be preserved from content creation to consumption by an end user, ensuring full immersion into the virtual world”. [1].

The VR landscape is broken into two categories: Interactive VR and Linear VR. Interactive VR is focused on games and interactive movies, where the viewer actively controls the experience in real time as an active participant in the storyline; objects are rendered in real time, with associated metadata for environmental models and so on. Linear VR is used in cinematic storytelling and experiential television, for example sporting events, concerts or virtual cinemas. Linear VR can be either post-produced or recorded live, with spatial and direct microphones. The viewer can control their viewpoint within an overall linear timeline but not change the experience, though they may also be able to influence the audio story based on their gaze direction, for instance emphasising elements in their direct line of sight, such as making an object louder or muting it, i.e. gaze-based interactions.

There are two Flexible Audio Representations used in VR: Sound Field Representation and Object Based Representation. Sound field representations are created with Ambisonic microphones that capture the spatial information of a sound-field, using spherical harmonic decomposition to represent the spatial wavefront. These can be either 1st order/B-format Ambisonics, needing 4 channels, like the Zoom H2n microphone, or 3rd order/Higher Order Ambisonics (HOA), needing 16 channels, like the Neumann Ambisonic microphone. Object based representations are mono audio stems, either recorded or synthesised, and further manipulated in a DAW to create a 3D space with positional metadata. Each object is a mono audio track/sound source, using point-source emitters, positional metadata and spatialising plug-ins that can be modified via head tracking. These object based representations and recordings are used with Ambisonic plug-ins that envelop the head in a sphere of sound, emulating many virtual loudspeakers with Head Related Transfer Functions (HRTF) to fool listeners into thinking the sound is located within a 3D space; the head-mounted display tracks the movement, rotating the sound to match the viewer's direction. [1]. The spatialising plug-ins emulate the acoustic sound environment with inter-aural time differences, inter-aural level differences and the spectral filtering done by our outer ear. The visuals trigger the audio dynamics, amplitude, reverbs, and high- and low-pass filtering.
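To make the panning side of this concrete, here is a minimal sketch of first-order AmbiX encoding (ACN channel order, SN3D normalisation), written in Python with NumPy. It is a simplified model of what a spatialiser panner does internally, not the code of any particular plug-in.

```python
import numpy as np

def encode_first_order_ambix(mono, azimuth_deg, elevation_deg):
    """Pan a mono signal to 4-channel first-order AmbiX (ACN/SN3D).

    mono: 1-D numpy array of samples; azimuth is measured
    counter-clockwise from the front, elevation upward.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono                              # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)    # left/right figure-of-eight
    z = mono * np.sin(el)                 # up/down figure-of-eight
    x = mono * np.cos(az) * np.cos(el)    # front/back figure-of-eight
    return np.stack([w, y, z, x])         # ACN channel order: W, Y, Z, X

# Example: place a 440 Hz tone 90 degrees to the left, at ear height.
sr = 48000
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
bformat = encode_first_order_ambix(tone, azimuth_deg=90, elevation_deg=0)
print(bformat.shape)  # (4, 48000) -> W, Y, Z, X
```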

Interactive and Linear VR

In both interactive and linear VR, the viewer is engaged with the content, able to look around the scene in all directions. With the advent of affordable motion trackers in VR systems, the viewer will soon be able to move around the scene: peek around corners, walk around the space and change orientation to look at the scene from different angles, with six degrees of freedom. This freedom of movement, and the ability to influence the audio experience at playback time, requires authoring and delivering content in a flexible audio format that enables spatial transformations and modification of elements within the audio mix. Object based representation works best for a true immersive experience, as it supports six degrees of freedom in the virtual viewing space, with objects rendered as stems according to their positions and associated metadata containing the environmental models, creating a flexible, extensible and precise interactive VR experience.

“Object Audio has already been proven to be viable and essential to improving immersive experiences in more traditional forms of entertainment like Cinema and Broadcast, and we have found that Object Audio for sound reproduction in VR as well provides a fully immersive experience”. [1].

Production Plan

Hardware Devices: AKG702 headphones, MacBook Pro, UAD Apollo Duo interface, AKG220 mic, Zoom H4n mic.

Software: Reaper DAW, Ambisonic and spatialising plug-ins, as these are compatible with VR/7.1 sound and affordable, and work with my hardware devices.

Delivery Platform: YouTube 360 and Facebook 360, as I already use these platforms for my music productions; if I create further 360-degree productions, they can be distributed alongside my music, keeping all my content together. Both companies have invested heavily in VR technology, and distributing on both doubles the audience reach for my content.

VR Video: ‘Inside The Chamber Of Horrors’. This video interested me because the audio element is a standout feature: it has the ability to emotionally affect the audience and direct the viewer's attention via sound in such an intense environment, leading to great audience impact and immersion via the audio production with object based representations. The post-production, foley recording and sound design for this film will be a unique and gratifying experience.

Inside The Chamber Of Horrors. Retrieved from https://www.youtube.com/watch?v=ez17zODXNV0

For this production I am using a Linear VR environment, as the 360-degree film is in cinematic storytelling format and will be crafted in a DAW for binaural reproduction over headphones. I will use object based representation as I do not own an Ambisonic microphone. Object based representation works best for a true immersive experience as it: provides the required flexibility, extensibility and precision to enhance the overall VR experience significantly; natively enables complete movement along 6 degrees of freedom for the viewer in the virtual space, including translational movement; and allows for high spatial resolution, due to objects being rendered in a scene according to their positions, rather than in a mix. [1].


I will use a 6DOF VR mixing interface, giving freedom of virtual viewing space, with objects rendered as mono stems according to their positions and associated metadata, creating a flexible, extensible and precise interactive VR experience.

The object metadata rendered into each object/sound/audio file covers: the non-diegetic sounds, like musical score and background ambience, with no head-tracking (stereo-locked); the diegetic sounds, like sound FX and dialogue, that are head-tracked (with some elements rendered with higher timbral fidelity, bypassing binaural processing at playback to make the sound seem more realistic); the environmental models of reverberation, distance attenuation and source directivity; and gaze-based interactions, where the user can emphasise elements in the mix by looking at specific points or directions. [1]. These object based representations are further manipulated in the DAW using spatialising plug-ins that manipulate inter-aural time differences, inter-aural level differences and spectral filtering.
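The head-tracking step itself reduces to rotating the sound field against the head's motion. Here is a minimal sketch (my own illustration, not FB360's or any specific plug-in's code) of rotating a first-order AmbiX field about the vertical axis, the core operation that keeps diegetic sounds anchored to the world while the head turns.

```python
import numpy as np

def rotate_bformat_yaw(wyzx, yaw_deg):
    """Rotate a first-order AmbiX sound field about the vertical axis.

    To keep sources fixed in the world while the listener's head turns,
    feed in the negative of the head-tracker's yaw reading each block.
    """
    w, y, z, x = wyzx
    a = np.radians(yaw_deg)
    x_rot = x * np.cos(a) - y * np.sin(a)
    y_rot = y * np.cos(a) + x * np.sin(a)
    return np.stack([w, y_rot, z, x_rot])  # W and Z are unaffected by yaw
```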

To make the VR experience realistic: the direct sound level is 90 dB, with a delay of 1 millisecond; the early reflections level is 84 dB, with a delay of 2 milliseconds and more reflections; the late reverb level is 78 dB, with a delay of 4 milliseconds. The other thing to consider is head shadow, which can attenuate high frequencies by up to 15 dB, as these sounds are more directional, with little or no change in low frequencies due to their long wavelengths. The final mix is processed into binaural reproduction for use on headphones.
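Taken literally, those figures can be modelled as three simple delay taps. The sketch below is my own illustration of the arithmetic, using the blog's numbers relative to the 90 dB direct sound: 84 dB sits 6 dB below it and 78 dB sits 12 dB below it.

```python
import numpy as np

SR = 48000

def reflection_taps(signal, sr=SR):
    """Apply the three arrival stages described above as simple delay taps.

    Levels are relative to the 90 dB direct sound; delay and level values
    are the blog's figures, not universal constants.
    """
    taps = [
        (0.001, 0.0),    # direct sound: 1 ms, reference level (90 dB)
        (0.002, -6.0),   # early reflections: 2 ms, 84 dB
        (0.004, -12.0),  # late reverb onset: 4 ms, 78 dB
    ]
    out = np.zeros(len(signal) + int(0.004 * sr))
    for delay_s, gain_db in taps:
        d = int(delay_s * sr)
        out[d:d + len(signal)] += signal * 10 ** (gain_db / 20)
    return out
```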

Immersive Audio — Part 2

Audio Production for 360 Degree Video

Hardware and software manufacturers are continually creating new tools for recording, mixing and spatialisation; keeping up seems difficult, with new VR applications, hardware, spatialising software and plugins appearing each week. In this Blog I will be demoing a spatial workstation and DAW along with spatialising plugins, experimenting by using them to place audio in a three-dimensional environment over good-quality headphones, to decide which are the best solutions for my project. As this project has a low budget, I will only use free, demo or trial versions of software, so I am limited to applications whose trials last until the completion of my project.

Spatialising Software

Spatialising plugins are essential tools for creating an immersive mix for VR audio. Spatialising software attempts to mimic how our ears (and brain) localise sounds in space (i.e. inter-aural time and level differences, combined with the comb-filter effect of the HRTF/pinna and head movement). In addition to using a VR head-tracking device to send movement and positional information to your DAW, the spatialising plugin can be altered in real time to ‘fool’ the listener into believing they are in a true three-dimensional space with binaural processing. In this section I will be analysing the Facebook 360 Spatial Workstation to explain why it is the best solution for my project.
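For a feel of the cues these plugins manipulate, here is a rough sketch of the two main localisation cues. The ITD uses the classic Woodworth spherical-head approximation; the ILD here is a crude broadband stand-in for the frequency-dependent filtering a real HRTF applies, so both formulas are illustrative rather than what any specific plugin computes.

```python
import numpy as np

HEAD_RADIUS = 0.0875    # metres, average head
SPEED_OF_SOUND = 343.0  # m/s

def interaural_cues(azimuth_deg):
    """Rough inter-aural time and level differences for a source azimuth."""
    theta = np.radians(azimuth_deg)
    # Woodworth ITD: path difference around a rigid sphere
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(theta) + theta)
    ild_db = 6.0 * np.sin(theta)  # illustrative only: far ear is quieter
    return itd, ild_db

itd, ild = interaural_cues(45)
print(f"ITD {itd*1e6:.0f} us, ILD {ild:+.1f} dB")  # ~381 us, +4.2 dB
```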

Facebook 360 Spatial Workstation

The Facebook 360 Spatial Workstation is a software suite for designing spatial audio for 360 video and cinematic VR, and is a current trend in industry practice. It includes plugins for popular audio workstations and a time-synchronised 360 video player/encoder, with head-tracked binaural monitoring options, along with a room modelling algorithm that recreates only the reflections essential for a convincing 3D audio experience. The room modelling algorithm in FB360 generates the first few orders of reflections inside a simple room model. This helps greatly with the externalisation of sound sources and also improves some common binaural rendering problems, such as front/back confusion and elevation perception. You can control the level of these reflections, as well as the reflection order (i.e. the number of individual reflections allowed in one reflection path around a room), using the sliders on the Spatialiser plugin. The size of the room being modelled is controlled globally in the Control plugin. These early reflections can be used effectively with any of your favourite reverberation plugins without any problems; adding a small pre-delay to the reverb (30–60 ms) allows the early reflections to stand out and greatly enhances the spatialisation effect. It is also compatible with Ambisonic recordings and B-format audio, and it's free, with heaps of online tutorials for support.
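The "reflection order" idea can be pictured with the textbook image-source method: reflecting the source in each wall of a shoebox room yields virtual sources whose extra path lengths give the delays and levels of the early reflections. FB360's actual algorithm is not public, so the sketch below only shows the general technique its documentation describes.

```python
def first_order_image_sources(src, room):
    """First-order image sources for a shoebox room (one per wall).

    `src` is the source position (x, y, z); `room` the room dimensions
    (Lx, Ly, Lz). Mirroring the source in each wall gives the six
    virtual sources whose delays/levels form the early reflections.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]  # mirror across the wall
            images.append(tuple(img))
    return images

print(first_order_image_sources((1.0, 2.0, 1.5), (4.0, 4.0, 5.0)))
```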

The FB360 Spatial Workstation is divided into five major components:

  1. FB360 Spatialiser plugin This plugin is meant to be instantiated on every audio channel. You can think of it as an advanced panner that helps you position audio in 3D space with binaural and room simulation algorithms. Behind the scenes, the panner plugin encodes the audio into ambisonics with metadata that is then used to construct the full mix.
  2. FB360 Control plugin This plugin should be instantiated on an aux channel. Ensure only one instance is present in your project. All tracks processed with the Spatialiser plugin must be routed to this auxiliary channel. The Control plugin also facilitates communication with the VR Video Player and controls parameters for room modelling.
  3. VR Video Player The player is a standalone app that communicates with the DAW to ensure good synchronisation with your project. This enables you to mix directly to the VR video while previewing and navigating the video in 360 degrees, either on your desktop or with a head-mounted display.
  4. FB360 Encoder The encoder converts the mix from your DAW into a variety of formats that can be uploaded to popular 360 video platforms or deployed with a VR app, including the .tbe format that can be decoded with the FB360 Audio Engine to achieve high quality spatial audio.
  5. FB360 Audio Engine The audio engine can be integrated in a VR app. The mix designed in the DAW is reconstructed binaurally by taking head-tracking information from the VR device into account. The audio engine is not included with the Spatial Workstation.

Three additional utility plugins are included in the Spatial Workstation. The FB360 Converter plugin can be used to rotate a spatial mix in your DAW. This plugin must be instantiated in the signal chain after the Spatialiser plugin but before the Control plugin. The two FB360 Loudness plugins can be used to approximate the overall loudness of your mix. (Facebook Incubator. N.d.). There are also utilities to help design and publish spatial audio in a variety of formats. Audio produced with the tools can be experienced on Facebook News Feed, Android and iOS devices, Chrome for desktop and the Samsung Gear VR headset through headphones. This is why it is my chosen spatial workstation for the project.

The Facebook 360 workstation is available from FB360 Spatial Workstation

DAW

Most spatialising plugins require the Digital Audio Workstation (DAW) to have multi-channel surround sound (5.1 and 7.1) capabilities. Reaper is the DAW I have chosen for this project as it is a good free/cheap option that supports surround (multi-channel) audio; consequently it has become a popular DAW for many immersive audio creators, and it is compatible with Facebook 360. The other option industry professionals use that supports multi-channel surround sound, 5.1 and 7.1 is Pro Tools HD, though I only have Pro Tools 10 and the upgrade is out of budget for this project. The Reaper DAW is simple, intuitive and has a straightforward workflow for most audio engineers and producers who have been in a DAW environment. If there are lagging issues, look into FFmpeg for video preferences and data. There is also a user guide to download and many online tutorials for extra support from https://www.reaper.fm/videos.php

Reaper is available from http://www.reaper.fm/

Investigating Plugins

In this section I will be analysing spatialising plugins to see which ones will work for my VR immersive mix. I will be discussing:

  • How well did they work?
  • Did the plugins work the way I expected?
  • How intuitive is the workflow?
  • Are there tutorials/learning materials?
  • Were there any compatibility issues?
  • Which will I use in my project, and why?
  • How do they differ from each other?
  • How useful are they to my particular production needs?

WigWare Multi Channel Peak VU Meter

This plugin performed like any VU meter; it was intuitive, with no need for tutorials, and is compatible with Reaper. I would use it on the master channels to check peaks etc. It is available from https://www.brucewiggins.co.uk/

Sound Particles Software

Sound Particles is CGI-like software for sound design, capable of using particle systems to generate thousands of sounds in a virtual 3D world. Particle systems are a common tool used in computer graphics and VFX to create fuzzy/shapeless objects like fire, rain, dust or smoke. Instead of animating all individual points (water drops, grains of dust or smoke), the user creates a particle system, an entity that is responsible for the creation and management of thousands of small objects. Sound Particles uses the same concept, but for audio: each particle represents a sound source (instead of a 3D object) and a virtual microphone captures the virtual sound of the particles (instead of the virtual CGI camera).

The main features are: Huge Sound, with up to millions of sound sources playing at the same time; Immersive Formats, with support for several multichannel formats, including immersive audio; Audio Modifiers, which use random effects to make sure each particle sounds different from every other particle (gain, delay, EQ, pitch/speed, granular); Movement, with automation or movement modifiers to move sound sources and microphones; Video, via importing reference clips and seeing the particles moving on top of the image, for perfect time and space coherence; Sound Propagation, with control over propagation through air (speed of sound, air attenuation, Doppler); 3D Views, to see what is happening using the fantastic 3D views (top, front, perspective, etc.); and Granular Synthesis, with an optional granular audio modifier, using particles to reproduce small audio fragments of original audio files.

It's a really cool concept, though the workflow isn't so intuitive; there are many tutorials on their website http://soundparticles.com/index.html It will be of use in my production for the atmospheric sounds and possibly on some of the screams of horror. The price is $299, though demo and teacher/student versions are available for free (not for use on commercial projects), making it perfect for my project.
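To illustrate the particle idea in audio terms, here is a toy sketch of my own: one source sample is scattered into many randomised copies, each with its own start time and level, the way a particle system treats every particle as an independent sound source (the real Sound Particles also randomises pitch, EQ and 3D position).

```python
import numpy as np

rng = np.random.default_rng(7)

def particle_cloud(grain, n_particles, sr=48000, spread_s=2.0):
    """Scatter one source sound into many randomised 'particles'."""
    out = np.zeros(int(spread_s * sr) + len(grain))
    for _ in range(n_particles):
        start = rng.integers(0, int(spread_s * sr))   # random onset
        gain = 10 ** (rng.uniform(-18.0, 0.0) / 20)   # random level in dB
        out[start:start + len(grain)] += gain * grain
    return out / np.max(np.abs(out))  # normalise the summed cloud

click = np.ones(64)  # stand-in grain; use any short mono sample
cloud = particle_cloud(click, n_particles=500)
```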

ReaPlugs

Reaper has an extensive array of free plugins, so I decided to try a few to see how they will work for my project, as they are compatible with the DAW. The full list is below. Here is more information on ReaPlugs: https://www.reaper.fm/reaplugs/

ReaDelay

  • Multi-tap delay, no practical limit on tap count
  • Up to 10 second delay per tap
  • Tap lengths can be in time (s/ms) or quarter notes
  • Feedback, LPF/HPF, resolution reduction per tap
  • Stereo width per tap
  • Volume/pan per tap

This plugin worked really well, with an intuitive workflow. It has quite a dirty electronic sound that I feel will work perfectly for the film Inside the Chamber of Horrors, giving a similar feel to the processing used in Rob Zombie films.

ReaCast

ReaCast is Reaper's native encoder; it seems pretty basic, so I will stick to FB360 for encoding purposes, as it is compatible with my distribution method and with head-tracked audio for the musical elements of the mix.

In the following Blog I will solidify my production plan and process. For now I am researching Rob Zombie films along with The Ring, The Grudge and American Horror Story — Asylum for some sound design, music and processing ideas. I will finalise this Blog with a list of resources for VR Immersive Audio, including Plugins, formats and specifications compiled from the team at Spook.fm

Immersive Audio — Part 3

Audio Sourced Sounds

In modern audio production for film and TV, a substantial amount of audio is added in the post-production stage, and the same is true for 360 video. The audio in head-tracked immersive VR is one, or a combination, of:

  1. Monophonic files: place sounds on a virtual sphere around the user. You can move the sounds around and add effects to them.
  2. Ambisonic audio: use an ambisonic microphone (like the SoundField ST450, TetraMic or Zoom H2n) to capture the sound of an environment in 3D. You can load the captured sound into the Ambix plugin and run effects on it, rotate it if needed, and then mix.

Mix elements

The elements of a mix in modern audio production for film and TV include atmospheric sounds, dialogue, music, sound FX and foley, categorised into diegetic and non-diegetic. “Diegetic sound is any sound presented as originated from a source within the film’s world. Diegetic sound can be either on screen or off screen, depending on whether its source is within the frame or outside the frame.” (filmsound.org. N.d.). In VR the elements are explained in a Field of Audition. The three elements of the Field of Audition are: diegetic, the direct field of view in front of the audience (100° x 100°), mixed in spatial audio; acousmatic, the other sounds happening behind the viewer, also mixed in spatial audio; and non-/extra-diegetic, scored music and voice-over narration, mixed into head-locked stereo and not influenced by the viewer's head movement. (Beaudoin, JP, & Nair, V. 2016.). Some of these elements will be placed in a fixed three-dimensional space. Others may move around within that space depending on the storyline.

Detailed Asset List:

The 360 video Inside the Chamber of Horrors is more of an experience (with no narrative) than a cinematic piece (with narrative). Following is the detailed asset list and where I will source those sounds from, or how I will create them. For the assets I need to record, I will use the hardware described above and record them as mono sources, for object based representation. I also have an extensive sound library with sound FX, foley and atmospheric sounds in Logic 10 and Logic X, so I will utilise these, along with sound design using digital instruments and synths in Logic X, using Logic native instruments and Native Instruments Komplete 9 synths.

Diegetic:

Foley/Dialogue

· Lights blinking (Logic Sound libraries and Sound Design).

· Male breathing (Record and Sound Design).

· Broom falling and bang with echo (Record and Sound Design).

· Man groaning and moving in mirror (Record and Sound Design).

· Lights static (Logic Sound libraries and Sound Design).

· Sound from the computer of two men breathing (so will have static/digital sound), (Record and Sound Design).

· Clothes Dryer starting (Record and Sound Design).

· Footsteps of lady in computer (Record and Sound Design).

· Fluro turning off/static out (Logic Sound libraries and Sound Design).

· Clothes Dryer stopping (Record and Sound Design).

· TV blinking off (Logic Sound libraries and Sound Design).

· Door opening (Record and Sound Design).

· Breathing (Record and Sound Design).

· Audio from THE RING movie for intertextuality with the computer screen (Sample)

· Footsteps on screen (Record and Sound Design).

· Breathing (Record and Sound Design).

· Metal scraping, like medical instruments (Record and Sound Design).

· Stabbing and slashing (Logic Sound libraries and Sound Design).

· Groans, screams & breathing (Logic Sound libraries, Record and Sound Design).

· Dragging body and mush (Logic Sound libraries and Sound Design).

· Door opening (Record and Sound Design).

· Stabbing, slashing, intense (Logic Sound Libraries).

· Final groans and dying in pain (Recorded, Sound Design and Logic Sound Libraries).

· Sounds of hands around mirror (Record and Sound Design).

· Man groaning in tub (Record and Sound Design).

Atmospheric

· Electrical sounds (Logic Sound libraries and Sound Design).

· Cold, cellar, cavernous sounds (Logic Sound libraries and Sound Design).

· Drips (Logic Sound libraries and Sound Design).

· Eerie sounds (Logic Sound libraries and Sound Design).

· Scary sounds (Logic Sound libraries and Sound Design).

· Cavernous room tone sounds (Logic Sound libraries and Sound Design).

Non-Diegetic

FX & Music

· FX for impulse and impact (Logic Sound libraries and Sound Design).

· FX — Reveal sounds of girl in doorway (Logic Sound libraries and Sound Design).

· FX — suspense (Record, Logic Sound libraries and Sound Design).

· Music — eerie music box playing or old toy piano (Recorded and Sound Design).

· FX — Whispering sounds from THE GRUDGE movie for intertextuality (Sample)

· Music — eerie: Marilyn Manson/The Dope Show intro (Sampled).

· Music — intense and building: Marilyn Manson/The Dope Show (Sampled) layered with

· FX — Sounds from horror movies: eh eh eh eh eh eh (Sampled and Logic Sound Libraries).

· Music — building to an abrupt end: Marilyn Manson/The Dope Show (Sampled), with the FX reverbing under to the end.

· FX — synth filtering out to nothing, closing lights at end (Sound Design).

The music at the very end, Marilyn Manson's The Dope Show, may have royalty implications. Professional practice would be to contact the artist and ask for a licensing agreement for the use of the music in the VR film. The process would be the same for the samples I plan to use from the movies The Ring and The Grudge. I may decide to make my own music and sounds in this case.

There are many good online foley/sound FX/atmos libraries, and I will make use of these if I cannot source a sound from the Logic sound libraries or create it via recording or sound design. The online sound library of interest to me is Spheric Collection, as you can purchase individual samples instead of sample packages. This way you don't end up with a heap of samples you will never use, taking up precious hard-drive space. Spheric Collection is available from http://www.spheric-collection.com/

There are two online Ambisonic Sound FX libraries giving away free sample packs, available from:

Surround Mixing

Beaudoin & Nair from DearVR explain, “It is essential to understand the Field of Audition in VR (FOA) to immerse the audience into the film, giving the audience a sense of presence in the VR space, giving depth to the diegetic space with sound design and use of audio cues, but don’t overdo it as this breaks the immersion and presence.” (Beaudoin, JP, & Nair, V. 2016.). It is important to support narrative arcs spanning multiple shots. This can be done by changing from spatially mixed dialogue to head-locked stereo voice-overs, carrying story arcs over scenes to keep presence and immersion in the video, switching from diegetic to extra-diegetic and vice versa. The same method can be applied to the musical score, carrying it over into the next scene, which also maintains presence and immersion. Contrast between spatialised and non-spatialised audio also adds depth to the mix. (Beaudoin, JP, & Nair, V. 2016.).

Adding audio effects in space is different from adding audio effects on a linear timeline. Be creative with the mix, for example using a low-pass filter to muffle sound that is not in the viewer's direct view. Make use of focus control with gaze-based interactions, where you can control the dynamics of elements in the mix; for example, in an orchestra scene, bring up the volume of an instrument when the viewer looks at it, allowing the audience to be the conductor of the orchestra. (Beaudoin, JP, & Nair, V. 2016.). The full discussion from the team at DearVR is at the following YouTube link.

Retrieved from https://www.youtube.com/watch?v=Ya9GQeKosIw
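A gaze-based focus control of this kind reduces to comparing the viewer's gaze direction with each source's direction and boosting sources inside a focus cone. The sketch below is a generic illustration of that idea; the vectors, cone width and boost amount are my assumptions, not DearVR's implementation.

```python
import numpy as np

def gaze_focus_gain(gaze, source, boost_db=6.0, cone_deg=30.0):
    """Boost a source when the viewer looks at it (gaze-based focus).

    `gaze` and `source` are unit direction vectors. Inside the focus
    cone the source gets up to `boost_db` of gain, fading with angle.
    """
    cos_angle = np.clip(np.dot(gaze, source), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    focus = np.clip(1.0 - angle / cone_deg, 0.0, 1.0)
    return 10 ** (boost_db * focus / 20)  # linear gain to apply

# Looking straight at a source dead ahead gives the full +6 dB boost.
print(gaze_focus_gain(np.array([0, 0, 1.0]), np.array([0, 0, 1.0])))
```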

Problems encountered with Workflow for Cinematic VR

Following is a Vlog discussing a problem I have encountered with the Reaper environment and an inactive media file. The inactive media file is the video Inside The Chamber Of Horrors; luckily it is only missing from Reaper's native media player, and I can still use the file perfectly within the FB360 Spatial Workstation video player and encoder, so this may not affect the final outcome of the project, though I would like to remedy the issue. I have read every forum and watched many tutorials on the subject, to no avail. If anyone knows how to fix this issue, please leave a comment below, thanks!

Retrieved from https://vimeo.com/264931418/e18bea936f

The more you investigate ‘Audio for Cinematic VR’, the more likely you are to hear phrases like “we are still trying to figure out what works”. It is such a new medium, and many producers are trying to figure out the new process while working around the restrictions of the available tools. There are massive gaps in the workflows between linear and interactive VR. We need to bridge this gap by amalgamating the two, taking interactive VR's object-based metadata and linear VR's binaural and ambisonic elements, and making the tools used for VR production compatible with a protocol across the board, similar to the MIDI protocol. Tim Gedemer, one of the most experienced artists working in sound for virtual reality, with collaborations with studios including Jaunt, Specular Theory and others, explains that the workflow in linear VR production is fractured: you need to take your VR viewer off to change things in the mix and then put it back on to check, or even worse, render, encode and upload the mix to YouTube or Facebook to check it, then open the project again, edit the changes, render, encode, upload and check again. We need to move to augmented VR, where we have the tools in front of us in the augmented world to change things in the mix, rather than the stop-start method we currently use in linear VR. (Gedemer, T. 2017.). For more information from Tim Gedemer on sound for VR, the full discussion is on the Soundworks Collection Soundcloud page, available from:

In the following Blog I will show a preview of the mix in stereo, including all the sounds/assets, and some information on Ambisonic microphones. There will also be a discussion on mixing in the VR environment with Reaper and the Facebook 360 Spatial Workstation, along with another Vlog. Stay tuned!

Immersive Audio — Part 4

Audio Production for 360 Degree Video

The following blog explores capturing 360 audio and Ambisonics, along with Ambisonic microphones and additional Ambisonic resources. To finish, there is some information on exciting immersive VR experiences on the Gold Coast, with the Gold Coast Film Festival VR Showcase and the ‘Imaginarium’, a 360-degree projection room and Pod at Coomera Anglican College. To start, here is a work-in-progress report on the immersive audio project ‘Inside the Chamber of Horrors’, and a stereo version of the project (1-dimensional, before I start spatialising it in a 3-dimensional space).

Work-In-Progress Report

The progress I have achieved so far has focused on creating all the assets/sounds for the film ‘Inside the Chamber of Horrors’. These were all created in the Logic X DAW, using the UAD Apollo interface and AKG 420 microphone for the recordings, along with Logic library sounds and FX, and sound design with Logic native synths and Native Instruments Komplete 9 synths. The individual sounds were created with minimal processing, as I will process all the sounds in Reaper in the 3-dimensional space. I bounced each of these as a mono .wav file, and the music in stereo. Following are the stereo versions, in audio and video, of the film ‘Inside the Chamber of Horrors’, so you can hear the progress.

Inside the Chamber of Horrors: Stereo Audio Version

Soundcloud link:

Retrieved from https://soundcloud.com/orthentix/inside-the-chamber-of-horrors-stereo-mix/s-ujzEp

Vimeo link:

Retrieved from https://vimeo.com/266058051/b89f490eb2

Capturing 360 Audio: Introduction to Ambisonics

Ambisonics is not only used to reproduce sound sources, it is also a useful way to capture them. We currently use a 1-dimensional sound-field with stereo and a 2-dimensional sound-field with 5.1 surround in sound for film and TV; Ambisonics uses a 3-dimensional sound-field, including height information. Ambisonics comes in a variety of formats, including A, B, C, D, E and G-formats.

The main formats used in capturing and representing linear VR are 1st order Ambisonics or B-format microphones. These use a tetrahedral array of four near-coincident capsules, capturing 4 channels of audio (W, X, Y, Z). A-format Ambisonic microphones are also sometimes used, outputting the audio information directly from the capsules, which is then further processed into the W, X, Y, Z channels. The information captured from A-format and B-format Ambisonics must be decoded to represent the 3-dimensional sound-field. This information is versatile and flexible, as it can be encoded into other formats at any time, including stereo, quad and 5.1. (Virostek, P. 2017.). Higher order ambisonic microphones are also used in linear VR. These can use a multitude of channels, anywhere from 16 to 32, capturing and representing the audio from a 3-dimensional sound-field with better spatial resolution than A-format and B-format microphones, and producing immense presence and immersion in the 360 environment.
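The A-format to B-format step is, at its core, a sum/difference matrix over the four tetrahedral capsule signals. The sketch below shows that core matrix only; real converters (such as the manufacturers' own plugins) also apply capsule-matching filters that are omitted here, and the 0.5 scaling is just one convention.

```python
import numpy as np

def a_to_b_format(flu, frd, bld, bru):
    """Basic A-format to B-format (W, X, Y, Z) conversion.

    flu/frd/bld/bru are the four tetrahedral capsule signals
    (front-left-up, front-right-down, back-left-down, back-right-up)
    as numpy arrays of equal length.
    """
    w = flu + frd + bld + bru   # omni pressure
    x = flu + frd - bld - bru   # front-back
    y = flu - frd + bld - bru   # left-right
    z = flu - frd - bld + bru   # up-down
    return np.stack([w, x, y, z]) * 0.5
```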

For use in linear VR, depending on the spatial workstation you use, these 16 or 32 channels are then encoded into B-format Ambisonics (4 spatial audio channels plus 2 stereo head-locked channels) for mixing in the 360 environment. This is then further encoded to binaural audio with head-tracking, for use on headphones, to follow the viewer's gaze in the 360 environment. Ambisonic decoding plugins are necessary to record and process the audio information. WigWare and SoundField both have free decoding plugins, available from https://www.brucewiggins.co.uk/?page_id=78 and http://www.soundfield.com/products/surroundzone2. Harpex is another acclaimed Ambisonic decoder, though quite expensive at around 500 EUR, available from https://harpex.net/index.html

Here is a detailed explanation of Ambisonics from The Audiopedia

Retrieved from https://www.youtube.com/watch?v=LrtSleKC11E (November 22, 2017.).

Ambisonic Microphones

“The original Soundfield microphone was first developed in the 1970s and since then a number of variations of this design have been released by different manufacturers. More recently, Ambisonics has become the de facto standard format for spatial audio for VR and 360 video.” (Bates, E. 2017.). Following are descriptions of notable Ambisonic microphones used for capturing 3-dimensional audio information, ranging from very expensive to quite cheap.

Retrieved from https://en-us.sennheiser.com/microphone-3d-audio-ambeo-vr-mic

Sennheiser AMBEO VR Mic: The AMBEO VR Mic makes capturing real spatial sound as simple as any stereo recording. You don’t need to set up several microphones — saving you time, money and manpower. Furthermore, using the AMBEO VR Mic means you’ll never have to add other sounds in the mix anymore to achieve stunning 3D sound.

Especially designed for 360° spatial audio recording, the easy-to-use AMBEO® VR Mic is an ambisonic microphone fitted with four matched KE 14 capsules in a tetrahedral arrangement. This special design allows you to capture the surrounding sound from one single point. As a result you get fully spherical ambisonics sound to match your VR video/spherical 360 content.

ST450 MKII SoundField Portable: The updated portable, battery-powered ST450 MKII microphone system is aimed at location recording film and TV sound specialists everywhere, and builds on the success of SoundField’s previous ST250, ST350 and ST450 portable microphone systems.

SoundField SPS422B Microphone System: The SoundField SPS422B is a unique microphone that applies the SoundField concept to Stereo, Mid/Side and surround applications. In its most basic form, it is a mono and variable angle stereo Microphone with remotely variable pick-up patterns plus the option to deliver Mid/Side and B-Format outputs.

SoundField SPS200 Software Controlled Microphone.

SoundField DSF-B MKII: Digital Broadcast package.

mh acoustics Eigenmike: mh acoustics’ patented Eigenmike® microphone array is composed of many professional quality microphones positioned on the surface of a rigid sphere. Eigenmike® microphone array technology is a two-step process: First the outputs of the individual microphones are combined using digital signal processing to create a set of Eigenbeams. A complete set of Eigenbeams capture the sound-field up to the spatial order of the beamformer. Second, the Eigenbeams are combined to steer multiple simultaneous beam-patterns that can be focused to specific directions in the acoustic field.

Core Sound TetraMic: The first portable, single point, stereo & surround sound Ambisonic soundfield microphone to be available for under $1000 (TetraMic alone), or under $1350 fully configured for most multi-track recorders and audio interfaces. Individually calibrated, each TetraMic is the finest performing microphone of its type in the world.

Zoom H2n: The H2n Handy Recorder is the only portable recording device to come with five built-in microphones and four different recording modes: X/Y, Mid-Side, 2-channel surround and 4-channel surround. Other advanced features include automatic gain control and onboard MS decoding, plus effects like compression, limiting and low cut filtering. You can even use the H2n as a multi-purpose USB microphone!

Additional Ambisonic Resources

Gold Coast Film Festival VR Showcase

Gold Coast Film Festival is at the forefront of Virtual Reality (VR) cinematic innovation through a varied program, in partnership with Byron Bay Film Festival. You’ll be transported deep into films, stories and places designed to expand your perception of the world we live in and the possibilities for immersive storytelling in the years to come. The selection on offer is a mix of inspiring, entertaining, enlightening and fantastic experiences representing some of the very best VR content from around the world and Australia, many of which will resonate with you long after the festival. The GCFF VR Showcase will be held on Saturday 28 April and Sunday 29 April at HOTA, Home of the Arts — Lounge.

Coomera Anglican College: Immersive 360 Education

The Pod — A future-focused centre featuring the latest immersive and interactive technology, designed to take learning out of the traditional classroom. With robotics, interactive touchscreen displays, 3D printing, writeable walls, a 360-degree climate-controlled immersive environment, smart glass and an indoor drone flying space, The Pod makes primary students the architects of their learning.

The Imaginarium — A climate-controlled, 360-degree Imaginarium is the centrepiece of the new learning facility, featuring six laser projectors and cinema-quality surround sound, creating a seamless 360-degree sensory experience without the need for wearable technology. The climate-control technologies can teleport students from the icy cold environments of Antarctica to the sweltering Sahara Desert, and even off planet to Mars, with the wave of a wand.

The following Blog will cover another Work-In-Progress Report of the Immersive 360 project ‘Inside the Chamber of Horrors’ and an Immersive Audio Mix of the project, along with information on Mixing Audio for the 360 environment. Stay Tuned!

Immersive Audio — Part 5

Audio Production for 360 Degree Video

“With interactive 3D audio, there are more storytelling possibilities.”(Baker, E. 2017.).

The following Blog focuses on mix considerations, naturalistic vs cinematic experiences and loudness considerations, as I am putting the finishing touches to the spatialised audio mix. To start, following is a work-in-progress report and a 360/VR link to view progress on the project ‘Inside the Chamber of Horrors’.

Work-In-Progress Report

This week I have been spatialising the mix into a 360 space using the Reaper DAW and FB360 Spatial Workstation. I spatialised all the atmospheric and FX assets, and left the foley and dialogue as mono stems with object based representation. The music is head-locked stereo. There is a slight delay with the door slam, and I still need to dynamically process some of the sounds, do the automation and add the reverbs, though I am pleased with the progress. I have been playing with the FX sounds, changing between mono, 5.0 and Ambix channels; I have gone with mono, as this keeps the mix less muddy and confusing, since foley/dialogue assets play at the same time.

Inside the Chamber of Horrors: Immersive Audio Mix-Progress

Here is a 360/VR link to view the progress of the spatialised audio mix on the project ‘Inside the Chamber of Horrors’. (Please wear headphones).

Facebook Link: Inside The Chamber OF Horrors_Mix. Posted 3 May

https://www.facebook.com/orthentixVRpreview/videos/223547558229677/?

Mix considerations

Those of you familiar with surround mixing for film or picture will have experience mixing audio for a 5.1 (or 7.1) system with dedicated channels for Left, Centre, Right, Surround Left, Surround Right and LFE. This type of surround mixing has certain recommended protocols…with cinematic VR the rulebook is out of the window (and being re-written).

The new considerations, stated by renowned VR sound designer Sally Kellaway, are:

  • Ensure non-critical sounds are non-attention-grabbing
  • Ensure non-visually represented sounds are non-attention-grabbing
  • Help the viewer understand what is happening around them

The non-critical sounds in my project are the atmospherics and music; these are non-visually-represented sounds, so I will use non-attention-grabbing methods when processing them in the mix, for example spatialising them with no direct panning, keeping them continuous throughout the film at lower amplitude with little or no compression, so they are unobtrusive and part of a cohesive experience. To help the viewer understand what's happening in the storyline, I will direct the viewer's attention to visually represented and critical sounds. This will be done with direct panning, ensuring the audio guides the viewer where the director intends, and with higher amplitude and compression so these sounds stand out in the mix, but not overdone. The visually represented and critical sounds in my project are the foley, dialogue and FX.

“When the sound is invisible, that’s when it’s at its best — when it’s simply part of the cohesive experience.” (Baker, E. 2017.).

Naturalistic vs Cinematic Experience

Whether your immersive experience should convey realism or hyper-realism depends on the intention of the visuals. Is your video a cinematic experience with a defined narrative that needs audio effects to exaggerate the viewer's experience? Or are the visuals intended to truly immerse the user in a natural environment? If so, the mix should be natural and free from distracting audio. The video ‘Inside the Chamber of Horrors’ is a cinematic experience with a defined narrative, and audio cues are needed to direct the viewer's attention. The visuals of this film need spine-tingling audio effects to exaggerate the viewer's experience and immerse them in the film with presence.

“Perspective is critical to story. With 3D audio, we recreate the way that we hear in the real world, within a virtual environment; sounds come from above, below, and behind you. It helps finalize the illusion of presence. We can cue a sound behind them and motivate them to turn around. Or we can draw their eye to something by triggering a sound. This helps drive the artistic intent of the director in terms of creating a coherent narrative experience for the user.” (Baker, E. 2017.).

Loudness Considerations

As with many other aspects of VR, there doesn't seem to be any standardised answer to: how loud should your mix be? Should it be the same as broadcast, or perhaps games audio?

The gaming industry has had many discussions regarding loudness recommendations and is starting to follow the International Telecommunication Union recommendations, like the audio industry.

“The loudness recommendations are in-step with the Sony document in adopting the ITU-R 1770–3 algorithms for measuring loudness over a minimum of 30 minutes of representative gameplay (-23 LUFS, tolerance of +-2dB). True Peak not north of -1dBFS (DB below a full-scale sample). The next step for the group is to publish a 2.1 version of the document that recommends numbers on iOS and Android devices as well as web browser, though again, the group at Sony has already done some great work in this direction with their -18 LUFS recommendation for the Sony Vita.” (Menhorn, J. 2013.).

The audio industry follows recommendations from the International Telecommunication Union (ITU): -23 to -25 LKFS (Loudness, K-weighted, relative to Full Scale), with a true peak of -10 to -2 dBFS (decibels relative to full scale), depending on whether it is a stereo or surround mix. (Hinton, J. 2017.). Here is a guide by Jeff Hinton on his blog Frame.io.


The loudness recommendations for gaming are -23 to -21 LUFS with a -1 dBFS true peak, and the audio broadcast loudness recommendations are -25 to -23 LUFS with a -2 dBFS true peak for surround mixes. For my project I plan to follow the audio broadcast loudness recommendations, as I feel these are best suited to 360 cinematic content.
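As a sanity check on those targets, a rendered mix can be metered offline. The sketch below uses the third-party Python libraries soundfile and pyloudnorm (an ITU-R BS.1770 loudness meter); the file name is hypothetical, and the peak check is sample peak, a cheap stand-in for a proper oversampled true-peak measurement.

```python
import numpy as np
import soundfile as sf      # pip install soundfile
import pyloudnorm as pyln   # pip install pyloudnorm

def check_mix(path, target_lufs=-24.0, peak_ceiling_dbfs=-2.0):
    """Report integrated loudness and sample peak against the targets."""
    data, rate = sf.read(path)
    loudness = pyln.Meter(rate).integrated_loudness(data)  # BS.1770
    peak_dbfs = 20 * np.log10(np.max(np.abs(data)))        # sample peak
    print(f"Integrated loudness: {loudness:.1f} LUFS (target {target_lufs})")
    print(f"Sample peak: {peak_dbfs:.1f} dBFS (ceiling {peak_ceiling_dbfs})")

check_mix("inside_the_chamber_of_horrors_mix.wav")  # hypothetical file
```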

Check your work

Once I finalise the 360 experience, I will check the video to ensure that what I see matches what I hear, in regards to timing, position, reverberation and amplitude, ensuring the sounds are clear and at a comfortable volume. This will be done by toggling between my AKG702 studio headphones and Apple earbuds, as most users will be listening through headphones, not laptop or desktop speakers.

Now to finalise my mix, automate and process. The final Blog will cover information on the future of VR and a link to the final Immersive Audio version of the project ‘Inside the Chamber of Horrors’. Stay Tuned!

Immersive Audio — Part 6

Audio Production for 360 Degree Video

This final Blog in the series Immersive Audio: Audio Production for 360 Degree Video will cover information on the future of VR. To start, here are the Facebook and YouTube links to the final version of the project ‘Inside the Chamber of Horrors’, along with information on the production process and reflections.

Final Mix: Inside the Chamber of Horrors 360

Please Wear Headphones!

Facebook link: Inside the Chamber of Horrors. Posted 7 May https://www.facebook.com/orthentixVRpreview/videos/224822541435512/

YouTube link: Inside the Chamber of Horrors. Posted 7 May https://youtu.be/A6aKwWE0_Do


Production Process

After listening back to the first Facebook upload (Inside The Chamber Of Horrors_Mix, posted 3 May), I decided to change the spatialisation and direction of some of the assets, taking into consideration their distance and volume. I mapped out the direction of the assets in a 3D space. In the Reaper DAW, using the FB360 Audio Workstation, the final room dimensions set in the FB360 Control plug-in were 4 m (H) x 4 m (W) x 5 m (L). Following are the final stems/assets used and information on their spatialisation and processing.

Atmospheric

Drone Hum — Spatialized x4. Panned inside the room with reflections set to the late-reverb level, with the 4 tracks EQ'd uniquely and offset for difference in the 3D space.

Room Tone — Spatialized x4. Panned inside the room with reflections set to the late-reverb level, with the 4 tracks EQ'd uniquely and offset for difference in the 3D space.

Drone Dark — Spatialized x4. Panned outside the room, with no reflections, with the 4 tracks EQ'd uniquely and offset for difference in the 3D space.

Pulse — Spatialized x4. Panned outside the room, on a diagonal, with no reflections, with the 4 tracks EQ'd uniquely and offset for difference in the 3D space.

Water Drops — Spatialized x2. Panned outside the room, on a diagonal, with reflections set to the late-reverb level, with the 2 tracks EQ'd uniquely and offset for difference in the 3D space.

These were set to a low volume to set the environment and setting, with the pulse increasing as the scene intensifies.

Foley/Dialogue — Diegetic sounds

Dryer — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections.

Door — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections. This asset had automation on the azimuth, distance and elevation, to give movement to the door closing in the mix/3D environment.

Broom — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections. This asset had automation on the azimuth, distance and elevation, to give movement to the broom falling in the mix/3D environment.

TV Screen — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections.

Fluro Lights — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections.

Hand in Mirror x2 — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections.

Mask Guy — Mono sound source, represented on the object in the storyline, set in the room with direct sound at 90 dB and a 1 millisecond delay.

Creep Girl x2 — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections. The 2nd track was a mono sound source, represented on the object in the storyline, set in the room with direct sound at 90 dB and a 1 millisecond delay.

Footsteps — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections. This asset had automation on the azimuth, distance and volume, to give movement to the footsteps moving around in the mix/3D environment.

Mirror Guy — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections.

Tub Guy — Mono sound source, represented on the object in the storyline, set in the room with early reflections at 84 dB, a 2 millisecond delay and more reflections.

Stab Guy — Mono sound source, represented on the object in the storyline, set in the room with direct sound at 90 dB and a 1 millisecond delay.

Dragging Body — Mono sound source, represented on the object in the storyline, set in the room with direct sound at 90 dB and a 1 millisecond delay. This asset had automation on the azimuth, distance and volume, to give movement to the body being dragged away in the mix/3D environment.

Stabs & Slashes — Mono sound source, represented on the object in the storyline, set in the room with direct sound at 90 dB and a 1 millisecond delay. This asset had automation on the azimuth, distance and elevation, to give movement to the stabs and slashes in the mix/3D environment.

These were all set to a higher volume, following the storyline.

FX

Air Lock — Spatialized to Ambisonic sound source, with no reflections.

Stab Scene — Mono sound source, set in the room 1–2m behind the Diegetic sound, with no reflections.

Girl In Doorway — Mono sound source, set in the room 1–2m behind the Diegetic sound, with no reflections.

Man & Girl — Mono sound source, set in the room 1–2m behind the Diegetic sound, with no reflections.

Outro FX — Spatialized to Ambisonic sound source, with no reflections, though at a higher volume for impact.

These were set to a mid volume so they did not interfere with the dialogue and foley, as those sounds direct the viewer's attention and are diegetic, i.e. part of the storyline.

Music

Dark Dub Beat — Head-locked audio x2, panned L and R, for the Facebook render; for the YouTube render the 2 tracks were set to stereo, panned L and R in the Spatialiser plug-in and routed to the 3D master.

This was set to a mid volume so it did not interfere with the dialogue and foley, as those sounds direct the viewer's attention and are diegetic, i.e. part of the storyline.

Reverb

I used ReaVerb and WigWare Verb (4-channel) on the Room Tone and Drone Hum assets, and ReaVerberate on the whole mix. These reverbs were OK, though in the future I would like to purchase the Trillium Lane convolution reverbs for video-based projects, and look into Dear VR Reverb for VR projects.

Rendering for distribution on Facebook

Facebook uses 2nd order Ambix, therefore there are 9 channels to render from the Control plug-in, plus 2 channels from the head-locked master. I worked at 44.1 kHz, though I would use 48 kHz for future 360 video projects. I named the files Inside the Chamber of Horrors_Facebook_360 and Inside the Chamber of Horrors_Facebook_HL_360, for ease with the different versions.

Rendering for distribution on YouTube

YouTube uses 1st order Ambix, therefore there are 4 channels to render from the Control plug-in, with no head-locked audio. I worked at 44.1 kHz, though I would use 48 kHz for future 360 video projects. I named the file Inside the Chamber of Horrors_YouTube_360, for ease with the different versions.
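The 4 and 9 channel counts both fall out of the same rule: a full-sphere Ambisonic mix of order n needs (n + 1)² channels. A one-line check:

```python
def ambix_channel_count(order):
    """Number of AmbiX channels for ambisonic order n: (n + 1) squared."""
    return (order + 1) ** 2

print(ambix_channel_count(1))  # 4  -> YouTube (1st order)
print(ambix_channel_count(2))  # 9  -> Facebook (2nd order, + 2 head-locked)
print(ambix_channel_count(3))  # 16 -> higher-order microphones
```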

Encoding for distribution on YouTube

In the Encoder, choose YouTube and 1st order Ambix, load the audio rendered for YouTube and the MP4 of the video, then encode and upload to YouTube.

Encoding for distribution on Facebook

In the Encoder, choose Facebook and 2nd order Ambix, load the audio rendered for Facebook (including the head-locked render) and the MP4 of the video, then encode and upload to Facebook.

For more information on how to use the Facebook 360 Audio Workstation, Plug-ins, Rendering or Encoding, head to the Introduction — Facebook Audio 360 documentation link below.

Reflection

I am pleased with the final version of the immersive audio mix, Inside The Chamber Of Horrors, although it is not my best audio mix, as I was focused on learning new software (Reaper and the FB360 Audio Workstation) and a new mixing environment, the 3D space. The audio guides the viewer, sets the eerie environment and makes the viewer's heart race, suiting the film and achieving my goals. The audio volume, distance and spatialised sources/panning are mixed well. In the future I will focus more on processing the assets with EQ, compression and convolution reverbs, to create a better mix. The final version changed a lot compared to the 2D version and my initial plan. I learnt that in 3D audio you need to keep the mix simple, guiding the viewer's attention and creating the environment; layering foley, dialogue, atmospheric, FX and music assets can be too confusing for the viewer in this space. Now that I have learnt these applications and found a workflow that suits me, starting in the Logic DAW and bouncing stems for use in the Reaper DAW, I will focus on mixing and immersing the audience in my future productions. I am excited for future innovations in immersive audio.

Interactive VR

The future of immersive audio for me is to learn interactive VR, to make my immersive-audio interactive music videos with viewer-based attention controls, also known as UI/user interactions. Following is some of my research into this field, with information on VR games audio and the FMOD, Unreal, Unity and WWISE applications.

FMOD is a software tool that allows sound designers to create interactive audio content for use in games, simulators, and other applications.

FMOD Studio is designed to be used in conjunction with the FMOD Studio Programmer’s API. Using the Programmer’s API an audio programmer can implement content made in FMOD Studio into a software project and control its playback through code. There are tutorials available from: http://www.fmod.org/training/

UNITY is another games engine (free for personal use) used to create 2D and 3D games, and it integrates well with VR (and FMOD).

UNREAL ENGINE 4 is a complete set of creation tools for designing games. There is an extensive feature set for designing in VR.

WWISE is Audiokinetic’s advanced interactive audio solution for games. It integrates with Unity and other games engines.

My Future in VR

Amazingly, Unity has released a new version of the software, Unity 2017.3, focused on empowering filmmakers to create truly interactive 360 videos; this is exactly where my attention is with VR content creation, to make my immersive-audio interactive music videos with viewer-based attention controls. You can now bring a 360 2D or 3D video into Unity to create standalone 360 video experiences targeting VR platforms. Unity now offers built-in support for both 180- and 360-degree videos, in either an equirectangular layout (longitude and latitude) or a cubemap layout (6 frames), and has released an Interactive 360 Video Sample Project to use as a template, making creating this innovative content a whole lot easier.

“With Unity you can build real-time effects, interactions and UI on top of your videos to achieve a highly immersive and interactive experience. To make this process even easier, we have just released the Interactive 360 Video Sample Project on the Asset Store. It’s a free download and we encourage creators interested in making interactive 360 videos to give it a try.” (Stumbo, S. 2018.).

The full Unity Blog is available here:

As you can see, Virtual Reality and Immersive Audio are new and innovative technologies; changes are happening weekly with new software and plug-ins, and it's difficult to keep up. Facebook has brought out a new version of its Audio Workstation; the new features are listed below.

Facebook Audio Workstation 3.2

What is new:

  • New! (Spatialiser) Directionality per sound source — frequency filtering based on the direction of the source (see the sketch after this list)
  • New! (Spatialiser) 3D visualization view in the Spatialiser plugin to view directionality cones
  • Improved: Sync with 29.97/23.97 FPS videos
  • Improved: Updated documentation for the object tracker
  • Fixed: (Spatialiser) UI flicker when using the object tracker
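The first item above, directionality per sound source, models how a source radiates differently in different directions. As a sketch of the inner/outer cone model commonly used in game audio (my assumption about the general technique, not FB360's documented implementation), the gain side of it can be computed like this in C++:

```cpp
// Cone directivity: full gain inside the inner cone, outerGain beyond the
// outer cone, and a linear fade in between. The angle is measured between
// the source's facing direction and the source-to-listener vector (radians).
double coneGain(double angle, double innerAngle, double outerAngle,
                double outerGain) {
    if (angle <= innerAngle * 0.5) return 1.0;
    if (angle >= outerAngle * 0.5) return outerGain;
    double t = (angle - innerAngle * 0.5) /
               (outerAngle * 0.5 - innerAngle * 0.5);
    return 1.0 + t * (outerGain - 1.0); // interpolate from 1.0 to outerGain
}
```

A full spatialiser would pair this gain with direction-dependent low-pass filtering, which is what the frequency filtering in the release note refers to.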

For now I'm going to focus on learning Unity and creating content, while dreaming of a future of immersive audio for VR applications with Augmented Reality and DAW interactions, so you can create the mix in the 3D environment without fragmenting the workflow by rendering, encoding and uploading the video, or taking off the headset to check the mix. This finalises the series of blogs on Immersive Audio: Audio Production for 360 Degree Video; I hope you enjoyed the process and information. To view my future Immersive Audio 360 projects, please follow/subscribe to my Facebook or YouTube artist pages/channels below.

Stay tuned for future blogs on Music Production, Creative Industries and Media Culture. Thanks!

VR Immersive Audio Resources — from Spook.fm

This will be a great reference and research tool for my upcoming project and future VR audio applications.

YOUTUBE + Spatial Media Metadata Injector

https://support.google.com/youtube/answer/6395969?hl=en

https://github.com/google/spatial-media/blob/master/docs/spatial-audio-rfc.md

https://github.com/google/spatial-media/releases

GOOGLE Daydream

https://vr.google.com/daydream/developers/

SPOOKSYNCVR

http://www.spook.fm/spooksyncvr/

https://www.youtube.com/watch?v=lzLxmEIYBl4

FACEBOOK 360 SPATIAL WORKSTATION

https://facebook360.fb.com/spatial-workstation/

https://www.facebook.com/groups/1812020965695437/

FACEBOOK GROUPS

https://www.facebook.com/groups/positional.audio.in.vr/

https://www.facebook.com/groups/spatialmusicinvr/

https://www.facebook.com/groups/ambisonictoolkit/

https://www.facebook.com/groups/EuropeVR/

https://www.facebook.com/groups/vraudio/

https://www.facebook.com/groups/wwisewwizards/

https://www.facebook.com/groups/1577098759209665/

AMBISONICS PLUGINS

AAX

Audio Ease 360pan & 360Monitor https://www.audioease.com/360/

Facebook360 Spatial Workstation https://facebook360.fb.com/spatial-workstation/

Flux IRCAM Spat http://www.fluxhome.com/products/plug_ins/ircam_spat-v3

GAUDIO Works http://www.gaudiolab.com/cinematic#works

Harpex http://harpex.net/download.html

Noise Makers Ambi Head, Ambi Pan & Ambi Converter http://www.noisemakers.fr

SoundField http://www.tslproducts.com/soundf…/soundfield-surroundzone2/

VVAudio VVEncode http://www.vvaudio.com/products/VVEncode

AUDIO UNIT

Audio Ease 360pan & 360Monitor https://www.audioease.com/360/

Auratorium http://audioborn.com/auratorium/3d-audio-virtual-reality

B2X http://www.radio.uqam.ca/ambisonic/b2x.html

Flux IRCAM Spat http://www.fluxhome.com/products/plug_ins/ircam_spat-v3

Harpex http://harpex.net/download.html

Noise Makers Ambi Head, Ambi Pan & Ambi Converter http://www.noisemakers.fr

SoundField http://www.tslproducts.com/soundf…/soundfield-surroundzone2/

VST

AAT http://www.ironbridge-elt.com/products/aat.html

Audio Ease 360pan & 360Monitor https://www.audioease.com/360/

ambiX http://www.matthiaskronlachner.com/?p=2015

Auratorium http://audioborn.com/auratorium/3d-audio-virtual-reality

B2X http://www.radio.uqam.ca/ambisonic/b2x.html

Blue Ripple Sound http://www.blueripplesound.com/products/toa-core-vst

Facebook360 Spatial Workstation https://facebook360.fb.com/spatial-workstation/

Flux IRCAM Spat http://www.fluxhome.com/products/plug_ins/ircam_spat-v3

Harpex http://harpex.net/download.html

Noise Makers Ambi Head, Ambi Pan & Ambi Converter http://www.noisemakers.fr

SoundField http://www.tslproducts.com/soundf…/soundfield-surroundzone2/

VVAudio http://www.vvaudio.com/downloads

WigWare http://www.brucewiggins.co.uk/?page_id=78

JSFX

ATK http://www.ambisonictoolkit.net

ATK Github https://github.com/ambisonictoolkit/atk-reaper

MAX

Ricky Graham http://rickygraham.net/?p=176401532

HoaLibrary http://www.mshparisnord.fr/hoalibrary/

ICST https://www.zhdk.ch/index.php?id=icst_ambisonicsexternals

IRCAM Spat http://forumnet.ircam.fr/product/spat-en/

Graham Wakefield http://www.grahamwakefield.net/soft/ambi~/

PURE DATA

HoaLibrary http://www.mshparisnord.fr/hoalibrary/

PLOGUE BIDULE

Aristotel Digenis http://www.digenis.co.uk/?page_id=59

STANDALONE

Sound Particles http://www.sound-particles.com

GENERAL INFORMATION

A well-organized, thorough primer on spatial audio from Google VR.

Daniel Courville, a lecturer at the Université du Québec à Montréal, provides ambisonic research and software.

The Wikipedia page for Ambisonics.

An excellent article on recording B-format ambisonics with regular microphones from Daniel Courville.

Microsoft research into spatial audio, including Head Related Transfer Functions (HRTF).

Ambisonic.net has a variety of information on ambisonic recording.

An Oculus blog post about binaural audio for narrative VR.

More research links compiled by Aaron J. Heller.

A research paper on the science behind acquiring and processing spatial audio by Oliver Thiergart.

Sally Kellaway published a resources page for Interactive and Linear VR audio design with excellent information on designing audio experiences and even composing music for VR.

Enda Bates has a great write-up on his experience using several different ambisonic microphones for an orchestral recording in an interesting space.

Facebook also has general information on 360 video production.

FORMATS AND SPECIFICATIONS

YouTube’s page on how to use spatial audio in 360 and VR videos.

YouTube’s support page on uploading 360 and VR content.

Google Jump support document on using 1st-order ambisonics in Adobe Premiere Pro.

Google Jump support document on using 1st-order ambisonics in Reaper.

Google Jump document on previewing spatial audio in a DAW using Jump Inspector.

Using the Zoom h2n for spatial recording.

Hub for the Facebook 360 Spatial Workstation.

AUDIO DELIVERY SPECS (this will be outdated very quickly; a note on AmbiX vs FuMa follows these specs)

SAMSUNG VR

Video: .MOV or .MP4

FOA

Format: AAC-LC 4.0

16-bit / 48 kHz

channel order & normalisation = AmbiX (=ACN/SN3D)

HRTF = ?

YOUTUBE

Video: .MOV or .MP4 (UHDTV)

FOA

Format: PCM for MOV // AAC-LC 4.0 layout for MP4

16-bit / 48 kHz

channel order & normalisation = AmbiX (=ACN/SN3D)

HRTF = Thrive Symmetric Cube

OCULUS VIDEO

Video: .MKV

Format: Ogg Vorbis

16-bit / 48 kHz

channel order & normalisation = AmbiX (=ACN/SN3D)

HRTF = ?

FACEBOOK / OCULUS TBE

Export: stereo file (FB_2D) + proprietary 8-channel (FB_3D)

WAV file

16-bit / 48 kHz

Convert to .tbe or FB360 video with the FB360 encoder

For stereo reference:

Export TBE HP

JAUNT VR

FOA & DOLBY Atmos

Format: PCM

16-bit / 48 kHz

channel order & normalisation = FuMa

HRTF = square decode
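A note on the recurring "channel order & normalisation" lines above: AmbiX means ACN channel ordering (W, Y, Z, X at first order) with SN3D normalisation, while FuMa, as required by Jaunt, orders the channels W, X, Y, Z and scales W down by 1/sqrt(2). As a worked illustration of my own, here is a minimal C++ sketch encoding a mono sample into first-order AmbiX channels using the standard first-order spherical harmonic formulas:

```cpp
#include <array>
#include <cmath>

// Encode a mono sample into first-order ambisonics in the AmbiX convention:
// ACN channel order (W, Y, Z, X) with SN3D normalisation. Azimuth is in
// radians counter-clockwise from straight ahead; elevation is radians upward.
std::array<double, 4> encodeFoaAmbiX(double sample, double azimuth,
                                     double elevation) {
    double cosEl = std::cos(elevation);
    return {
        sample,                             // ACN 0: W (omnidirectional)
        sample * std::sin(azimuth) * cosEl, // ACN 1: Y (left-right)
        sample * std::sin(elevation),       // ACN 2: Z (up-down)
        sample * std::cos(azimuth) * cosEl, // ACN 3: X (front-back)
    };
}
```

Delivering the same signal in FuMa instead would mean reordering the channels to W, X, Y, Z and multiplying the W channel by 1/sqrt(2).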

References

Altman, M., Krauss, K., Susal, J., & Tsingos, N. Immersive audio for VR [AES Conference Paper]. Dolby Laboratories Inc., CA 94103-1410, USA, & Dolby Germany GmbH, 90429, Germany.

Facebook Incubator. (n.d.). Facebook 360 spatial audio workstation: Documentation [Website]. Retrieved from https://facebookincubator.github.io/facebook-360-spatial-workstation/Documentation/SpatialWorkstation/SpatialWorkstation.html#

Spook.fm. (November 11, 2016). Spook.fm: VR days 2016 [Blog]. Retrieved from https://www.spook.fm/vrdays2016/

Beaudoin, Jean-Pascal, & Nair, Varun. (July 19, 2016). Audio for cinematic VR [YouTube]. Published by Official VRDC. Retrieved from https://www.youtube.com/watch?v=Ya9GQeKosIw

Filmsound.org. (n.d.). Terminology: Diegetic [Website]. Retrieved from http://filmsound.org/terminology/diegetic.htm

The Audiopedia. (November 22, 2017). What is ambisonics? What does ambisonics mean? Ambisonics meaning, definition & explanation [YouTube]. Retrieved from https://www.youtube.com/watch?v=LrtSleKC11E

Virostek, Paul. (March 1, 2017). Creative field recording: An introduction to ambisonics [Blog]. Retrieved from https://www.creativefieldrecording.com/2017/03/01/explorers-of-ambisonics-introduction/

Bates, Enda. (June 19, 2017). Comparing ambisonic microphones [Blog]. Retrieved from https://endabates.wordpress.com/2017/06/19/comparing-ambisonic-microphones/

Baker, Eric. (February 23, 2017). What can you do with 3D sound that you can't do with 2D sound? [Blog]. Retrieved from https://nofilmschool.com/2017/02/3D-audio-virtual-reality-sound-designer-viktor-phoenix-interview

Menhorn, Jack. (February 28, 2013). Designing sound: Loudness in game audio [Blog]. Retrieved from http://designingsound.org/2013/02/28/loudness-in-game-audio/

Hinton, Jeff. (August 9, 2017). Frame.io: 5 broadcast audio specs every editor must understand [Blog]. Retrieved from https://blog.frame.io/2017/08/09/audio-spec-sheet/

Stumbo, Sarah. (January 19, 2018). Getting started in interactive 360 video: Download our sample project [Blog]. Retrieved from https://blogs.unity3d.com/2018/01/19/getting-started-in-interactive-360-video-download-our-sample-project/

--

Orthentix

Music Producer | Artist | Writer | DJ | Radio Presenter — Her blogs cover topics of musicology, music production, philosophy & media culture. www.orthentix.com