Virtual reality applications rely on realistic audio rendering to create and maintain the illusion of being in another world. This post describes how to use ExoPlayer’s new GVR extension, which makes it easy to render spatial audio in VR applications.
The Google VR SDK documentation has a great introduction to how spatial audio works and is worth reading as an overview.
The ExoPlayer GVR extension provides GvrAudioProcessor, which wraps GvrAudioSurround from the Google VR SDK. This audio processor renders spatial audio to stereo, handling both standard multichannel audio streams and ambisonic soundfields as input.
The extension will be included in the upcoming ExoPlayer 2.3 release, but until then you can try it out on the dev-v2 branch. If you’re using SimpleExoPlayer, override buildAudioProcessors in the RenderersFactory:
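A minimal sketch of the override, assuming the ExoPlayer 2.x APIs on the dev-v2 branch (`DefaultRenderersFactory` with a protected `buildAudioProcessors`, and `ExoPlayerFactory.newSimpleInstance` taking a `RenderersFactory` and a track selector); check the exact signatures against the branch you build from:

```java
import com.google.android.exoplayer2.DefaultRenderersFactory;
import com.google.android.exoplayer2.ExoPlayerFactory;
import com.google.android.exoplayer2.RenderersFactory;
import com.google.android.exoplayer2.SimpleExoPlayer;
import com.google.android.exoplayer2.audio.AudioProcessor;
import com.google.android.exoplayer2.ext.gvr.GvrAudioProcessor;
import com.google.android.exoplayer2.trackselection.DefaultTrackSelector;

// Keep a reference to the processor so you can push head orientation
// updates to it later.
final GvrAudioProcessor gvrAudioProcessor = new GvrAudioProcessor();

RenderersFactory renderersFactory =
    new DefaultRenderersFactory(context) {
      @Override
      protected AudioProcessor[] buildAudioProcessors() {
        // Insert the GVR processor into the audio processing chain so
        // that output is spatialized before playback.
        return new AudioProcessor[] {gvrAudioProcessor};
      }
    };

SimpleExoPlayer player =
    ExoPlayerFactory.newSimpleInstance(renderersFactory, new DefaultTrackSelector());
```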
If you’re constructing renderers directly, you can pass a GvrAudioProcessor to your audio renderer’s constructor.
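For the direct-construction case, a sketch along these lines should work: `MediaCodecAudioRenderer` takes a trailing varargs list of `AudioProcessor`s. The `eventHandler` and `audioRendererEventListener` variables are placeholders for your own instances, and the exact constructor overload may differ between releases:

```java
import com.google.android.exoplayer2.audio.AudioCapabilities;
import com.google.android.exoplayer2.audio.MediaCodecAudioRenderer;
import com.google.android.exoplayer2.ext.gvr.GvrAudioProcessor;
import com.google.android.exoplayer2.mediacodec.MediaCodecSelector;

GvrAudioProcessor gvrAudioProcessor = new GvrAudioProcessor();

// Pass the processor as one of the audio renderer's AudioProcessors.
MediaCodecAudioRenderer audioRenderer =
    new MediaCodecAudioRenderer(
        MediaCodecSelector.DEFAULT,
        /* drmSessionManager= */ null,
        /* playClearSamplesWithoutKeys= */ true,
        eventHandler,               // your playback Handler
        audioRendererEventListener, // your AudioRendererEventListener
        AudioCapabilities.getCapabilities(context),
        gvrAudioProcessor);
```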
To provide an immersive experience, it’s necessary to account for changes in the user’s head orientation. This is achieved by calling GvrAudioProcessor.updateOrientation, passing a quaternion that specifies the current rotation. For example, if you’re implementing a GvrView.Renderer or StereoRenderer, you can update the orientation in onDrawFrame or onNewFrame respectively:
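A sketch of the StereoRenderer case. Note that HeadTransform.getQuaternion fills the array in (x, y, z, w) order, while updateOrientation expects (w, x, y, z), so the components are reordered; `gvrAudioProcessor` is assumed to be the processor you passed to the player:

```java
import com.google.vr.sdk.base.HeadTransform;

@Override
public void onNewFrame(HeadTransform headTransform) {
  // Read the current head rotation as a quaternion in (x, y, z, w) order.
  float[] quaternion = new float[4];
  headTransform.getQuaternion(quaternion, 0);
  // updateOrientation takes the components in (w, x, y, z) order.
  gvrAudioProcessor.updateOrientation(
      quaternion[3], quaternion[0], quaternion[1], quaternion[2]);
}
```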
For information on supported ambisonic soundfield formats, consult the documentation on GvrAudioSurround.SurroundFormat.
We hope that the new extension makes it easy to add spatial audio support to your application. Please let us know how you get on in the comments below, and report any issues on our issue tracker. The Google VR SDK has its own issue tracker for VR-specific issues. We look forward to extending the ExoPlayer GVR extension with new functionality in the future.