Optimizing Audio for VR in Unity
After a few hours of research and testing, here is what I learned about optimizing audio performance in Unity, assuming you’re building for PC (sorry, mobile folks).
Audio Import Settings
- Set “Decompress On Load”. Since we’re targeting PC, you have plenty of RAM to work with (8GB and up!), and this setting ensures you don’t waste precious runtime CPU decompressing audio files.
- Set the compression format to “PCM”, and preferably import from a WAV file. PCM uses more RAM than the other formats, but it uses the least CPU at runtime while offering the highest audio quality. The other formats can work situationally, but when memory isn’t a concern, PCM gives you the best combination of runtime speed and quality.
- Make sure “Preload Audio Data” is checked, which will load all your in-scene audio files when your game/experience first loads.
- If your files are not already mono, check “Force to Mono”. Stereo files cannot be properly spatialized, and they take up more memory.
- The Sample Rate setting will depend on your audio clips. Be aware that if you use the “Optimize Sample Rate” option, the volume of your clips will likely be lowered.
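To avoid clicking through the Inspector for every clip, the settings above can also be applied from an editor script. Here is a minimal sketch using Unity’s AudioImporter API; the menu path and asset path are placeholders, and depending on your Unity version, preloadAudioData may live on the sample settings instead of the importer:

```csharp
using UnityEditor;
using UnityEngine;

public static class AudioImportHelper
{
    // Hypothetical menu item: applies the recommended settings to one clip.
    [MenuItem("Tools/Apply PC Audio Settings")]
    private static void ApplySettings()
    {
        // Placeholder path; point this at your own clip.
        string path = "Assets/Audio/Example.wav";
        var importer = (AudioImporter)AssetImporter.GetAtPath(path);

        var settings = importer.defaultSampleSettings;
        settings.loadType = AudioClipLoadType.DecompressOnLoad;  // trade RAM for runtime CPU
        settings.compressionFormat = AudioCompressionFormat.PCM; // highest quality, fastest playback
        settings.sampleRateSetting = AudioSampleRateSetting.PreserveSampleRate;
        importer.defaultSampleSettings = settings;

        importer.forceToMono = true;      // needed for proper spatialization
        importer.preloadAudioData = true; // load clips at scene load, not on first Play()
        importer.SaveAndReimport();
    }
}
```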
How to Use Audio in your Scene
- Make sure all audio you intend to use is attached to an object in your scene as an Audio Source. You do not want to Instantiate an Audio Source after the Start() function has been called, because then the entire audio clip will have to load during runtime (potentially tanking your framerate).
- Keep direct references to your Audio Sources in your scripts, so that playing a sound is just a call to yourAudioSource.Play(). At first I thought enabling/disabling the Audio Sources would be better, with an OnEnable() function calling yourAudioSource.Play(). However, when I placed 256 prefabs in a scene whose only components were a Transform and an Audio Source, I found the active Audio Sources had no noticeable effect on performance. Keeping the Audio Sources enabled also saves you the performance cost of calling yourAudioSourcePrefab.SetActive(true).
- For objects created at the game’s start that have Audio Sources attached (such as pooled objects like bullets), calling Play() from OnEnable() can be effective (just make sure you call GetComponent&lt;AudioSource&gt;() once when those objects are created, not in OnEnable()).
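Putting the last two bullets together, here is a minimal sketch of both patterns; the class and field names are my own:

```csharp
using UnityEngine;

// Pattern 1: a direct Inspector-assigned reference, so playing a
// sound is a single Play() call with no runtime lookups.
public class FootstepPlayer : MonoBehaviour
{
    [SerializeField] private AudioSource footstepSource;

    public void PlayFootstep()
    {
        footstepSource.Play();
    }
}

// Pattern 2: a pooled object that plays its sound on enable.
// The AudioSource is cached once in Awake(), not in OnEnable().
public class PooledBullet : MonoBehaviour
{
    private AudioSource audioSource;

    private void Awake()
    {
        audioSource = GetComponent<AudioSource>();
    }

    private void OnEnable()
    {
        audioSource.Play();
    }
}
```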
As a general note, you probably never want to use Instantiate or Destroy after your game’s Start() function has run. Both are taxing on the CPU, and Destroy causes Garbage Collection, which will tank your framerate at seemingly random times (i.e. it is very hard to control for). If your game is RAM bound this obviously won’t work, but given the 8GB of RAM in Oculus’ minimum specs (though technically you only have 4GB of the video memory to work with), you should be able to get away with pre-loading everything and never destroying it (only disabling it). I think I can hear mobile devs crying right about now. If your game DOES need more than the available RAM over its lifetime, you can still avoid the performance drain of Instantiate/Destroy by loading the game in levels/sections. Either load the next level asynchronously (I haven’t tested the performance of this) or briefly have the user’s screen go black between levels so that there is no jarring frame lockup (trust me, it physically hurts otherwise).
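To make the “pre-load everything, only disable” approach concrete, here is a minimal object-pool sketch (class names and pool size are my own). Every Instantiate call happens in Start(); after that, objects are only ever enabled and disabled:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BulletPool : MonoBehaviour
{
    [SerializeField] private GameObject bulletPrefab; // assign in the Inspector
    [SerializeField] private int poolSize = 64;

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    private void Start()
    {
        // All Instantiate calls happen once, up front.
        for (int i = 0; i < poolSize; i++)
        {
            GameObject bullet = Instantiate(bulletPrefab);
            bullet.SetActive(false);
            pool.Enqueue(bullet);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Reuse an inactive bullet instead of instantiating a new one.
        GameObject bullet = pool.Dequeue();
        bullet.transform.position = position;
        bullet.SetActive(true);
        return bullet;
    }

    public void Despawn(GameObject bullet)
    {
        // Disable instead of Destroy, so no garbage is generated.
        bullet.SetActive(false);
        pool.Enqueue(bullet);
    }
}
```

Note that a bullet with an Audio Source and an OnEnable() that calls Play() (as described above) will fire its sound automatically each time Spawn() activates it.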
I hope you found this helpful! VR development is hard, but luckily the internet allows us to solve problems collectively.
Note that this does not cover 3D binaural spatialized audio. Although I think many of these principles would carry over, I haven’t tested it in any way. One of the biggest limitations of 3D audio right now is its processor usage; including 3D spatialized audio will mean making performance tradeoffs in other areas.