Changing the game experience with binaural spatial audio

I’m the game director at SignSine, an indie game studio. We are developing PROZE, a first-person atmospheric game.

The characters, the script, and the mysterious surroundings are essential parts of our game. The player’s immersion in the game world matters a great deal to us: we want players to believe in what they see, hear, and experience. Audio is therefore a crucial part of the game, and we want the player to be embraced by the environment in full 360° surround sound.

That’s why we decided to use binaural spatial audio technology. Our goal was to recreate realistic sound behavior in the game’s 3D environment for maximum player immersion.

I’ll get into the technical details below, but first I recommend putting on headphones and checking out our demo video to hear how it works.

We started with the FMOD middleware, the GVR audio plugin, and Unreal Blueprints to place sound sources with attenuation radii in Unreal Engine. We combined a binaural algorithm with sound attenuation and filtering so the player can tell exactly where a sound is coming from, which is essential for VR.
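To make the attenuation-radius idea concrete, here is a minimal sketch of distance-based gain between an inner and an outer radius, similar in spirit to what audio middleware exposes. The radii and the linear falloff curve are illustrative assumptions, not PROZE’s actual settings (middleware usually also offers logarithmic and custom curves):

```cpp
#include <algorithm>
#include <cmath>

// Distance attenuation between an inner radius (full volume) and an
// outer radius (silence). Values in between fall off linearly here;
// real middleware lets you pick log or custom curves instead.
float DistanceGain(float distance, float innerRadius, float outerRadius) {
    if (distance <= innerRadius) return 1.0f;  // inside inner radius: full gain
    if (distance >= outerRadius) return 0.0f;  // beyond outer radius: silent
    float t = (distance - innerRadius) / (outerRadius - innerRadius);
    return 1.0f - t;  // 1 at the inner radius, 0 at the outer radius
}
```

The binaural layer then takes this gain and spatializes the signal per ear on top of it.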

We then pushed further and developed a custom audio occlusion system. It uses raycasts and graph theory from discrete math to modify a sound when it is hidden behind walls and other kinds of obstacles. Put simply, it produces a muffled version of the original sound source, with different filter characteristics depending on what is in the way.
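One common way to apply graph theory to occlusion, sketched here purely as an illustration (the post doesn’t show PROZE’s actual graph), is to treat rooms as nodes and open doorways as edges: the BFS shortest path tells you how many rooms the sound has to cross, which can then drive how strongly it is filtered:

```cpp
#include <queue>
#include <vector>

// Hypothetical propagation graph: rooms are nodes, doorways are edges.
// Returns the minimum number of doorways between two rooms (BFS shortest
// path), or -1 if there is no open path (fully occluded).
int RoomsBetween(const std::vector<std::vector<int>>& adjacency,
                 int sourceRoom, int listenerRoom) {
    std::vector<int> dist(adjacency.size(), -1);
    std::queue<int> frontier;
    dist[sourceRoom] = 0;
    frontier.push(sourceRoom);
    while (!frontier.empty()) {
        int room = frontier.front();
        frontier.pop();
        if (room == listenerRoom) return dist[room];
        for (int next : adjacency[room]) {
            if (dist[next] == -1) {          // not visited yet
                dist[next] = dist[room] + 1;
                frontier.push(next);
            }
        }
    }
    return -1;  // listener's room is unreachable from the source's room
}
```

A larger hop count would map to a lower low-pass cutoff and more gain reduction.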

Our occlusion system casts ten rays from each audio source and affects the left and right ear separately, so the player can tell very precisely what they are hearing, for example when peeking around a corner.
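The per-ear part can be sketched as follows. This is a minimal assumed model, not the shipped code: a fixed bundle of rays is traced from the source toward each ear, and the fraction of blocked rays becomes that ear’s occlusion amount. The raycast itself is stubbed out as a boolean array; in the engine it would be a physics line trace per ray:

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kRaysPerEar = 10;  // matches the ten rays described above

// Fraction of blocked rays for one ear: 0.0 = fully audible, 1.0 = fully
// occluded. Each ear gets its own bundle, which is what lets a sound open
// up ear-by-ear as you peek around a corner.
float EarOcclusion(const std::array<bool, kRaysPerEar>& rayBlocked) {
    int blocked = 0;
    for (bool hit : rayBlocked)
        if (hit) ++blocked;
    return static_cast<float>(blocked) / static_cast<float>(kRaysPerEar);
}
```

When you peek out, rays toward the near ear start clearing first, so that ear’s occlusion drops before the other’s.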

The next step toward a realistic experience is material dependency. Sound occlusion should depend on the type and thickness of the material you happen to be hiding behind, so the player can clearly hear the difference in absorption between, say, a wooden door and a concrete wall.
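A simple way to model this, sketched here with made-up coefficients rather than measured values from our project, is to give each material an absorption coefficient per unit of thickness and let the transmitted energy fall off exponentially with thickness:

```cpp
#include <cmath>

enum class Material { Wood, Concrete, Glass };

// Illustrative absorption per centimetre of thickness; the numbers are
// placeholders for the sketch, not tuned game values.
float AbsorptionPerCm(Material m) {
    switch (m) {
        case Material::Wood:     return 0.10f;
        case Material::Concrete: return 0.50f;
        case Material::Glass:    return 0.05f;
    }
    return 0.0f;
}

// Fraction of sound energy that passes through an obstacle: thicker and
// denser materials transmit exponentially less.
float Transmission(Material m, float thicknessCm) {
    return std::exp(-AbsorptionPerCm(m) * thicknessCm);
}
```

With numbers like these, a 5 cm wooden door still transmits most of the energy while a concrete wall of the same thickness muffles the sound heavily, which is exactly the contrast the player should hear.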

As a result, our algorithm works in real time and is fully interactive. It prioritizes the events that need to be heard at any given moment in the current scene, which keeps it CPU-friendly.
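One way such prioritization can work, shown here as an assumed sketch (the scoring and budget are illustrative, not our actual scheme), is to rank events by how audible they currently are and run the expensive ray-based update only for the top few each frame:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct AudioEvent {
    int id;
    float gain;  // current audibility, 0..1 (stand-in for a real importance score)
};

// Returns the ids of the most audible events, capped by a per-frame budget;
// only these would get the full ray-based occlusion update this frame.
std::vector<int> EventsToUpdate(std::vector<AudioEvent> events,
                                std::size_t budget) {
    std::sort(events.begin(), events.end(),
              [](const AudioEvent& a, const AudioEvent& b) {
                  return a.gain > b.gain;  // most audible first
              });
    if (events.size() > budget) events.resize(budget);
    std::vector<int> ids;
    for (const auto& e : events) ids.push_back(e.id);
    return ids;
}
```

Quiet or distant events simply keep their last computed occlusion until they climb back into the budget.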

We are continuing our work on this technology, as there is so much more of the game experience that audio can convey.

Thank you very much for reading; you are welcome to download the build and test it yourself.


Feel free to ask any questions. Have a good one!