How to build an Interstellar VR CommLink with Google Daydream and WebAudio
Part 1: Making NASA-style Radio Transmissions with Tuna.js and the WebAudio API
Recently, I had the chance to design a small audio module for a space-themed WebVR simulation.
Many people don’t know that browsers support a remarkable set of tools for creating and transforming audio files.
We wanted our space simulation to be as immersive as possible. The user experience was all about communication. We needed immersive audio to match our VR.
Let’s see how we can create immersive audio for our interstellar radio transmissions, using only the browser!
Here’s what our original, clean audio sounds like.
Here’s what it will sound like when we’re done!
In this article, we’ll focus on getting the audio right. Later I’ll share how to use the Google Daydream controller to create a great, immersive experience so players feel like they’re using a real sci-fi Commlink!
First, we need the audio to feel like it traveled a long distance. You’re in space, after all.
An obvious choice is WebRTC (real time communication), which I tried at first.
WebRTC can be fussy. It also felt too immediate for a radio transmission. Once connected, the browsers would send a continuous stream of audio to the other user.
The messages weren’t discrete. They didn’t feel like they bounced off a satellite.
To make discrete messages, we added a push-to-talk functionality to the Google Daydream controller.
The user, holding the controller, would push and hold a button to record a message. Once they released the button, the message was finalized and sent to the other user. It made the controller feel like a sci-fi commlink. WebRTC is great for streaming, but was a poor match for push-to-talk.
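As a rough sketch of how push-to-talk can work in the browser: record while the button is held, and finalize the message as a single blob on release. The function names here (`startTransmission`, `stopTransmission`, `wirePushToTalk`) are illustrative, not the project's actual code, and the controller button is surfaced as ordinary pointer events.

```javascript
// Hypothetical push-to-talk sketch using the browser's MediaRecorder API.
let recorder = null;
let chunks = [];

async function startTransmission() {
  // Ask for the microphone and start capturing while the button is held.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  chunks = [];
  recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.start();
}

function stopTransmission(onMessage) {
  // Releasing the button finalizes the message as one discrete Blob.
  recorder.onstop = () => onMessage(new Blob(chunks, { type: 'audio/webm' }));
  recorder.stop();
}

// Record on press, send on release — the "commlink" feel.
function wirePushToTalk(button, onMessage) {
  button.addEventListener('pointerdown', startTransmission);
  button.addEventListener('pointerup', () => stopTransmission(onMessage));
}
```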
Instead of WebRTC, we used a database solution: Google’s Firebase. Firebase is a fast, real-time database that you can monitor from a Google dashboard.
Each time a user recorded a message, we would transform it to a binary string and send it up to Firebase. The other user had a Firebase listener checking if any new messages appeared in the database.
Once the new message was in the cloud, Firebase sent it to the other user automatically. Although this is probably not a common use for Firebase, we were satisfied with the results.
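A minimal sketch of this handoff, assuming the v8-style Firebase Realtime Database API and a hypothetical `messages` path — the helper names and path are illustrative:

```javascript
// Encode recorded bytes as a string Firebase can store.
function bytesToBinaryString(bytes) {
  let s = '';
  for (const b of bytes) s += String.fromCharCode(b);
  return s;
}

function binaryStringToBytes(s) {
  const bytes = new Uint8Array(s.length);
  for (let i = 0; i < s.length; i++) bytes[i] = s.charCodeAt(i);
  return bytes;
}

// Sender: push each finished message up to the database.
function sendMessage(db, bytes) {
  db.ref('messages').push({
    data: btoa(bytesToBinaryString(bytes)),
    ts: Date.now()
  });
}

// Receiver: the listener fires whenever a new message child appears.
function listenForMessages(db, onAudioBytes) {
  db.ref('messages').on('child_added', (snapshot) => {
    onAudioBytes(binaryStringToBytes(atob(snapshot.val().data)));
  });
}
```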
Sending audio this way provided two cool advantages:
- It’s slower than WebRTC, so it organically introduced a delay between the moment one user recorded the audio and the moment the other received it.
- Firebase keeps a record of all messages sent by both users as small, easy-to-use audio files rather than one giant recording of a continuous WebRTC stream. Since players recorded with push-to-talk, all the audio on our server was signal, with none of the dead air or noise a WebRTC stream would capture.
Authentic Radio Audio
For immersion, we can’t have the crystal-clear audio that WebAudio typically provides. We wanted our radio transmissions to sound like the classic audio from the Apollo space missions.
To get this effect, I used Tuna.js, an audio processing library that wraps the WebAudio API. The vanilla WebAudio API has everything you need to transform audio: overdrive, filters, and tons of other effects. But libraries like Tuna.js make your life a lot easier!
The WebAudio API uses a series of connected nodes to process audio. You can define the nodes you need, then chain them together between the audio source and the audio context that plays the processed audio. Here’s a great tutorial for how to get started.
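The node-chain pattern in vanilla WebAudio looks something like this minimal sketch — a single bandpass filter wired between a source and the destination (the frequency value is illustrative):

```javascript
// Minimal WebAudio node chain: source -> processing node -> destination.
function playThroughFilter(audioContext, audioBuffer) {
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;

  // Any number of processing nodes can sit between source and destination.
  const filter = audioContext.createBiquadFilter();
  filter.type = 'bandpass';
  filter.frequency.value = 1000; // center frequency in Hz (illustrative)

  source.connect(filter);
  filter.connect(audioContext.destination);
  source.start();
}
```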
I used two node types: a bandpass filter and an overdrive.
A bandpass filter removes frequencies above and below the band you define. This ‘hollows out’ the sound, making it more closely resemble a radio’s limited range. Then I added light distortion to the audio with an overdrive node. This made the audio grainy and crunchy.
After overdriving the audio, the signal was a bit too loud. I used a gain node to lower the volume.
Here’s the module to process the audio.
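The original module isn’t reproduced here, but a Tuna.js chain along these lines would produce the effect described above. This sketch assumes `Tuna` is loaded globally (e.g. via a script tag), and all parameter values are illustrative starting points, not the project’s actual settings:

```javascript
// Sketch of the radio-processing chain: bandpass -> overdrive -> gain.
function buildRadioChain(audioContext) {
  const tuna = new Tuna(audioContext);

  // Bandpass filter: keep only the narrow band a radio would carry.
  const filter = new tuna.Filter({
    frequency: 1000,       // center of the band, in Hz (illustrative)
    Q: 1,
    filterType: 'bandpass',
    bypass: 0
  });

  // Overdrive: light distortion makes the audio grainy and crunchy.
  const overdrive = new tuna.Overdrive({
    drive: 0.4,
    curveAmount: 0.6,
    algorithmIndex: 2,
    bypass: 0
  });

  // Gain: pull the overdriven signal back down to a comfortable level.
  const gain = audioContext.createGain();
  gain.gain.value = 0.7;

  // Chain the nodes together, ending at the speakers.
  filter.connect(overdrive);
  overdrive.connect(gain);
  gain.connect(audioContext.destination);

  return filter; // connect your audio source to this entry point
}
```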
The audio now sounds great, so we’re ready for final touches. Let’s add the famous Quindar beep at the end of each radio transmission to signal that the message has ended.
This function handles audio playback for our radio transmissions. It’s composed of sub-functions that do the hard work.
We convert the raw data from Firebase, then process the audio using the chain of Tuna.js/WebAudio nodes that we defined. Then we play the transformed audio back using our audioContext.
When the audio source has finished playing, our event listener (‘onended’) triggers a separate audio node in the DOM. This audio node holds a small file with NASA’s famous Quindar beep. When our message has finished playing, the Quindar beep will sound. It’s show time!
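The playback path described above can be sketched as follows. This assumes a chain entry node like the one `buildRadioChain`-style code would return, and an `<audio id="quindar">` element in the DOM holding the beep file — both names are illustrative:

```javascript
// Sketch of playback: decode -> route through the effects chain -> beep on end.
async function playTransmission(audioContext, rawArrayBuffer, chainEntryNode) {
  // Decode the raw bytes from Firebase into an AudioBuffer.
  const audioBuffer = await audioContext.decodeAudioData(rawArrayBuffer);

  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(chainEntryNode); // filter -> overdrive -> gain -> speakers

  // When the message ends, fire the Quindar beep from a DOM audio node.
  source.onended = () => document.getElementById('quindar').play();

  source.start();
}
```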
That’s about it for this step. Let me know if you have any questions.
Originally published at gist.github.com.