Web Audio API

It’s Native?!

Audio synthesis is an art and a science, with infinite breadth and depth. Needless to say, when I started in on my first JavaScript project working with the Web Audio API, I was ready to get knee-deep in JSON and parse every object and function out of a giant nested data structure.

But alas, I pulled up the Web Audio API documentation on the Mozilla Developer Network and there was not a single mention of AJAX requests or JSON; there wasn't even a single mention of Promises!


~20 lines of code yield a deep bass sound on the page (in this case a 50Hz sine wave; put on some headphones, 'cause your tinny laptop speakers won't cut it). With the same lines of code and a few edits, I could just as well render any frequency in a number of waveforms (see lines 11–13 above).
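
If the embedded snippet doesn't render for you, here's a minimal sketch of the kind of code in question (my own reconstruction, not the exact gist): create an audio context, spin up an oscillator at 50Hz, route it through a gain node, and connect it to the speakers.

```javascript
// A sketch, not the post's exact snippet: play a 50Hz sine wave.
const context = new (window.AudioContext || window.webkitAudioContext)();

const oscillator = context.createOscillator();
oscillator.type = 'sine';            // try 'square', 'sawtooth', or 'triangle'
oscillator.frequency.value = 50;     // deep bass; headphones recommended

const gain = context.createGain();
gain.gain.value = 0.5;               // keep the volume reasonable

// source -> gain -> speakers
oscillator.connect(gain);
gain.connect(context.destination);

oscillator.start();
// oscillator.stop(context.currentTime + 2); // uncomment to stop after 2s
```

Changing `oscillator.type` and `oscillator.frequency.value` is all it takes to render a different waveform at a different pitch.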

But that’s beside the point:

Audio synthesis is native in the browser?! Holy $#*%!

The Web Audio API is that easy to get started with, and if you are using a modern browser it should work out of the box (if you’re an Internet Explorer user, shame on you). In fact, if you enter the above 20 lines of code into your web console, it should just work.

I’ve put up a file with all of the text HERE, so you can just copy, paste in your console, and hit enter to unleash the bassy power of your browser!

So how does this work?

The Web Audio API allows you to create an audio context natively in JavaScript; within that audio context, you work with a nicely encapsulated system of nodes.

Inputs, or sources, include OscillatorNodes, AudioBufferSourceNodes, and MediaElementAudioSourceNodes; these are instantiated and can then be connected to any number of Effects nodes. The GainNode is a very simple Effects node; it applies gain to adjust the volume of the input sources passed through it. Other Effects nodes include (but are not limited to):

  • WaveShaperNodes
  • DynamicsCompressorNodes
  • DelayNodes
  • BiquadFilterNodes
  • StereoPannerNodes

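As a sketch of how one of these slots into the chain, here's a BiquadFilterNode acting as a low-pass filter between a source and the speakers (the node names come straight from the API; the specific values are just my illustration):

```javascript
const context = new AudioContext();

const oscillator = context.createOscillator();
oscillator.type = 'sawtooth';        // harmonically rich, so the filter is audible
oscillator.frequency.value = 110;

// A BiquadFilterNode configured as a low-pass filter:
// frequencies above ~400Hz are attenuated.
const filter = context.createBiquadFilter();
filter.type = 'lowpass';
filter.frequency.value = 400;
filter.Q.value = 1;

// source -> effect -> destination
oscillator.connect(filter);
filter.connect(context.destination);
oscillator.start();
```

Swapping in a DelayNode or DynamicsCompressorNode is just a matter of creating a different node and connecting it in the same spot.
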
Awesome! A virtual outboard rig is right at our fingertips to shape the sounds we’ve created. Each of these nodes affects the sound in a different way, which I won’t go into here; take a sound engineering 101 course or google them for more info. For the time being, the Mozilla Developer Network page for the Web Audio API describes each node type in fairly simple terms. Here’s a more complicated mapping, just to demonstrate the tip of the iceberg…

In addition to the Effects Nodes that directly filter the audio inputs, there are some visualization and data gathering nodes that are also plug-n-play:
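
For example, an AnalyserNode passes audio through unchanged while exposing the data you need to draw a visualization. A sketch (the canvas-drawing part is elided):

```javascript
const context = new AudioContext();
const oscillator = context.createOscillator();

// The AnalyserNode doesn't alter the signal; it just exposes
// time- and frequency-domain snapshots of whatever flows through it.
const analyser = context.createAnalyser();
analyser.fftSize = 2048;

oscillator.connect(analyser);
analyser.connect(context.destination);
oscillator.start();

const data = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(data); // fill `data` with the current spectrum
  // ...render `data` to a canvas here...
}
draw();
```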

The above is a live demo of the analyser that you can find HERE (make sure to use Chrome for that one). You can drag and drop mp3 files and listen to them along with a sweet visualization of the dynamic range (above) and waveform (below). The author of that little app writes about it in a post HERE.

Mozilla also shows off visual audio analysis with their Voice-Change-O-Matic (you’ll also need Chrome for that one), which instead takes in audio from your computer’s microphone, passes it through an audio effects filter of your choosing, and shows you the output!
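
The microphone side of that works roughly like this (a sketch under the modern `mediaDevices` API; the real app adds its own choice of effects and visualization):

```javascript
const context = new AudioContext();

// Ask the user for microphone access, then feed the resulting
// MediaStream into the audio graph as a source node.
navigator.mediaDevices.getUserMedia({ audio: true })
  .then((stream) => {
    const source = context.createMediaStreamSource(stream);
    const filter = context.createBiquadFilter(); // stand-in for the chosen effect
    source.connect(filter);
    filter.connect(context.destination);
  })
  .catch((err) => console.error('Microphone access denied:', err));
```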

All in all, the Web Audio API is an incredibly powerful instrument (pun intended) for creating audio right in the browser. It can be used for audio synthesis, full-featured web players, audio visualization, interfaces that respond to user input, or even dynamic video game sound design; the possibilities are endless!

The most incredible part is that it’s already in your browser (sorry, IE users), and you can get started immediately without any preparation or setup. I’m working with the Web Audio API in React, and there is even a fully implemented React abstraction of it by Formidable Labs that you can find HERE. Enjoy!