The Simplest Possible WebAudio Synth
For web and javascript coders with a basic understanding of stuff.
I don’t know about you, but whenever I start out learning a new area of technology I like to see the simplest possible example of how to use it, and a clear explanation of how it works. That’s what this is: how to build a software synthesiser in a web page.
Unless you pay close attention to the evolution of web tech you may not even have noticed an interesting addition to the standard browser toolset under the name of WebAudio API — but it’s here and available on all the mainstream browsers except Internet Explorer. (IE is in maintenance mode and new features are not being added.)
WebAudio API enables you to build surprisingly sophisticated audio applications right inside the web page, in javascript. And you can get cracking in just a few lines of code.
A Basic Feel
You can think of the API as being a series of components that you plug together, a bit like the components of an old analogue synth that plug together with signal cables. Or if you prefer alpha-tech speak — a directed node graph.
Let’s start with a simple web page:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Simple Demo Synth</title>
  </head>
  <body>
    <button type="button" id="note-on">note on</button>
    <button type="button" id="note-off">note off</button>
    <script>
    </script>
  </body>
</html>
It’s just got two buttons and a place to put some code.
As we said, all we are really doing is connecting some nodes together to make the world’s simplest synth. In our case we want three nodes:
- an oscillator node
- a gain node
- an output node
The oscillator node produces an audible signal. The gain node makes it louder or quieter. The output node sends the signal to the computer’s audio output.
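That wiring is almost the whole job. As a taste of how little it takes, here is a sketch of the three-node chain wrapped in a helper. (`buildGraph` is just an illustrative name, not part of the synth we build below; in the browser, `ctx` would be a real AudioContext.)

```javascript
// Sketch: wire oscillator -> gain -> destination on an AudioContext.
// createOscillator, createGain, connect and destination are all standard
// WebAudio API members of AudioContext and its nodes.
function buildGraph(ctx) {
  const oscillator = ctx.createOscillator(); // the signal source
  const gain = ctx.createGain();             // the volume control
  oscillator.connect(gain);                  // oscillator feeds the gain node
  gain.connect(ctx.destination);             // gain feeds the speakers
  return { oscillator, gain };
}
```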
All of these things can be created from an AudioContext object. The AudioContext class will be sitting there on the javascript window object, ready for you to use:
const AudioContext = window.AudioContext || window.webkitAudioContext;
As per usual, webkit has to be a bit awkward. By the way, you will only find AudioContext inside up to date browsers, so we will just assume that’s where we are and write modern javascript with const and let and class and all the rest of that good stuff. If you were writing for the general public you probably wouldn’t do exactly that, because you would want to fail gracefully on older browsers.
We create a SimpleSynth class which takes an instance of the AudioContext as an initialisation parameter.
The whole javascript code looks like this:
class SimpleSynth {
    constructor(audioContext) {
        this.audioContext = audioContext;
        this.oscillator = audioContext.createOscillator();
        this.gainNode = audioContext.createGain();
        this.oscillator.connect(this.gainNode);
        this.gainNode.connect(audioContext.destination);
        this.gainNode.gain.value = 0.0;
        this.oscillator.frequency.value = 440.0;
        this.oscillator.start(0);
    }
    noteOn() {
        // Some browsers create the context suspended until a user gesture,
        // so resume it here to make sure the first click makes a sound.
        this.audioContext.resume();
        this.gainNode.gain.value = 0.6;
    }
    noteOff() {
        this.gainNode.gain.value = 0.0;
    }
}
function createSynth() {
    const AudioContext = window.AudioContext || window.webkitAudioContext;
    return new SimpleSynth(new AudioContext());
}
function createController() {
    document.getElementById("note-on").onmousedown = (event) => synth.noteOn();
    document.getElementById("note-off").onmousedown = (event) => synth.noteOff();
}
let synth = createSynth();
createController();
Reading the class constructor, you will see we only have to create the oscillator node and the gain node; the AudioContext already has an output, i.e. the destination, ready for us to use.
The oscillator node gives us a default waveform of a sine wave. If you’ve never delved into audio processing before you may wonder why the default is a trigonometric function. Well, I’m not going to give the explanation here, but if I just said it’s the simplest possible sound, though a bit complicated to explain why, you’d trust me, right?
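If you want to peek behind the curtain anyway: a sine oscillator at frequency f conceptually just emits the samples sin(2πft). A small sketch, separate from the synth code, of computing those samples in plain javascript:

```javascript
// Generate `count` samples of a sine wave at `frequency` Hz, sampled at
// `sampleRate` Hz. This is (conceptually) what an OscillatorNode computes
// for you on the audio thread.
function sineSamples(frequency, sampleRate, count) {
  const samples = [];
  for (let i = 0; i < count; i++) {
    samples.push(Math.sin(2 * Math.PI * frequency * i / sampleRate));
  }
  return samples; // every value lies in [-1, 1]
}
```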
The oscillator’s frequency actually defaults to 440.0 Hz, but we set it explicitly anyway to make the magic number visible. 440 Hz, or cycles per second, is the defined value for concert A, five white keys to the right of middle C on a piano keyboard.
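Concert A is also a handy reference point for every other note. In equal temperament, each semitone up multiplies the frequency by the twelfth root of two, so you can compute any note’s frequency from its distance in semitones from A. A small sketch (`noteFrequency` is just an illustrative helper, not part of the synth):

```javascript
// Frequency of the note `semitones` above (positive) or below (negative)
// concert A, in equal temperament: each semitone multiplies by 2^(1/12).
function noteFrequency(semitones) {
  return 440.0 * Math.pow(2, semitones / 12);
}
// noteFrequency(0)  -> 440      (concert A)
// noteFrequency(12) -> 880      (A an octave up)
// noteFrequency(-9) -> ~261.63  (middle C)
```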
The gain node’s gain runs from 0.0 (silence) to 1.0 (full volume) for our purposes, and it defaults to 1.0, so we start off with it turned all the way down to 0 so the synth is silent until a note is played.
And that’s about it. We just hook up the buttons to two methods on our SimpleSynth class. When we turn the note on we set the gain to 0.6, because turning it all the way up to 1.0 would probably be a bit too loud. When we turn the note off we set it back to 0.
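One refinement worth knowing about: jumping gain.value instantly like this can produce an audible click. AudioParam has scheduling methods such as setTargetAtTime that glide to the new value instead. A hedged sketch of how you might wrap that (`rampGain` is a hypothetical helper; `gainParam` stands in for something like `gainNode.gain`):

```javascript
// Smoothly ramp a gain AudioParam toward `target` to avoid clicks.
// setTargetAtTime(target, startTime, timeConstant) is a standard WebAudio
// AudioParam method; a ~20ms time constant is a reasonable starting point.
function rampGain(gainParam, target, startTime, timeConstant = 0.02) {
  gainParam.setTargetAtTime(target, startTime, timeConstant);
}
// In the synth you might call:
//   rampGain(this.gainNode.gain, 0.6, this.audioContext.currentTime);
```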
You can see it here. And if you want to play with the code just inspect the web page or grab it with a curl command. (If you’re new to coding and not quite sure what those things are — go and find out, they’re useful skills).
If you want to see something of the full potential of WebAudio API I recommend looking up this great talk by Emil Loer. The full source code for his js303 synth is available on GitHub.
Obviously all we wanted to do with this little project was take a really quick look and see what the tech is all about. Tucked away inside the API are classes and methods to play back samples, stream audio, filter, convolve, render and analyse complex waveforms: a feature packed toolbox.
Amazing as the WebAudio API is, it’s only the beginning of a revolution in web audio. The Chrome team currently have an experimental feature called Audio Worklet which enables you to run code directly on the audio rendering thread — and if those just sound like big computer words to you at the moment, don’t worry, it just means you can run your audio code faster and more reliably.
Even better you will be able to load WebAssembly compiled modules into those Worklets and get performance pretty much indistinguishable from that of native audio applications.
Great things are coming!