Sound generation with JavaScript

A simple introduction to the Web Audio API and sound generation in the browser

Recently I’ve worked on a web experiment using the Web Audio API for sound generation. The initial goal of the experiment was to generate everything from code, because it’s fun and because I wanted to reduce the size of the app as much as possible, to prevent boring loading screens.

Another experiment with sound generation is this one, where the user can play various notes with the keyboard. The notes’ frequencies are then mapped onto the circle.

In this post I will talk about the basic method I’ve used to generate sound.

Web Audio API

The Web Audio API is a specification for sound manipulation, implemented in all major browsers (even though its behavior is still inconsistent across them). It works alongside the HTML5 <audio> element and is incredibly powerful. You can read more here [1] [2].

The basic idea is to have a series of AudioNodes connected together in a graph. This graph must have a sound source (where the sound comes from), a sound destination (where the sound is played) and something in the middle for sound manipulation. You can see all the available node interfaces at this link.
In this post we will focus on AudioContext, OscillatorNode and GainNode.

What we want to do is use the OscillatorNode to generate waveforms and then manipulate them with our GainNode, in order to obtain some decent sounds.

The AudioContext is the base on which every audio graph is built; you can read the docs for more info.

Generating sounds

Let’s create our AudioContext

const context = new AudioContext();

We can then create our OscillatorNode

const oscillator = context.createOscillator();
oscillator.type = "sine";
oscillator.frequency.value = 196;

The type property indicates the waveform we want. We’re keeping it simple by using a sine wave, but it’s possible to use different waveforms, such as triangle, square, sawtooth and even custom waveforms.
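Custom waveforms are built from Fourier coefficients with context.createPeriodicWave and applied with oscillator.setPeriodicWave. A minimal sketch, assuming a context and oscillator already exist (the helper name applyCustomWave is my own, not part of the API):

```javascript
// Build a custom waveform from Fourier coefficients and apply it.
// real/imag hold the cosine/sine coefficients: index 0 is the DC
// offset, index 1 the fundamental, index 2 the second harmonic, etc.
function applyCustomWave(context, oscillator) {
  const real = new Float32Array([0, 0, 0]);
  const imag = new Float32Array([0, 1, 0.5]); // fundamental + half-strength 2nd harmonic
  const wave = context.createPeriodicWave(real, imag);
  oscillator.setPeriodicWave(wave);
}

// The built-in shapes are set the same way as "sine":
// oscillator.type = "square"; // or "triangle", "sawtooth"
```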

The frequency value is what defines the actual note that will be played. You can pick frequency values from one of these tables: [1] [2]
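If you’d rather compute frequencies than look them up, the equal-temperament formula gives the frequency of any MIDI note, with A4 (note 69) tuned to 440 Hz. A small sketch (the function name is mine):

```javascript
// Convert a MIDI note number to its frequency in Hz, assuming
// twelve-tone equal temperament with A4 (MIDI note 69) at 440 Hz.
function midiToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// midiToFrequency(69) -> 440 (A4)
// midiToFrequency(55) -> ~196 (G3, the value used above)
```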

Create the GainNode

const gainNode = context.createGain();

Connect everything

oscillator.connect(gainNode);
gainNode.connect(context.destination);
The oscillator is our audio source and context.destination is the audio-rendering device (e.g. your speakers).

Play the sound

oscillator.start(context.currentTime);
Right now you can only hear a constant “beep”.

Generating notes

In order to transform this sound into a note, you need to gradually silence it.

const duration = 2;

gainNode.gain.linearRampToValueAtTime(0.0001, context.currentTime + duration);
oscillator.stop(context.currentTime + duration);

gainNode.gain.linearRampToValueAtTime is a handy function that linearly decreases the volume of the sound. In this case it brings the value of the gain to 0.0001, linearly, over duration seconds. Note that the ramp starts from the previously scheduled value, so it’s a good idea to schedule a starting value first, for example with gainNode.gain.setValueAtTime(1, context.currentTime).
To prevent distortion in the sound, I’ve discovered that it’s useful to stop the oscillator after the sound has been silenced.

Here you can see (and hear) a simple example in action:


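Putting all the steps together, a single function can create the graph, play the note and fade it out. This is a minimal sketch, not the exact code of the example (the playNote helper and its signature are my own):

```javascript
// Create an oscillator -> gain -> destination graph, play one note
// at the given frequency and fade it out over `duration` seconds.
function playNote(context, frequency, duration) {
  const oscillator = context.createOscillator();
  const gainNode = context.createGain();

  oscillator.type = "sine";
  oscillator.frequency.value = frequency;

  // Source -> gain -> speakers
  oscillator.connect(gainNode);
  gainNode.connect(context.destination);

  const now = context.currentTime;
  gainNode.gain.setValueAtTime(1, now);
  gainNode.gain.linearRampToValueAtTime(0.0001, now + duration);

  oscillator.start(now);
  oscillator.stop(now + duration); // stop only after the fade-out
}

// Usage in the browser:
// const context = new AudioContext();
// playNote(context, 196, 2); // G3 for two seconds
```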
This is a basic approach to sound generation, but you can build a lot on top of it.
As you can hear in this experiment, it is possible to obtain some fancy effects just by playing around with the nodes in your audio graph. The source code of this experiment is open source and can be found here. And here is the function responsible for sound generation.


Written by

Pierfrancesco Soffritti - Software engineer @Google
