Web Audio API: Fire Synth
Billy Joel started the fire, but I shall program it
In the past two weeks, I went to a JavaScript meetup where the subject of one of the talks was the Web Audio API. I also went to see Billy Joel at Madison Square Garden. Of course, I was only familiar with “We didn’t start the fire”, and the Piano Man decided not to play it that night, so I built a keyboard right in the browser…and it is fire.
The Web Audio API is built into your browser and accessed through JavaScript. That means you can use the API to produce all kinds of sounds. If you do not know me, or you are just now reading this blog, I love music and I love programming. So discovering the Web Audio API was a very exciting day in my life. It was like…..what’s the word…..HOT FIRE.
Getting a basic sound from the Web Audio API is fairly simple. If you have had any experience with electronic sounds, audio engineering, or playing an instrument through an amp, the logic of connecting the different audio components together is similar and can be done in a few steps. I’ll start with a walkthrough of rigging the audio elements up in three simple steps and then walk through the implementation of the Fire Synth in the CodePen above.
PART I: Wire it up
1. Create a new AudioContext
First, create a new AudioContext Object and set it to a variable. This is a native JavaScript object and it is simple to create.
const audioCtx = new AudioContext()
AudioContext inherits its prototype from BaseAudioContext. Methods on the BaseAudioContext include createDelay, createGain, createOscillator, and many others. Musicians, do any of those look familiar? Take a second, open your console, and type the following code….I’ll wait…:
const audioCtx = new AudioContext()
Object.getPrototypeOf(audioCtx).__proto__
Wow. There are a lot of audio manipulation methods on that bad boy! Think of the possibilities. Since we will need volume, let’s create a gain.
2. Create a gain node and wire it up!
Next, we create a gain node and wire it up to our audioCtx.
const gain = audioCtx.createGain()
Then, connect the new gain node to our audio context via the connect method. The connect method is called on the gain node and receives the destination of an AudioContext as its argument.
gain.connect(audioCtx.destination)
The default destination property of the audio context is the audio output your browser plays through by default, usually your speakers.
Cool! gain is now hooked up to our audioCtx.
3. Create an oscillator node and wire it up!
Next, we need to create an oscillator node to generate a wave and wire it up to our gain node. Remember our BaseAudioContext prototype from before? Well, it also has a createOscillator method on it. Let’s create one:
const osc = audioCtx.createOscillator()
Sick. We are almost there. Our new oscillator node has a type property that specifies the oscillator’s wave shape. The default shape is a sine wave. It also has a frequency property that specifies the frequency of the oscillator in hertz (Wikipedia on Hertz). We will use this later to play musical pitches in the Western European tradition. For more about oscillator wave shapes, check out this resource: Waveforms on Wikipedia
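As an aside (my addition, not part of the original walkthrough), the hertz values for equal-tempered pitches can all be computed from A4 = 440 Hz, which is where the frequency numbers we use later come from:

```javascript
// Equal-temperament frequency helper (illustrative; not part of the synth).
// n is the number of semitones above A4 (440 Hz); negative n goes below A4.
const noteFrequency = (n) => Math.round(440 * 2 ** (n / 12) * 100) / 100

noteFrequency(0)  // A4 -> 440
noteFrequency(-9) // C4 -> 261.63
```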
Now it’s time to wire it up! We connect osc to gain with the connect method once again.
osc.connect(gain)
Good news, gang! We are wired up. Let’s fire this bad boy up by calling the start method on osc. If you have been following along in your console, that is awesome. Type osc.start() in your console. If you haven’t, copy and paste this code into your browser:…..WAIT! Before you do, WARNING! DANGER! If you are at work, put on some headphones, quick! Also, shame on you for not working! Unless you are a dev, then keep working hard! Ok. COPY PASTE:
const audioCtx = new AudioContext()
const gain = audioCtx.createGain()
gain.connect(audioCtx.destination)
const osc = audioCtx.createOscillator()
osc.connect(gain)
osc.start()
WE HAVE SOUND!!!!!!!!!!
…. ….. ….. ok…..OK….MAKE IT STOP!
osc.stop()
The stop method called on our oscillator will stop it, and a stopped oscillator can never be started again. RIP
PART II: FIRE SYNTH in the browser
Now that we can make sound and stop sound in the browser, let’s build a piano keyboard as an interface to create musical pitches. We’ll do this in a couple of steps. First, we will build the keyboard using HTML and CSS. Second, we will set up the synth playing logic with event listeners and handlers to simulate playing a piano on the computer keyboard with JavaScript.
1. Build a Keyboard for a UI
The first step to playing musical pitches in the browser is to build an interface. Let’s make a keyboard using HTML, CSS, and JavaScript.
First, create a <ul> element with a class name of ‘keyboard-container’ in the index.html. This will house our keys on the DOM. Next, we will populate the keyboard container. In our index.js file, let’s build an array of objects with our key data, which I am reusing from a React version of this app to save us some HTML coding.
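The full array is embedded in the CodePen; here is a hypothetical slice showing the shape of the data. The property names match the walkthrough, but the exact pixel positions and keypad assignments are my guesses:

```javascript
// Hypothetical sample of keyDataOctave4; real values in the CodePen may differ.
const keyDataOctave4 = [
  { pitch: 'C4',  className: 'white-key', style: 'left: 0px',   keypad: 'a' },
  { pitch: 'Db4', className: 'black-key', style: 'left: 35px',  keypad: 'w' },
  { pitch: 'D4',  className: 'white-key', style: 'left: 50px',  keypad: 's' },
  { pitch: 'Eb4', className: 'black-key', style: 'left: 85px',  keypad: 'e' },
  { pitch: 'E4',  className: 'white-key', style: 'left: 100px', keypad: 'd' },
  // ...and so on through B4
]
```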
Each keyData object contains properties of an <li> element that we will use to build our keyboard. pitch will become our <li>’s id; className will become the class designating the key as a black key or a white key so that we can target it with CSS; the inline style attribute will position each <li> in our <ul>; and our keypad property will contain the character on the computer keyboard that will represent a key on our browser piano. First, we will create each <li> key.
Here is a function that will take in a key object and return an <li> with the HTML we want for each key.
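The embedded snippet lives in the CodePen; a sketch of it might look something like this (the exact markup in the original may differ):

```javascript
// Hypothetical createKey: builds the HTML string for one <li> key.
// The id, class, style, and keypad label come from the keyData object.
const createKey = (keyData) =>
  `<li id="${keyData.pitch}" class="${keyData.className}" style="${keyData.style}">${keyData.keypad}</li>`
```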
Let’s build a function to map through our keyDataOctave4 array, return an <li> for each key object, and join all of our <li> keys into one string.
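A sketch of keyBoardMaker, with the per-key helper repeated so the snippet stands on its own (names and markup are my best guess at the original):

```javascript
// Per-key helper, repeated here so this sketch is self-contained.
const createKey = (keyData) =>
  `<li id="${keyData.pitch}" class="${keyData.className}" style="${keyData.style}">${keyData.keypad}</li>`

// Hypothetical keyBoardMaker: maps the key data to <li> strings and joins them.
const keyBoardMaker = (keyDataArray) => keyDataArray.map(createKey).join('')
```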
Great! Now we need a function that will take our <ul class=‘keyboard-container’> and change its innerHTML to our string of <li> keys returned to us by keyBoardMaker.
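That function can be tiny. Here is a sketch; in the original it may grab the <ul> itself rather than receive it as an argument:

```javascript
// Hypothetical showKeyBoard: injects the joined <li> string into the
// keyboard container's innerHTML.
const showKeyBoard = (keyboardContainer, keysHTML) => {
  keyboardContainer.innerHTML = keysHTML
}
```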
To display the keyboard in the browser, we set up an event listener to listen for the DOM content to load, grab our <ul> element, and then populate that element with our keyboard. I’ll show you that in just a bit.
For styling the keyboard with CSS, check out the code pen above. I followed the example of Steven Goldenberg. He built an awesome synth and was the inspiration for this project. Thank you, Steven!
2. Synth Playing Logic
Great! We know how to make a sound using the Web Audio API, and we have a slick keyboard interface going, but how do we play this bad boy?
Since we have characters on the synth keys that represent keys on the computer, we will have to listen for keydown and keyup events on the DOM to simulate playing a real piano. We only care about keydowns on computer keys that are represented on our synth interface. We want to start an oscillator on those keydowns. Similarly, we only care about keyups on computer keys that are sounding. We want to kill the sounding oscillators on those keyups. We will need a way to keep track of all of the sounding oscillators so that we can play multiple pitches at once and find the right oscillators to kill on keyups. We also need an object that maps each computer key to the oscillator frequency for the musical pitch of the corresponding synth key in the browser.
Pitches. Here is our object that is going to connect our keydown event key values to the correct frequency in hertz to give us our sounding pitch.
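The embedded object lives in the CodePen; a hypothetical pitchPairs for one octave might look like this. The frequencies are standard equal-temperament values, but the keypad characters are my guesses:

```javascript
// Hypothetical pitchPairs: computer-key values -> frequencies in hertz.
const pitchPairs = {
  a: 261.63, // C4
  w: 277.18, // Db4
  s: 293.66, // D4
  e: 311.13, // Eb4
  d: 329.63, // E4
  f: 349.23, // F4
  t: 369.99, // Gb4
  g: 392.0,  // G4
  y: 415.3,  // Ab4
  h: 440.0,  // A4
  u: 466.16, // Bb4
  j: 493.88, // B4
}
```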
Now, we set up our event listeners in index.js inside our DOMContentLoaded event listener. Check out the comments in the code for what each line of code does. We have already covered showKeyBoard. handleKeyDown, handleKeyUp, and waveTypeNode will be explained shortly.
Still in index.js, we create our new AudioContext and gain. We’ll also create an empty object called soundingOscillators that will house our sounding oscillators once they are triggered to sound. In this object, each key value from our keydown event will be a key in our soundingOscillators object, with the oscillator node as its value.
STOP! WAIT! SO MANY KEYS! Let’s straighten this out.
We have used ‘key’ to mean the following:
- A computer keyboard key that you press
- A key on our virtual visual synthesizer
- A property on our keydown and keyup events that specifies the character that was pressed on the computer keyboard
- The property name in our soundingOscillators object that points to a sounding oscillator
Wow. A bit confusing, I know. To clear some things up in case you are wondering “how many contexts of ‘keys’ can there possibly be!!!!??”, here are some uses of the word ‘key’ that we will not use:
- A metal object that unlocks a door
- A chain of islands off of the coast of Florida
- The world renowned pop singer with the first name of Alicia (though we will be able to play her hits by the end of this blog!)
Moving on. Let’s rig it up in index.js.
Now it’s time for our handleKeyDown function.
handleKeyDown takes in an event and our waveTypeNode. The waveTypeNode is the select box in index.html. First, we grab the key that was pressed on the keyboard and assign it to the variable key. Next, we grab the value from waveTypeNode and assign it to the variable waveType. We can use this later to assign the waveform of our oscillator.
Now we have two if statements. First, we check to see that the key that was pressed is in our pitchPairs object. If it is not, we don’t care about it and do nothing. If it is, we do another check. Since we only want one oscillator per key sounding at once, we check our soundingOscillators object to see if an oscillator for that key is present. If it is present and currently sounding, we do nothing. If not, we create a new oscillator, assign its wave type to the type that the user has chosen, assign its frequency from our pitchPairs object, connect it to our gain, call .start() on the oscillator, and finally assign that oscillator to the correct key in soundingOscillators. We have sound!
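Putting those checks together, handleKeyDown might look something like this sketch. In the original, audioCtx, gain, pitchPairs, and soundingOscillators live in index.js scope; here they are bundled into a deps argument so the sketch stands alone:

```javascript
// Hypothetical handleKeyDown sketch; deps bundles the index.js-scoped
// objects (audioCtx, gain, pitchPairs, soundingOscillators).
const handleKeyDown = (event, waveTypeNode, deps) => {
  const { audioCtx, gain, pitchPairs, soundingOscillators } = deps
  const key = event.key
  const waveType = waveTypeNode.value
  // Ignore keys that aren't on our synth, and keys that are already sounding.
  if (pitchPairs[key] && !soundingOscillators[key]) {
    const osc = audioCtx.createOscillator()
    osc.type = waveType                   // the user's chosen waveform
    osc.frequency.value = pitchPairs[key] // pitch in hertz
    osc.connect(gain)
    osc.start()
    soundingOscillators[key] = osc        // track it so keyup can find it
  }
}
```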
Now, when we release the key, we want the oscillator to stop sounding. handleKeyUp is about to take care of that business!
Again, we grab the key. We check to see if the key has a sounding oscillator. If it does, we disconnect the oscillator from the gain and then delete that key-value pair from the soundingOscillators object.
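handleKeyUp can then mirror handleKeyDown. Again, the shared objects are passed in as deps so the sketch stands alone (in the original they live in index.js scope):

```javascript
// Hypothetical handleKeyUp sketch: silences and forgets the oscillator
// for the released key, if one is sounding.
const handleKeyUp = (event, deps) => {
  const { gain, soundingOscillators } = deps
  const key = event.key
  if (soundingOscillators[key]) {
    soundingOscillators[key].disconnect(gain) // cut it off from the output
    delete soundingOscillators[key]           // free the key for the next press
  }
}
```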
That’s it! We built that Fire Synth! We have FIRE!
Special thanks to Steven Goldenberg, Maurice Roy, and Sharell Bryant for helping me with this project!
Thank you for reading! Did you enjoy the blog? Have an opinion about the code? Have a suggestion to improve the code or an idea for further exploration? Please leave a comment below and let me know! Let’s connect on Twitter and LinkedIn! Thanks again and stay learning!
Check out some more of my writing below!