How to record and play audio in JavaScript

Bryan Jennings
4 min read · Oct 7, 2017


Recording audio involves a series of steps:
1) Start recording the audio
2) While recording, store the audio data chunks
3) Stop recording the audio
4) Convert the audio data chunks to a single audio data blob
5) Create a URL for that single audio data blob
6) Play the audio

1) Start recording the audio

To start recording the audio, we need to create an audio stream by calling navigator.mediaDevices.getUserMedia and passing in { audio: true }. The getUserMedia function returns a promise that resolves to the audio stream. Once we have the audio stream, we can pass it into the MediaRecorder constructor and call the start method on the media recorder. This starts recording the audio.
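A minimal sketch of this step might look something like the following (the variable names are my own choice):

```javascript
// Ask the browser for microphone access; the promise resolves to a MediaStream.
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    // Wrap the stream in a MediaRecorder and start recording.
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();
  });
```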

2) While recording, store the audio data chunks

So far we've been recording audio, but we haven't started saving any of the recorded audio. The way we save the recorded audio is by collecting chunks of audio data as the recording continues. We can collect the chunks by listening for "dataavailable" events and pushing each chunk into an array.
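Continuing inside the getUserMedia callback from step 1, that might look like this:

```javascript
const audioChunks = [];

// Each "dataavailable" event carries one chunk of recorded audio.
mediaRecorder.addEventListener("dataavailable", event => {
  audioChunks.push(event.data);
});
```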

3) Stop recording the audio

So far we've been recording and saving our audio data chunks to an array. If we kept that code running, it would keep collecting audio chunks forever, but we want to stop recording after a few seconds. We can stop recording audio by calling the media recorder's stop method after 3 seconds.
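Continuing the same sketch, one way to do that is:

```javascript
// Stop the recorder after 3 seconds.
setTimeout(() => {
  mediaRecorder.stop();
}, 3000);
```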

4) Convert the audio data chunks to a single audio data blob

Now that we've stopped recording, we need to convert the audio chunks into a single audio blob. We do this by passing the array of audio chunks into the Blob constructor.
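The media recorder fires a "stop" event once recording has ended, which is a convenient place to build the blob. A sketch of that:

```javascript
mediaRecorder.addEventListener("stop", () => {
  // Combine all of the recorded chunks into a single Blob.
  const audioBlob = new Blob(audioChunks);
});
```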

5) Create a URL for that single audio data blob

Now that we have the audio blob, we can create a URL that points to that blob by using URL.createObjectURL.
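For example, inside the same "stop" handler:

```javascript
// Create a URL that points at the blob so it can be used as an audio source.
const audioUrl = URL.createObjectURL(audioBlob);
```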

6) Play the audio

Now that we have the audio URL, we can play the audio by passing the audio URL into the Audio constructor and calling the audio object's play method.
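Still inside the "stop" handler, that might look like this:

```javascript
// Play the recorded audio.
const audio = new Audio(audioUrl);
audio.play();
```

Putting all six steps together, a complete sketch (assuming a browser that supports getUserMedia and MediaRecorder) might look like this:

```javascript
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    // 1) Start recording the audio
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();

    // 2) While recording, store the audio data chunks
    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });

    mediaRecorder.addEventListener("stop", () => {
      // 4) Convert the audio data chunks to a single audio data blob
      const audioBlob = new Blob(audioChunks);

      // 5) Create a URL for that single audio data blob
      const audioUrl = URL.createObjectURL(audioBlob);

      // 6) Play the audio
      const audio = new Audio(audioUrl);
      audio.play();
    });

    // 3) Stop recording the audio after 3 seconds
    setTimeout(() => {
      mediaRecorder.stop();
    }, 3000);
  });
```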

If you run this in a browser, it will record audio from your microphone for 3 seconds, then play the audio that was just recorded.
This is a pretty simple example of how to record audio, but we'll probably need to record audio in a lot of different web apps, so it would be annoying to have to write all this code every time. We can solve this problem by hiding the native API behind a much simpler API that we create ourselves. We can accomplish this by converting our previous code into a function that returns a promise, which resolves to an object containing our API: two functions, start and stop. The start function starts recording audio. The stop function stops recording audio and returns a promise which resolves to an object that contains the audioBlob, the audioUrl, and a play function. We can use the audioBlob if we need to store the data on a server, use the audioUrl if we want custom playback behavior that's more complicated than simply playing the audio, or just play the audio by calling the play function. Here's the resulting function.
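A sketch of that function, following the description above (this is one reasonable way to structure it, not the only way):

```javascript
const recordAudio = () =>
  new Promise(resolve => {
    navigator.mediaDevices.getUserMedia({ audio: true })
      .then(stream => {
        const mediaRecorder = new MediaRecorder(stream);
        const audioChunks = [];

        mediaRecorder.addEventListener("dataavailable", event => {
          audioChunks.push(event.data);
        });

        const start = () => mediaRecorder.start();

        const stop = () =>
          new Promise(resolve => {
            mediaRecorder.addEventListener("stop", () => {
              const audioBlob = new Blob(audioChunks);
              const audioUrl = URL.createObjectURL(audioBlob);
              const play = () => new Audio(audioUrl).play();
              resolve({ audioBlob, audioUrl, play });
            });

            mediaRecorder.stop();
          });

        resolve({ start, stop });
      });
  });
```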

Here's an example of how to use the API we just created. This example records audio for 3 seconds, then plays the audio that was recorded.
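Something along these lines:

```javascript
(async () => {
  const recorder = await recordAudio();
  recorder.start();

  // Stop after 3 seconds and play back what was recorded.
  setTimeout(async () => {
    const audio = await recorder.stop();
    audio.play();
  }, 3000);
})();
```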

As you can see, our API is a lot simpler and easier to understand than the native audio recording API. We can make this example even simpler by removing the callback passed to setTimeout and replacing it with a call to a sleep function.
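Assuming a small sleep helper like the one below, the example becomes:

```javascript
// Resolve after the given number of milliseconds.
const sleep = time => new Promise(resolve => setTimeout(resolve, time));

(async () => {
  const recorder = await recordAudio();
  recorder.start();
  await sleep(3000);
  const audio = await recorder.stop();
  audio.play();
})();
```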

Now our code is even simpler and easier to understand. You can read the code from top to bottom and understand exactly what's going on. I simplified recordAudio even further by replacing its promise callback with an async function and using await in front of the call to getUserMedia. Here's the final result.
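A sketch of that final version, following the description above:

```javascript
const recordAudio = () =>
  new Promise(async resolve => {
    // await the stream directly instead of chaining .then()
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    const audioChunks = [];

    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });

    const start = () => mediaRecorder.start();

    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener("stop", () => {
          const audioBlob = new Blob(audioChunks);
          const audioUrl = URL.createObjectURL(audioBlob);
          const play = () => new Audio(audioUrl).play();
          resolve({ audioBlob, audioUrl, play });
        });

        mediaRecorder.stop();
      });

    resolve({ start, stop });
  });

const sleep = time => new Promise(resolve => setTimeout(resolve, time));

(async () => {
  const recorder = await recordAudio();
  recorder.start();
  await sleep(3000);
  const audio = await recorder.stop();
  audio.play();
})();
```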

We could make this more robust by wrapping the recorder code in a try/catch statement and dealing with any possible errors, such as old browsers lacking support. If we want a better user experience, we can display the recording state to the user either with plain JavaScript or with a framework/library of our choice (e.g. React, Elm, Vue, Angular, etc.). If we want to save the recorded audio blob on a server, we can send a request to the server with the audio blob as the payload. If you liked this explanation of how to record audio, let me know in the comments. If there's something I can do to improve this explanation, please let me know.

If you want to run this code and try it out for yourself, here's a link to a working example that I uploaded to GitHub: https://github.com/bryanjenningz/record-audio

Here’s an example app that shows you how to record and save audio to a Node server: https://github.com/bryanjenningz/record-audio/tree/master/record-server-example
