Recording and Saving Audio - Pt. 1

August Giles
4 min read · Sep 13, 2018


React/Rails Stack + S3

While working on an application, I had a fair bit of difficulty finding good resources for handling audio with this stack. There are plenty of resources on recording audio, but resources for saving audio were very sparse. I hope this little series helps anyone break through those walls!

In this two-parter, we'll walk through the series of events from making an audio blob on the front end, to storing it in S3, all the way to rendering it back out in another portion of your app.

The bit we'll cover in this first post is setting up the recording component in React and relaying the info to the back end.

Recording

Setting up the functions

I highly recommend using the MediaRecorder API to handle recording the sound: it's very approachable, easy to manipulate, and broadly applicable. Be sure to check out the docs and example for more detail. Since we're in a component structure, we'll want to handle things a little differently from the vanilla JS in the docs. Here's my proposed solution for the recording setup and recording functions:
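Below is a minimal sketch of what that could look like in a class component; the state shape, the chunks array, and the error handling are my own take rather than anything prescribed:

```
import React from 'react';

class Recorder extends React.Component {
  // Variables I like to have access to in state
  state = {
    recording: false,
    blob: null,
    blobURL: null,
  };

  chunks = []; // raw audio data collected from dataavailable events

  prepareRecording = () => {
    // Ask the user for permission to use their microphone,
    // then initialize the MediaRecorder instance we'll capture media with
    navigator.mediaDevices
      .getUserMedia({ audio: true })
      .then(stream => {
        this.mediaRecorder = new MediaRecorder(stream);
      })
      .catch(error => console.error('Could not access the microphone:', error));
  };

  startRecording = () => {
    this.chunks = [];
    // dataavailable ships with the API; push each chunk into one array
    this.mediaRecorder.ondataavailable = event => this.chunks.push(event.data);
    this.mediaRecorder.start();
    this.setState({ recording: true });
  };

  // stopRecording() and emergencyStop() below...
}
```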

Above: I've saved some variables in state that I like to have access to. In prepareRecording(), we ask the user for permission to use audio and initialize the mediaRecorder instance that we'll use to capture the media. startRecording() starts streaming the media, and whenever data is available (an event that ships with the API), we keep pushing the chunks into one array so we can compile the larger recording later.
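Continuing the same sketch, the stop-handling methods might look like this (the audio/webm mime type is an assumption; browsers pick a default if you omit it):

```
  // Still inside the Recorder class from above
  stopRecording = () => {
    this.mediaRecorder.onstop = () => {
      // Compile the collected chunks into one blob and keep a local URL for playback
      const blob = new Blob(this.chunks, { type: 'audio/webm' });
      const blobURL = URL.createObjectURL(blob);
      // Release the microphone so the browser's recording indicator goes away
      this.mediaRecorder.stream.getTracks().forEach(track => track.stop());
      this.setState({ recording: false, blob, blobURL });
    };
    this.mediaRecorder.stop();
  };

  emergencyStop = () => {
    // If the user bails out mid-recording (closes the modal, etc.), shut everything down
    if (this.mediaRecorder && this.mediaRecorder.state !== 'inactive') {
      this.mediaRecorder.stop();
      this.mediaRecorder.stream.getTracks().forEach(track => track.stop());
    }
    this.setState({ recording: false });
  };
```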

Above: stopRecording() stops the media stream and does some handling so we can give the user immediate feedback with their new recording: we save the new blob and a local URL to our state for later use. emergencyStop() is for the occasion where the user unexpectedly quits the process and we want to be sure everything is shut down.

Rendering

Now for rendering, we'll put those methods to use in a modal and conditionally render pieces for ease of use; the changes in state above will drive the conditional rendering. Don't forget to plug in some Semantic UI for quick and pretty building.

When the modal is opened, we make the call to prepare the recording. At that point, there is a prompt to start recording. Clicking it triggers our start recording method, and the dataavailable event captures the sound. On screen is a stop button which, when clicked, generates our blob and a local URL for the blob data. With that local URL, you can play the recording back to the user right there with the HTML audio tag (with controls) and render a save button.
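As a rough illustration, the render could be wired up along these lines; the semantic-ui-react components, the exact conditions, and the handleSave name are just one way to put it together:

```
  // Still inside the Recorder class: one possible render
  // (import { Modal, Button } from 'semantic-ui-react'; at the top of the file)
  render() {
    const { recording, blobURL } = this.state;
    return (
      <Modal
        trigger={<Button>Record</Button>}
        onOpen={this.prepareRecording}
        onClose={this.emergencyStop}
      >
        <Modal.Content>
          {!recording && !blobURL && (
            <Button onClick={this.startRecording}>Start Recording</Button>
          )}
          {recording && <Button onClick={this.stopRecording}>Stop</Button>}
          {blobURL && (
            <div>
              {/* Play the new recording back with the local URL */}
              <audio src={blobURL} controls />
              <Button onClick={this.handleSave}>Save</Button>
            </div>
          )}
        </Modal.Content>
      </Modal>
    );
  }
```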

And that’s recording! If that’s all you’re looking for, well done! If you’re one of the sad souls who wants to persist this data, we have a bit more to go here, but mostly in part 2.

Sending

And this is where it gets a little tricky.

  • How do we bundle our new data so that our back end will receive it?
  • When received, how do we store that data?

As for bundling the data, see the last bit of our component below:
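Something along these lines; the endpoint, field names, and the handleSave name itself are placeholders for whatever your app actually uses:

```
  // Still inside the Recorder class: bundle the blob and send it to the back end
  handleSave = () => {
    const { blob } = this.state;

    // Wrap the blob in a File so it arrives with a filename and type
    const file = new File([blob], 'recording.webm', { type: 'audio/webm' });

    // FormData bundles the file plus any metadata as key/value pairs
    const formData = new FormData();
    formData.append('audio', file);
    formData.append('title', 'My new recording'); // whatever metadata you need

    // '/api/recordings' is a placeholder; point this at your Rails route
    fetch('/api/recordings', {
      method: 'POST',
      body: formData, // don't set Content-Type; the browser adds the multipart boundary
    })
      .then(response => response.json())
      .then(data => console.log('Saved:', data))
      .catch(error => console.error('Upload failed:', error));
  };
```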

We're doing a few things here. Blobs are best sent to the back end with some metadata: what is this blob for? A really clean way to send this 'blob plus data' is by creating a FormData object, because it bundles up our data as key/value pairs and sends it all in a way our back end digests really nicely. I'm also creating a file from the audio blob and sending that back instead of the blob itself. This is personal preference; you can send the blob directly if you'd like.

Okay! If you haven't done so already, go ahead and set up an S3 bucket (or create an AWS account if you're not set up but are adventurous!). Then check out the docs for Active Storage, which will help us connect to AWS S3. Consider this a working intermission. See you in part 2, where we'll go over configuration for S3 and Active Storage a bit, then figure out how to call on that data.
