A React Native Sound Recorder and Player NPM Package

Breaking up Project Functionality into Separate NPM Packages

May 2018 Update

Now that I have learned more about using React Native and creating friendlier NPM packages, I’ve made some significant updates to the package detailed in this article.

The package is now called react-native-audio-player-recorder-no-linking. I chose the new name to emphasize that it includes separate Recorder and Player components rather than a single SoundRecorder component, and that it does not require linking to native code, which keeps the package safe to use with Expo.

Additionally, all of the UI (buttons, badges, etc.) has been pulled out of the actual Player and Recorder components and is available as a set of imports. These UI items can be provided to the Player and Recorder as props. More importantly, users can provide their own UI items as props, which means they can theme the components any way they want.

Finally, the GitHub repository has been renamed to match the new package name. The package can now be found at https://github.com/reggie3/react-native-audio-player-recorder-no-linking.

All of these changes result in significantly improved flexibility and functionality. However, they also mean that the components mentioned in this article, and the props they accept, have changed significantly. Those changes are covered in the readme.md file available in the GitHub repo mentioned above.

Introduction

A project that I had diligently worked on for several months had finally gotten to the point where I was ready to bolt on yet another piece of functionality. It was time to give my creation the ability to record and play back sound.

My project is built using expo.io, which provides documentation for its Audio API along with a GitHub repo containing an example project that can be run in the Expo app. With such a good foundation to work with, I diligently and methodically began incorporating the example code into my own project.

This approach was successful, but I began to realize that I was adding a significant piece of mostly independent functionality to an existing project at the cost of flexibility and testability. Flexibility would suffer because I would have to repeat the process if I wanted to add the same capability to a new project. Testability would deteriorate because I was already entangling this functionality with the rest of the app: connecting it to my redux store and implementing every other good idea that popped into my head.

So, why not pull this capability out as an NPM package that would take some inputs and provide me with the audio clip information I wanted? This would give me the testability I desired and a ton of reusability, and it would scope my work in by minimizing the number of input and output points connecting it to the final product.

The final product

Design

First, I determined that all I really needed was a screen (or page, or whatever other metaphor you prefer) on which the user could push a button to start recording sound, push a button to stop recording sound, and then play, pause, stop, and replay said sound.

This UI would interact with the sample code provided in the expo.io documentation to actually perform the recording and playback.
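As a rough idea of what that sample code does, the core of recording and playback with the Expo Audio API of that era boils down to something like the sketch below. The method names come from the Expo documentation at the time and may have changed in later SDK versions, so treat this as illustrative rather than as the package's actual implementation.

```javascript
// Illustrative sketch of the Expo Audio record-then-play flow (not the package's exact code).
import { Audio, Permissions } from 'expo';

async function recordThenPlay() {
  // Recording requires the AUDIO_RECORDING permission.
  await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  // (Audio.setAudioModeAsync configuration omitted for brevity.)

  // Record a clip; Expo saves it to a file on the device.
  const recording = new Audio.Recording();
  await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
  await recording.startAsync();
  // ...the user presses the stop button some time later...
  await recording.stopAndUnloadAsync();
  const uri = recording.getURI(); // the file information to hand back to the caller

  // Load the saved file into a Sound object and play it back.
  const sound = new Audio.Sound();
  await sound.loadAsync({ uri });
  await sound.playAsync();
}
```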

Additionally, the UI would require a button to reset the audio clip in case the user wasn't satisfied with it, and a button to do something when the user completed their task. This could be as simple as going back to the previous page. That leads into the final requirement: the package should return something that allows the calling component to get information about the completed sound clip. Since the expo documentation shows that a successful recording results in a file saved to the device, my goal was to pass the relevant file information back to the calling parent component via a callback function.

Implementation

Beginning with those desired ends in mind, I started creating the foundation of my NPM package. Thanks to previous experience with my react-native-webview-braintree and react-native-webview-quilljs projects, I developed the following process for creating a project ready to turn into an NPM package.

First, I created a blank project in Expo. This results in a working application along with an App.js file containing a React Native component called App that acts as the "frame" for the component to be published. The App component renders the SoundRecorder component that will be the published product. One upside to this technique is that I get a working application out of it that can be easily shared via the Expo client.

The App component also passes the relevant props to the SoundRecorder component like so.

Rendering the SoundRecorder Component
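The original post embedded a gist here; in its place, here is a minimal sketch of that App "frame". The onComplete callback is the only prop the text requires, and everything else (the component path, the logging) is illustrative rather than the package's documented API.

```javascript
// App.js — the "frame" that renders the component being published (illustrative sketch).
import React from 'react';
import { View, StyleSheet } from 'react-native';
import SoundRecorder from './SoundRecorder';

export default class App extends React.Component {
  // Receives the recorded sound's file information once the user finishes;
  // the full callback is shown in the next snippet.
  onComplete = soundFileInfo => {
    console.log('sound recording complete: ', soundFileInfo);
  };

  render() {
    return (
      <View style={styles.container}>
        <SoundRecorder onComplete={this.onComplete} />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: { flex: 1 }
});
```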

The list of props shown above is not comprehensive; see the documentation here for the full list of available props.

The only required prop is a callback function that receives the recorded sound information from SoundRecorder. That callback also gives the calling component (in this case App) the chance to perform some follow-on action with the information SoundRecorder provides, such as navigating to a different page, writing to a redux store, or making a UI change. The great part is that SoundRecorder doesn't care; its job is complete once it hands the sound file information to the calling component. When SoundRecorder completes its task, it executes the following onComplete callback function.

Function that receives the sound file’s information once sound recording is completed
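That gist isn't reproduced here, but the callback boils down to a sketch like the one below. The shape of the object it receives (a uri plus any status fields) is an assumption based on the fact that Expo saves the finished recording to a file on the device.

```javascript
// Sketch of App's onComplete callback; the soundFileInfo field names are assumptions.
onComplete = soundFileInfo => {
  if (soundFileInfo && soundFileInfo.uri) {
    // The clip was saved to the device; keep its location for later use.
    this.setState({ recordingUri: soundFileInfo.uri });
  }
  // SoundRecorder's job is finished at this point; any follow-on action
  // (navigation, writing to a redux store, a UI change) happens here in the parent.
};
```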

As previously stated, I relied heavily on the expo.io Audio SDK example application, whose repository is located on GitHub here. I added some customization to control the state of the record and play buttons, which involved setting component state variables to drive conditional rendering of those buttons.

Finally, the ability to press a button and reset the recording was added as a convenience, as well as a button to finish the process so that the passed callback function can be called.
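Putting those pieces together, the state-driven button logic can be sketched roughly as follows. The state flags, button titles, and handlers are illustrative stand-ins for the package's actual implementation, whose real handlers would also call the Expo Audio methods shown earlier.

```javascript
import React from 'react';
import { View, Button } from 'react-native';

// Simplified sketch of the conditional button rendering inside SoundRecorder.
export default class RecorderControls extends React.Component {
  state = { isRecording: false, haveRecording: false, isPlaying: false };

  render() {
    const { isRecording, haveRecording, isPlaying } = this.state;

    if (isRecording) {
      // Only "stop" is available while a recording is in progress.
      return (
        <Button
          title="Stop Recording"
          onPress={() => this.setState({ isRecording: false, haveRecording: true })}
        />
      );
    }

    if (haveRecording) {
      // A clip exists: offer play/pause, reset, and a "done" action.
      return (
        <View>
          <Button
            title={isPlaying ? 'Pause' : 'Play'}
            onPress={() => this.setState({ isPlaying: !isPlaying })}
          />
          <Button
            title="Reset"
            onPress={() => this.setState({ haveRecording: false, isPlaying: false })}
          />
          <Button title="Done" onPress={this.props.onComplete} />
        </View>
      );
    }

    // Nothing recorded yet: only "record" is available.
    return <Button title="Record" onPress={() => this.setState({ isRecording: true })} />;
  }
}
```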

The final result can be seen below. Unfortunately, a limitation of the audio SDK means that I can't record the sound of the app recording sound, so you'll have to take my word that it does actually capture and play back audio.

Application in action

Conclusion

Overall, I like the idea of separating out monolithic functionality into a separate NPM package. It supports separation of concerns and testability. It also helps me eat the elephant one bite at a time.

It also presents the opportunity to contribute to, and receive input from, a community of users to potentially create a better product.

The final results of this project are hosted in the react-native-sound-recorder-no-native repo on GitHub, which also contains NPM installation and usage instructions.

You can scan the following QR code to run this app if you have the expo.io client installed on your Android or iOS device:

Try this project for yourself

This component was successfully tested on both an Android device and an iPhone.