All you need to know about the Web Audio API


Did you know JavaScript has a constantly evolving high-level API for processing and synthesizing audio? How cool is that!

The goal of the Web Audio API is to replicate features found in desktop audio production applications: mixing, processing, filtering, and so on.

The Web Audio API has a lot of potential and can do awesome stuff. But first, how well is the API supported across browsers?

[Browser support table: green across all major browsers]

Cool, worth digging into. 👍

What is the Web Audio API capable of doing?

Good question! Here are a couple of examples demonstrating the capabilities of the Web Audio API. Make sure you have sound on.

Most of the basic use cases covered: https://webaudioapi.com/samples/
Complicated synthesizer example: https://tonejs.github.io/examples/#buses

The Web Audio API handles audio operations through an audio context. Everything starts with the audio context, and within it you hook up different audio nodes.

Audio nodes are linked by their inputs and outputs. A chain of nodes starts at a source and ends at the destination: the node that represents the actual audio output, i.e. the sound we pick up with our ears.

Audio context schema
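To make the graph concrete, here's a minimal sketch (not from any of the linked examples) that routes an audio element through a low-pass filter on its way to the destination; the element id and cutoff value are just placeholders:

```js
// A hypothetical <audio id="track" src="..."> element is the sound source.
const audioElement = document.querySelector('#track');

const context = new AudioContext();

// Wrap the element in a source node so it can enter the audio graph.
const source = context.createMediaElementSource(audioElement);

// A processing node: a low-pass filter that cuts frequencies above 1000 Hz.
const filter = context.createBiquadFilter();
filter.type = 'lowpass';
filter.frequency.value = 1000;

// The chain: source -> filter -> destination (your speakers).
source.connect(filter);
filter.connect(context.destination);

audioElement.play();
```

One gotcha: most browsers only let an AudioContext produce sound after a user gesture (a click, for example), so you'd typically run this from a button handler.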

If you’re the type of person who wants to know all the tiny details, here’s a sweet link to get you started.

If you’re more into visual learning, here’s a great introductory talk about the Web Audio API. Check it out!

Steve Kinney: Building a musical instrument with the Web Audio API | JSConf US 2015

One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations.

https://webaudioapi.com/samples/visualizer/
https://tonejs.github.io/examples/#analysis
https://tonejs.github.io/examples/#meter
Show HN: Randomly generated metal riffs using Web Audio API and React

These examples show how it’s done and cover a couple of basic use cases.
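As a rough sketch of the idea: an AnalyserNode sits in the audio graph, passes the sound through unchanged, and exposes snapshots of its frequency spectrum and waveform. (The oscillator source below is just to keep the example self-contained.)

```js
const context = new AudioContext();

// Any source works; an oscillator keeps the example self-contained.
const oscillator = context.createOscillator();
const analyser = context.createAnalyser();
analyser.fftSize = 2048; // number of samples per analysis window

// Chain: oscillator -> analyser -> destination. The analyser passes
// the audio through untouched while exposing snapshots of it.
oscillator.connect(analyser);
analyser.connect(context.destination);
oscillator.start();

const frequencyData = new Uint8Array(analyser.frequencyBinCount);
const waveformData = new Uint8Array(analyser.fftSize);

function draw() {
  analyser.getByteFrequencyData(frequencyData); // frequency spectrum, 0-255 per bin
  analyser.getByteTimeDomainData(waveformData); // raw waveform samples
  // ...render the arrays to a canvas here...
  requestAnimationFrame(draw);
}
draw();
```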

If you’re keen on learning the audio API in depth — here’s a great series.

Web Audio API | 01: Introduction to AudioContext

Here’s a free book about the Web Audio API, by Boris Smus (interaction engineer at Google).

https://webaudioapi.com/book/Web_Audio_API_Boris_Smus.pdf

A glance at the API

The Web Audio API is relatively intuitive to understand. Here’s an abstract example of how to use the API.

https://gist.github.com/wesharehoodies/608e5b99fa2f46a5ed3710c5ffe6e360

Breakdown of the steps:

  • We create a new AudioContext object with the new keyword.
  • We create an oscillator and a volume controller (a gain node) on the audio context, which binds them to it.
  • We connect the oscillator and the volume controller to our sound system (the context’s destination).
  • We set the oscillator’s wave type and frequency value (tuning).
  • We start the oscillator. The start method of the OscillatorNode interface specifies the exact time at which to start playing the tone.
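The gist isn’t reproduced here, but a minimal version of those five steps might look something like this (the wave type, frequency, and gain values are only examples):

```js
// 1. Create the audio context: the entry point to the whole API.
const context = new AudioContext();

// 2. Create an oscillator (tone generator) and a volume controller
//    (gain node); creating them on the context binds them to it.
const oscillator = context.createOscillator();
const volume = context.createGain();

// 3. Connect them to the sound system:
//    oscillator -> volume -> destination (speakers).
oscillator.connect(volume);
volume.connect(context.destination);

// 4. Tuning: a sine wave at 440 Hz (the note A4), at half volume.
oscillator.type = 'sine';
oscillator.frequency.value = 440;
volume.gain.value = 0.5;

// 5. start() takes the exact time (in seconds, on the context's clock)
//    at which the tone should begin; currentTime means "now".
oscillator.start(context.currentTime);
oscillator.stop(context.currentTime + 2); // stop after two seconds
```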


Making music with the browser

Jake Albaugh showing how to create music with the browser

Wrap up

If you’re unsure about the use cases for such an API, think about all the music composition software out there that is desktop-only. Bringing those desktop apps to the web would be a very workable business idea.

Why is the web better in this case? Well, for starters, you can save and close your workspace and pick up where you left off from another machine. Musicians travel a lot, so this approach would benefit artists by a huge margin.

Another example would be enhancing the user experience with sound. (Careful not to overdo this!)

Sound could also deliver new solutions and better experiences for visually impaired people who use screen readers on websites, leading to better accessibility.

What else can we do with the Web Audio API? — Christoph Guttandin

If you’re interested in staying up to date, the Web Audio Conf is an excellent event to take part in.

https://webaudioconf.com/

Thanks for reading, stay awesome! ❤