Filtering images using Web Audio API

Some experiments with arrays of sound and vision

Published in Statuscode · 6 min read · Oct 10, 2017

While messing around with canvas and the Web Audio API, remembering my Signals and Systems classes and listening to David Bowie, I wrote some snippets to show how filters work on both songs and images. Here we go:

Let’s say we create an HTML file with a canvas element in it. You can load an image into it by doing:
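A minimal sketch of that step (the canvas id and the image path are placeholders):

```javascript
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');

const img = new Image();
img.src = 'lena.jpg';
img.onload = () => {
  canvas.width = img.width;
  canvas.height = img.height;
  ctx.drawImage(img, 0, 0);

  // all the pixels of the canvas as one flat typed array
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  console.log(imageData.data);
};
```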

Now, imageData has a data property, which is a typed array that looks like this:

[211, 118, 103, 255, 209, 116, 101, 255, 207, …]

Each group of four values corresponds to the red, green, blue and alpha values of one pixel of the image.

Quick reminder about RGB: a black pixel is defined by [0, 0, 0, 255], while a yellow one is [255, 255, 0, 255]. If the image is 400 x 300 (width x height), that makes 120,000 pixels, and the data array will have 480,000 values.

Let’s use the classic “Lena.jpg” image

The test image to be used in the next experiments

When plotting the first 200 values (or 50 pixels), you can see how fast the values vary, because the red, green and blue components differ a lot depending on the color (alpha, however, is always 255). Let’s use the great Chart.js to draw some graphs!
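A rough sketch of one of those graphs (the chart canvas id and the dataset styling are made up):

```javascript
// take the first 200 raw rgba values and plot them as a line chart
const values = Array.from(imageData.data.slice(0, 200));

new Chart(document.getElementById('chart'), {
  type: 'line',
  data: {
    labels: values.map((_, i) => i),
    datasets: [{ label: 'rgba values', data: values, borderColor: '#888', fill: false }]
  },
  options: { elements: { point: { radius: 0 } } }
});
```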

(1) first 200 values of the array with red, green, blue and alpha values, (2) 150 values of the array after removing alpha values

As you can see, even after removing alpha there is considerable variation from pixel to pixel.

Let’s iterate over the whole data array, splitting it into red, green, blue and alpha. We’ll end up with four different arrays of 160,000 values each (because the image is 400 x 400):
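Something like this (a sketch):

```javascript
const reds = [], greens = [], blues = [], alphas = [];
const data = imageData.data;

// every four values describe one pixel: r, g, b, a
for (let i = 0; i < data.length; i += 4) {
  reds.push(data[i]);
  greens.push(data[i + 1]);
  blues.push(data[i + 2]);
  alphas.push(data[i + 3]);
}
```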

First 400 pixels (the entire first line) of red, green and blue arrays

The graph with the separated arrays shows how the colors stay fairly constant along the first line, with a few abrupt changes. If you look closely, you can see that there are objects in the background that are responsible for these changes.

The bars in the background change the color to a lighter one; that’s why the graph has those increases and decreases

But that was a stable line, since things are smooth and don’t change that much. Let’s take a look at line 200:

The image cropped around line 200
Red, green and blue values of line 200 of the image, split into separate arrays

Let’s call this graph “evidence1” (because the usual names like “g1” are easy to forget, right?). You can see the code used to generate these graphs here.

Why is it important?

Considering that we can express an image’s pixels as arrays, we’ll see that we can visualize and change audio in the same way.

Let’s make these arrays sing

One of the coolest things you can do with the Web Audio API is to use arrays to generate sound. You can take your favorite function (everybody has one, right?), like sin(x), and listen to how it sounds (maybe this sounds a little poetic, but you really can do it).

Here is a function to create a buffer and play the array content:
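A minimal sketch of such a playSignal function, assuming a single shared audioContext:

```javascript
const audioContext = new (window.AudioContext || window.webkitAudioContext)();

// Copies a plain array of samples (values between -1 and 1) into an
// AudioBuffer and plays it at the context's sample rate.
function playSignal(signal) {
  const buffer = audioContext.createBuffer(1, signal.length, audioContext.sampleRate);
  buffer.copyToChannel(new Float32Array(signal), 0);

  const source = audioContext.createBufferSource();
  source.buffer = buffer;
  source.connect(audioContext.destination);
  source.start();
}
```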

Now we can use playSignal to build a little sandbox and play some array greatest hits in the browser:

The default function in the textarea is 0.5 * Math.sin(6.2831 * (200) * t), which is a sine with a 200 Hz frequency. The 6.2831 comes from 2 * π, because 2 * π * f is the angular frequency.
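The idea is roughly to sample that formula at the audio sample rate and hand the resulting array to playSignal (a sketch):

```javascript
const sampleRate = audioContext.sampleRate;
const samples = [];

// one second of a 200 Hz sine with 0.5 amplitude
for (let i = 0; i < sampleRate; i++) {
  const t = i / sampleRate; // time in seconds
  samples.push(0.5 * Math.sin(6.2831 * 200 * t));
}

playSignal(samples);
```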

You can access this little sandbox here. Take care with the amplitude values (0.5 here) while playing there. Very loud sounds can hurt your ears.

Now we know how to get image data from a canvas as arrays and how to use the Web Audio API to make some noise 🤘!

What about the filters?

The Web Audio API has some built-in filters that can be used to manipulate audio. In the scope of this article we’ll take a look at highpass and lowpass. The names are pretty self-explanatory: the lowpass attenuates signals with frequencies above the given value, and the highpass does the same with frequencies below it. To create one we need a few lines of code:

Function that returns a filter node, given an audioContext, the filter type and the frequency
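A sketch of such a function, using a BiquadFilterNode (the name createFilter is just a placeholder):

```javascript
function createFilter(audioContext, type, frequency) {
  const filter = audioContext.createBiquadFilter();
  filter.type = type;                 // 'lowpass' or 'highpass'
  filter.frequency.value = frequency; // cutoff frequency in Hz
  return filter;
}
```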

To illustrate, let’s create a signal with a variable, increasing frequency:

Math.sin(6.283 * (50 + 500 * t) * t)

That’s a sine with a linear function (50 + 500 * t) in the frequency term instead of a constant (which was 200 Hz previously). Take a look at the original signal and the filter outputs below:

Original signal with variable frequency
Lowpass filter result (in blue) compared to original signal (orange)
Highpass filter result (in blue) compared to original signal (orange)

This gives us a good idea of how filters attenuate signals according to their frequency content. If you want to hear how real music is affected, here is a little example. Fortunately, songs are more complex than the example above. If we plot a piece of the peels.mp3 file available on opsound.org, we can see more realistic cases:

The original signal is the orange one, but let’s take a closer look to check the filter effects

A sample of the original song

First, the lowpass:

a sample of the original song (orange) and the lowpass filter output (blue)

You can see how the filter resists abrupt changes and softens the signal.

Now, check the highpass result:

a sample of the original song (orange) and the highpass filter output (blue)

The low-frequency part of the signal, responsible for the biggest amplitude variations, was completely removed from the result! Looking closer, you can see that the high frequencies are still there:

A closer look at the highpass filter result

Since we know how to apply filters to data arrays and how to get an image’s data from a canvas, we can use the image as filter input. Take a look at the “evidence1” graph with the red channel filtered:

Red color from line 200 of lena.jpg (red) and a lowpass filter output (blue)
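To get the filtered samples back as an array (instead of just hearing them), one option is to render through an OfflineAudioContext. A sketch, reusing the createFilter helper from above (filterSignal is a made-up name):

```javascript
// Runs an array of samples through a filter offline and resolves with the result.
// Pixel values (0..255) should be scaled to something like 0..1 before filtering.
function filterSignal(signal, type, frequency) {
  const offline = new OfflineAudioContext(1, signal.length, 44100);

  const buffer = offline.createBuffer(1, signal.length, offline.sampleRate);
  buffer.copyToChannel(new Float32Array(signal), 0);

  const source = offline.createBufferSource();
  source.buffer = buffer;

  const filter = createFilter(offline, type, frequency);
  source.connect(filter);
  filter.connect(offline.destination);
  source.start();

  // resolves with a Float32Array of filtered samples
  return offline.startRendering().then(rendered => rendered.getChannelData(0));
}
```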

So if we filter all three colors using a lowpass with a 1000 Hz cutoff frequency and then use the arrays to recreate the image on another canvas, we get the image below. It looks blurred because all the color transitions were softened!

Original image (left) lowpass filter result (right)
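Putting the filtered channels back on a second canvas could look roughly like this (a sketch; the output canvas and the renderFiltered name are assumptions, reusing the reds/greens/blues arrays and the filterSignal sketch from before):

```javascript
async function renderFiltered(type, frequency) {
  // filter each channel independently, scaling pixel values down to 0..1 first
  const [r, g, b] = await Promise.all([
    filterSignal(reds.map(v => v / 255), type, frequency),
    filterSignal(greens.map(v => v / 255), type, frequency),
    filterSignal(blues.map(v => v / 255), type, frequency)
  ]);

  const output = document.getElementById('output');
  output.width = canvas.width;
  output.height = canvas.height;
  const outCtx = output.getContext('2d');
  const outData = outCtx.createImageData(canvas.width, canvas.height);

  for (let i = 0; i < r.length; i++) {
    outData.data[i * 4] = r[i] * 255;     // values are clamped to 0..255 automatically
    outData.data[i * 4 + 1] = g[i] * 255;
    outData.data[i * 4 + 2] = b[i] * 255;
    outData.data[i * 4 + 3] = 255;        // keep the image opaque
  }

  outCtx.putImageData(outData, 0, 0);
}
```

With helpers like these, something like renderFiltered('lowpass', 1000) would produce the blurred image above.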

Using a highpass with a 4000 Hz cutoff frequency, we’ll produce this:

Red color from line 200 of lena.jpg (red) and a highpass filter output (blue)

The filter removes the low frequencies while keeping the abrupt changes, so basically it keeps the contours:

Original image (left) highpass filter result (right)

This is not exactly the result you’d get from filters built specifically for images, but I hope to have inspired you to play with these fantastic APIs. You can check out this great experiment from my friend Davidson Fellipe to see lots of cool image-processing stuff.

Here is the final experiment live and the repo with the code.

Rafael Specht da Silva

Web developer, gif sommelier, once called “weird webby wizard”