Visualizing Sound With Framer

Debashish Paul
Published in Framer
Apr 4, 2016 · 5 min read


I think we have all, at some point, stared at Winamp or Windows Media Player while playing music. There is certainly something special about visually experiencing a non-visual format.

As part of a personal project I had to work with sound, and while playing around I set out to create a sound visualization with Framer. After a couple of initial setbacks and some deep digging, this happened!

…And based on the interest in this within the Framer JS community, I decided to share it here in a short write-up so that anyone interested can easily make their own sound visualizations with Framer.

There are three core parts of this process:

  1. Setting up an assembly line to create, connect, and pass the audio frequency data.
  2. Setting up a method to store the frequency data at a very fast rate, multiple times per second.
  3. Using the frequency data to create visualizations by manipulating layer properties at the same rate.

Part 1

The premise of this is the Web Audio API. First of all, we’ll define our audio object and connect it to a source.

The Web Audio API is vast and detailed, but for creating sound visualizations we just need to understand the AudioContext interface, two of its methods (createAnalyser and createMediaElementSource), and its destination property. Let’s set up the assembly line.
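Here is a minimal sketch of that assembly line in CoffeeScript; the variable names and the "track.mp3" file are placeholders of my own, not from the original project:

```coffeescript
# Create an <audio> element and point it at a sound file.
# "track.mp3" is a placeholder -- use any audio file in your project.
audioElement = document.createElement "audio"
audioElement.src = "track.mp3"
audioElement.autoplay = true

# AudioContext is still prefixed in some WebKit builds.
AudioCtx = window.AudioContext or window.webkitAudioContext
context = new AudioCtx

# The assembly line: media element -> analyser -> destination (speakers).
analyser = context.createAnalyser()
source = context.createMediaElementSource audioElement

source.connect analyser
analyser.connect context.destination
```

Because the analyser is connected on to the destination, the track still plays through the speakers while we tap its frequency data along the way.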

At this point, Framer might start showing an error —

SyntaxError: audio resources unavailable for AudioContext construction

Solution — Don’t panic; refresh Framer once and the error should be gone. Reason — As far as I could understand it, Framer re-renders the output each time you make a change in the text editor. Sometimes there is a tiny lag between the loading of the sound file and the screen render, so the AudioContext is not getting the audio data and fails during its own construction. Pro-tip — refresh each time you see this error henceforth.

Part 2

Now we will write a tiny machine. It is very important and needs to be understood well.

  1. We want to perform a set of tasks (store and use) repeatedly at a given rate. So, first we will create a function and use a browser method called window.requestAnimationFrame. The callback it schedules typically runs about 60 times per second, in step with the display’s refresh rate.
  2. Then we will create a Uint8Array, a typed array of 8-bit unsigned integers, with a length equal to our analyser’s frequencyBinCount property (an unsigned long).
  3. Lastly, we will use our analyser’s getByteFrequencyData method to fill the Uint8Array with the data returned. This essentially gives us the entire frequency data for that instant.
  4. Remember, steps 2 and 3 are going to happen about 60 times per second.

Let’s create a function called looper that schedules itself again using requestAnimationFrame. We will also call this function once to kick the loop off.
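A sketch of that tiny machine, assuming the analyser from Part 1 is in scope (storageArr is the name we will keep using below):

```coffeescript
looper = ->
    # Ask the browser to run looper again on the next frame --
    # roughly 60 times per second, in step with the display refresh.
    window.requestAnimationFrame looper

    # A typed array sized to the analyser's frequencyBinCount
    # (1024 with the default fftSize of 2048).
    storageArr = new Uint8Array(analyser.frequencyBinCount)

    # Fill storageArr with the frequency data for this instant.
    analyser.getByteFrequencyData storageArr

# Call it once to start the loop.
looper()
```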

Before moving on to the final part, let’s see what has happened so far:
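One quick sanity check is to print a few bins from inside looper using Framer’s built-in print helper; they should jump around as the track plays:

```coffeescript
# Added temporarily inside looper, right after getByteFrequencyData --
# the first few bins should jump with the bass while the track plays.
print storageArr[0], storageArr[1], storageArr[2]
```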

Part 3

We have everything we need to create our visualization! I will show a fairly simple example here; you can choose to create your own. It is important to have a clear idea of the visualization, as that will help you plan the creation and manipulation of layers.

One last thing to consider before we proceed — we have to understand some important aspects of the storageArr array.

  • It is 1024 elements long (the analyser’s default frequencyBinCount).
  • Each element’s position corresponds to a frequency band: storageArr[0] holds the lowest frequencies and storageArr[1023] the highest, and each value (0-255) represents the strength of that band.
  • In practice, most of a track’s energy sits at the low end, so storageArr[0] reacts the most and storageArr[1023] the least. This info will help us create a realistic visualization.
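Since every value sits between 0 and 255, mapping a bin onto a layer property is a one-liner with Framer’s Utils.modulate helper (the 2-400px target range here is just an example):

```coffeescript
# Map one bin's value (0-255) onto a bar height (2-400 px).
# The trailing "true" clamps the result inside the target range.
barHeight = Utils.modulate storageArr[0], [0, 255], [2, 400], true
```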

Here is the anatomy of the visualization we’ll create —

Create the background and reflection surface.
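A sketch of those two layers, assuming a 750 x 1334 canvas; the colors and the 900px reflection line are placeholder values of my own:

```coffeescript
# Full-screen background.
background = new Layer
    width: Screen.width
    height: Screen.height
    backgroundColor: "#111111"

# The surface the reflections appear to fall onto --
# everything below y = 900 reads as the "floor".
reflectionSurface = new Layer
    width: Screen.width
    height: Screen.height - 900
    y: 900
    backgroundColor: "#1A1A1A"
```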

Now, let’s create our main element stack. To do this, we will create an empty wrapper layer first. It is always a good idea to have a wrapper if you are dealing with multiple layers together.

It is important to decide where on the screen your elements will be. I wanted the stack to span almost the full width of the screen, so I chose a for loop of 240 iterations and a width of 3px for each element: 240 * 3 = 720px. I also added a 15px gutter at the beginning so that the whole stack is aligned to the center of the screen.
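A sketch of that stack, reusing the placeholder 900px line from above (in older Framer builds the parent property is called superLayer; the bar color is also a placeholder):

```coffeescript
# Empty wrapper that holds the whole stack of bars.
barWrapper = new Layer
    width: Screen.width
    height: 900
    backgroundColor: "transparent"

# 240 bars, 3px wide each (240 * 3 = 720px), offset by a 15px gutter
# so the stack sits centered on a 750px-wide canvas.
bars = []
for i in [0...240]
    bars[i] = new Layer
        parent: barWrapper
        x: 15 + i * 3
        y: barWrapper.height - 2
        width: 3
        height: 2
        backgroundColor: "#00E5FF"
```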

For the reflection element stack, we will do the same thing with another wrapper and holding array. Just one thing: we will decrease the opacity of these layers, since they are reflections.
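The same loop again, mirrored below the line (the 0.2 opacity is a placeholder; pick whatever reads as a believable reflection):

```coffeescript
# Wrapper for the reflections, sitting just below the reflection line.
reflectionWrapper = new Layer
    width: Screen.width
    height: Screen.height - 900
    y: 900
    backgroundColor: "transparent"

# The same 240-element loop, dimmed and growing downward from y = 0.
reflections = []
for i in [0...240]
    reflections[i] = new Layer
        parent: reflectionWrapper
        x: 15 + i * 3
        y: 0
        width: 3
        height: 2
        opacity: 0.2
        backgroundColor: "#00E5FF"
```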

We have set up our camera, lights, and dancers. Now we need to make them dance to some music!

Since you are the choreographer, you can make them dance any way you would like. In this case, I want them to go up with a beat but always tend to fall back down, so that the next beat can pick them up higher. The reflections will do the same thing, but in the opposite direction.

So, we will go back to the tiny machine we wrote earlier, the looper function, and write the dance script for our dancers, to be repeated about 60 times per second.
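Here is the full looper with the dance script added, building on the earlier sketches. The 0.92 decay factor and the 2-400px mapping are my own placeholder choices for the "always falling" behaviour:

```coffeescript
looper = ->
    window.requestAnimationFrame looper

    storageArr = new Uint8Array(analyser.frequencyBinCount)
    analyser.getByteFrequencyData storageArr

    for bar, i in bars
        # The first 240 bins are the lowest, most responsive frequencies.
        # A beat lifts the bar; the 0.92 decay lets it ease back down
        # between beats so the next beat can pick it up higher.
        lift = Utils.modulate storageArr[i], [0, 255], [2, 400], true
        bar.height = Math.max lift, bar.height * 0.92
        bar.y = barWrapper.height - bar.height

        # The reflection mirrors the bar, growing downward from the line.
        reflections[i].height = bar.height

looper()
```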

And that’s it! Showtime!

This system should be replicable for any number of visualizations. You should start with basic shapes and their properties. Once you feel confident, you can try experimenting with things like color temperature, images, or sine waves!

I am intentionally not sharing the Framer project, as it has many other things scattered inside. Hope this was helpful. I would definitely love to see some awesome visualizations and feedback!
