Midterm CC: Audio Visualizer for the Hearing Impaired

Hima Bijulal
4 min read · Oct 26, 2021


In my midterm proposal, I had proposed making an audio visualizer for the hearing impaired. My original thought was to make something that lets people with hearing disabilities experience music visually. Although I was able to bring this idea to fruition, I wasn't able to do it with live audio input. The current version of this project has two different types of audio visualization: one active visualizer and one passive visualizer.

The Passive Visualizer lets the user sit back while the music (audio) plays and is visually represented on the screen as arcs. The arcs grow in size and change color depending on the amplitude and bass values at each instant of the audio.
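In plain JavaScript, the mapping idea above can be sketched roughly like this. The helper names (arcRadiusFor, arcHueFor) are illustrative assumptions rather than the actual sketch code; the value ranges come from p5.js, where Amplitude.getLevel() returns a level in [0, 1] and FFT.getEnergy("bass") returns energy in [0, 255].

```javascript
// Linear re-map with the same signature as p5's map().
function remap(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Louder audio -> larger arc, up to a maximum radius.
// (arcRadiusFor is a hypothetical helper name.)
function arcRadiusFor(level, maxR) {
  return remap(level, 0, 1, 0, maxR);
}

// More bass energy -> hue shift across the color wheel.
// (arcHueFor is a hypothetical helper name.)
function arcHueFor(bassEnergy) {
  return remap(bassEnergy, 0, 255, 0, 360);
}

console.log(arcRadiusFor(0.5, 400)); // 200
console.log(arcHueFor(255));         // 360
```

Inside draw(), the sketch would feed the current getLevel() and getEnergy("bass") readings into mappings like these before drawing each arc.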

The Active Visualizer takes this one step further and lets the user enjoy music visually in the form of a live painting that keeps generating based on the amplitude. In addition to sitting back and watching the art unfold, the user can also click around the canvas with the mouse to trigger bursts of strokes, allowing them to interact directly with the sounds without actually being able to hear them.
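A minimal sketch of that click interaction, assuming an amplitude level in [0, 1]. The Blob fields mirror the constructor in the pseudocode at the end of this post, and spawnBlobs is a hypothetical helper, not the actual sketch code.

```javascript
// Each blob carries position, velocity, lifespan, and a diameter
// scaled by the amplitude level at the moment it was created.
class Blob {
  constructor(x, y, level) {
    this.pos = { x, y };
    this.vel = { x: Math.random() * 2 - 1, y: Math.random() * 2 - 1 };
    this.lifespan = 255;        // fades out over time
    this.d = 4 + level * 40;    // louder audio -> bigger blob
  }
}

// On mousePressed: spawn a cluster of blobs at the click point.
// (spawnBlobs is an illustrative name.)
function spawnBlobs(x, y, level, count) {
  const blobs = [];
  for (let i = 0; i < count; i++) {
    blobs.push(new Blob(x, y, level));
  }
  return blobs;
}
```

In a p5.js sketch, mousePressed() would call something like spawnBlobs(mouseX, mouseY, amplitude.getLevel(), 8) and append the result to the list of blobs that draw() updates and displays.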

Interaction

The user will be able to play the loaded music (I will add a feature for adding music manually or live audio through the mic) and watch it be visually represented on screen.

Passive Audio Visualizer

By pressing the ENTER key, the user can toggle between the passive and active visualizers. Toggling also resets the canvas, so the art will be different each time. The user can actively engage with the art by clicking with the mouse: each click creates a bunch of new blobs, each of which is scaled according to the amplitude. The user can press the SHIFT key to save the artwork.
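The key controls above can be modeled as a small state transition, sketched below. handleKey is a hypothetical helper; the ENTER and SHIFT keyCode values (13 and 16) are the ones p5.js reports, and in a real sketch this logic would live inside p5's keyPressed() callback.

```javascript
const ENTER = 13; // p5.js keyCode for ENTER
const SHIFT = 16; // p5.js keyCode for SHIFT

// Given a key press and the current state, return the next state:
// ENTER toggles between the two visualizers and flags a canvas reset,
// SHIFT flags a saveCanvas() call, anything else is a no-op.
function handleKey(keyCode, state) {
  if (keyCode === ENTER) {
    return { mode: (state.mode + 1) % 2, reset: true, save: false };
  }
  if (keyCode === SHIFT) {
    return { ...state, reset: false, save: true };
  }
  return { ...state, reset: false, save: false };
}
```

Keeping the transition in a pure function like this makes the toggle/reset/save behavior easy to check independently of the drawing code.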

Active Audio Visualizer

The sketch can be found here.

Reflection and Future Improvements/Additions

This project was very challenging. The math behind getting the different patterns on screen in particular took a lot of research to wrap my head around. Getting the blend-like pattern was also a huge challenge. I was hoping to somehow include Perlin noise in the pattern formed by the blobs, but this part didn't translate well. I will need to do more studying and research before trying my hand again at making generative art using Perlin noise.

While working on the project, extra features kept popping into my mind that I didn't have enough time to implement in this version. I was hoping for a version that can toggle between both loaded-audio and live-audio visualization, and I was also hoping to add a feature where the user can upload any audio of their own rather than just the one I preloaded into the project. I am definitely going to keep working on this project and will hopefully be able to add these features to make a complete and well-rounded tool for the deaf.

Overall, I am pretty happy with the experience I have been able to achieve thus far. I spent a lot of time just watching the blob visualization and really found it to be a therapeutic experience when playing the right kind of music with it.

Updated Pseudocode

function preload(){
  // load custom audio file
}

function setup(){
  // define maxR, R, nt and other such parameters
  // initialize FFT for bass, treble, etc.
  // initialize amplitude analyzer
  // loop the audio
}

function draw(){
  // level = getLevel()
  // switch case for the two animations
  // case 0: arc visualizer
  //   push()
  //   map angle to the amplitude level
  //   draw arcs and rotate
  //   pop()
  // case 1: blob visualizer
  //   initialize parameters like R, t, x, y
  //   if mouse pressed => add new blobs
  //   for each blob: update(); display(); wrap(); fizzle()
}

class Blob {
  constructor(){
    // define position, velocity, acceleration, lifespan,
    // diameter of blob, color
  }
  // update(){ change velocity, change position, update angles }
  // display(){ circle() }
  // wrap(){ if at an edge, wrap position back to 0 }
}

function keyPressed(){
  // SHIFT => saveCanvas()
  // ENTER => toggle() between animations and reset the canvas
}
