Pharm Phresh

BRYAN WONG
Creative Labs
Dec 21, 2017

My Creative Labs team created an online MIDI keyboard with sound-responsive fruits this quarter. The user can lay down beats, put them into loops, and (still a to-do) export the beats or upload them to SoundCloud. The fruits get the creative juices flowing. Link to the site: fruits.dance

Here are a couple screenshots of the final product:

And some mobile screens:

Before the keyboard…

We originally wanted to make a landscape that responds to music and take the user through it, like an interactive music video. Five weeks into the project, we realized it was too hard to place fruits and other elements in a way that made the experience feel cohesive rather than awkward. So we decided to make a launchpad instead. Here are some prototypes of the ‘music landscape’ concept:

We generated these landscapes procedurally using Perlin noise and some trig functions. Thus, the environment is unique to each user’s experience.
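For a rough idea of what that looked like under the hood, here’s a minimal sketch of the approach (using the simplex-noise package as a stand-in for our Perlin implementation; the scale factors are made up):

```javascript
import * as THREE from 'three';
import { createNoise2D } from 'simplex-noise';

// Displace a plane's vertices with 2D noise plus a sine term, so every
// page load gets a slightly different landscape.
const noise2D = createNoise2D();
const geometry = new THREE.PlaneGeometry(100, 100, 128, 128);
const position = geometry.attributes.position;

for (let i = 0; i < position.count; i++) {
  const x = position.getX(i);
  const y = position.getY(i);
  // Low-frequency noise for the hills, a sine for gentle rolling ridges.
  const height = 6 * noise2D(x * 0.05, y * 0.05) + 2 * Math.sin(x * 0.2);
  position.setZ(i, height);
}
geometry.computeVertexNormals();

const terrain = new THREE.Mesh(
  geometry,
  new THREE.MeshStandardMaterial({ color: 0x88cc88, flatShading: true })
);
terrain.rotation.x = -Math.PI / 2; // lay the plane flat under the camera
```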

Making the launchpad

When we started making the launchpad, we wanted to line up a bunch of boxes and have them morph into fruits when the user pressed the corresponding key. We did this using morph targets, but the morphed mesh ended up looking weird since it used the original mesh’s vertex indices for the draw calls. We tried updating the indices on the GPU a frame after the morph, but the animation still looked awkward. It was also a pain to make sure every vertex in a geometry had a morph target. Here’s a screenshot of what the boxes looked like:

Another complication we ran into was figuring out how to update the UV coordinates for the morphed mesh, since each box has a texture applied to it.
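For reference, here’s roughly how a morph target gets wired up in Three.js (a minimal sketch with the BufferGeometry API, not our original code; the key constraint is that the morph target needs the same vertex count and ordering as the base mesh, which is exactly where our box-to-fruit morphs fell apart):

```javascript
import * as THREE from 'three';

// Base mesh: a unit cube.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const base = geometry.attributes.position;

// Morph target: same vertex count/order as the base, just pushed toward a sphere.
const morphed = base.clone();
const v = new THREE.Vector3();
for (let i = 0; i < morphed.count; i++) {
  v.fromBufferAttribute(base, i).normalize().multiplyScalar(0.75);
  morphed.setXYZ(i, v.x, v.y, v.z);
}
geometry.morphAttributes.position = [morphed];

// (Older Three.js versions also needed `morphTargets: true` on the material.)
const mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());

// 0 = original cube, 1 = fully morphed; tween this when a key is pressed.
mesh.morphTargetInfluences[0] = 0.5;
```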

We decided to do something simpler and just have bouncing fruits on top of letters instead. Here’s what our initial keyboard looked like:

From here, we manually figured out the right materials/properties to use for each fruit and assigned them accordingly.
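In practice that ended up as a small lookup of hand-tuned material settings per fruit, something along these lines (the fruit names and numbers here are illustrative, not our actual values):

```javascript
import * as THREE from 'three';

// Hypothetical per-fruit material settings; the real ones were tuned by eye.
const FRUIT_MATERIALS = {
  orange:     new THREE.MeshStandardMaterial({ color: 0xff8c00, roughness: 0.8 }),
  apple:      new THREE.MeshStandardMaterial({ color: 0xcc2222, roughness: 0.4 }),
  watermelon: new THREE.MeshStandardMaterial({ color: 0x2e8b57, roughness: 0.6 })
};

function materialFor(fruitName) {
  return FRUIT_MATERIALS[fruitName] || new THREE.MeshStandardMaterial();
}
```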

So that’s a general outline of our project. Here are some interesting features we implemented that I think are worth writing about:

Recording loops

Real launchpads have the ability to record loops, so the DJ can make live beats/sets with them. Originally we weren’t going to implement this… but we did.

We used the Web Audio API to send audio data from buffers and record it in a media stream. I didn’t know that a library for recording already existed (and Google didn’t return anything when I looked the first time), so I tried to make our own recorder. I basically loaded a buffer from each .mp3 file and created a destination node (a media stream destination) for the data to be sent to. Every time the user presses a key, the app creates a new audio node (apparently you can’t reuse source nodes in Web Audio), reads data from the corresponding buffer and sends it to the destination.
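In code, that first attempt looks roughly like this (a simplified sketch, not our exact implementation; the sample URL and key handling are placeholders):

```javascript
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Destination node whose .stream can later be handed to a recorder.
const recordingDest = audioCtx.createMediaStreamDestination();

// Fetch and decode one .mp3 into an AudioBuffer.
async function loadSample(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  return audioCtx.decodeAudioData(arrayBuffer);
}

// Source nodes are one-shot, so we create a fresh one for every key press.
function playSample(buffer) {
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination); // speakers
  source.connect(recordingDest);        // recording stream
  source.start();
}

// Usage (hypothetical sample path):
// const kick = await loadSample('sounds/kick.mp3');
// document.addEventListener('keydown', () => playSample(kick));
```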

This worked at first, but we quickly realized that the timing of the sounds was off, since the data was being sent out of order. Looking back, it would’ve been smarter to send the output of each audio node to another buffer and then send the output of that buffer to the destination, to ensure that the sounds get sent in the right order.

At this point I gave up and used Recorder.js, which somehow fixed the mistiming issue. The rest of the implementation remained the same, except I used Recorder.js’s record method instead of the one provided by the Web Audio API.

Recorder.js also let us easily create a blob from the recorded audio data, which we generated a URL for and set as the src of a regular HTML audio element. From there, we can build an array of audio DOM objects that point to those URLs and play them back at any time.
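Here’s roughly what that looks like with Recorder.js’s record/stop/exportWAV methods (a sketch; here the recorder taps a master gain node that every sample connects to, which may differ slightly from our exact routing):

```javascript
// All sample sources connect to this master gain; Recorder.js taps it.
const masterGain = audioCtx.createGain();
masterGain.connect(audioCtx.destination);

const recorder = new Recorder(masterGain);
const loops = []; // one <audio> element per recorded loop

function startLoop() {
  recorder.record();
}

function stopLoop() {
  recorder.stop();
  recorder.exportWAV((blob) => {
    const audio = new Audio(URL.createObjectURL(blob)); // blob URL as the src
    loops.push(audio);
    recorder.clear(); // reset the recorder for the next loop
  });
}

// Play every recorded loop back at once.
function playAll() {
  loops.forEach((audio) => audio.play());
}
```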

Post processing using custom shaders

After we made the keyboard functional, we decided to spice up the visuals a little bit. I had recently written two fragment shaders that I wanted to try out: a pixelation shader and a ‘wavify’ shader. Here are some screenshots:

Here’s the code for both, and an explanation. The shader programs are wrapped in JSON-style objects for compatibility with Three.js, which parses the strings and creates the programs for you.
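First, the pixelation shader. This is a sketch reconstructed from the explanation below, so the default values may not match our originals, but the structure and the tDiffuse/amount/steps uniforms are the same idea:

```javascript
// Pixelation shader, wrapped in the object format Three.js's ShaderPass expects.
const PixelShader = {
  uniforms: {
    tDiffuse: { value: null },  // the rendered scene, filled in by the composer
    amount:   { value: 256.0 }, // number of "pixels" across the screen
    steps:    { value: 2.0 }    // step size: 2.0 => each pixel is a 2x2 square
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    uniform float amount;
    uniform float steps;
    varying vec2 vUv;
    void main() {
      // Size of one enlarged "pixel" in texture space, per axis.
      float dx = steps * (1.0 / amount);
      float dy = steps * (1.0 / amount);
      // Snap every coordinate inside a square down to that square's corner.
      vec2 coord = vec2(dx * floor(vUv.x / dx), dy * floor(vUv.y / dy));
      gl_FragColor = texture2D(tDiffuse, coord);
    }
  `
};
```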

The important things to note here are the uniforms “tDiffuse”, “amount” and “steps.” A uniform is basically a variable you pass into the shader program. tDiffuse is the texture we’re manipulating (the one we apply the post-processing effect to), amount is the number of pixels we want on the screen in each direction (x and y), and steps is the step size for each pixel (remember that pixels are just squares; a step size of 2 would make each pixel a 2x2 square).

So just assume that the vertex shader does what it’s supposed to do. The important thing here is the fragment shader. Basically, we pass our regular rendered scene into the program as tDiffuse. At this point, we have a texture that is our original scene and contains many colors. Note that tDiffuse isn’t actually rendered to the screen; it’s just being passed into the shader program. We can do some math to manipulate where on the screen each color in the texture actually gets rendered.

We basically divide the screen into a series of squares, taking the step size and multiplying it by 1/(the total number of pixels). We do this in both the x and y directions (dx and dy). For example, with amount = 256 and steps = 2, dx = 2/256 ≈ 0.008, so each “pixel” spans a little under 1% of the screen width. On a regular render, each pixel is so small we can’t see its true shape: a square. We’re basically redefining screen space so that each “pixel” is noticeably large; in other words, we’re redefining what dx (the smallest measurable unit of change in the x direction) means, and likewise dy.

From there, we get a coordinate from the texture and render it to the screen. This is what vec2 coord is. It’s a two-dimensional vector that points to a certain point on the texture tDiffuse. The math basically says “for coordinates (1, y) to (1.999, y) in the x direction, just grab the color at (1, y).” We do this in the y direction as well.

Finally, we grab the color from the texture using the function texture2D and render it to the screen. We can control how pixelated it gets using “amount” and “steps.” You can play around with a pixelation demo here.

The wave shader follows similar logic in that it’s just manipulating how we grab points from the texture. We displace the original texture coordinate with some noise (basically adding a controlled factor of randomness to it) and then apply a sin function over time to create a wave effect. We use the uniform “magnitude” to control how wavy it gets and “time” to feed time into the sin function, which animates the render.
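Here’s the wave shader in the same style (again a sketch reconstructed from the description above, with a cheap hash function standing in for whatever noise we actually used), followed by how such passes typically get chained onto the render with Three.js’s EffectComposer:

```javascript
const WaveShader = {
  uniforms: {
    tDiffuse:  { value: null },
    magnitude: { value: 0.02 }, // how far each coordinate gets displaced
    time:      { value: 0.0 }   // advanced every frame to animate the wave
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    uniform float magnitude;
    uniform float time;
    varying vec2 vUv;

    // Cheap pseudo-random hash, standing in for a proper noise function.
    float rand(vec2 co) {
      return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
    }

    void main() {
      // Sine waves over time, plus a little noise so the ripples aren't too clean.
      vec2 offset = vec2(
        sin(vUv.y * 20.0 + time) * magnitude,
        sin(vUv.x * 20.0 + time) * magnitude
      );
      offset += (rand(vUv) - 0.5) * magnitude * 0.2;
      gl_FragColor = texture2D(tDiffuse, vUv + offset);
    }
  `
};

// Chain the passes onto the normal render (EffectComposer, RenderPass and
// ShaderPass ship with Three.js's examples).
const composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));
composer.addPass(new THREE.ShaderPass(PixelShader));
const wavePass = new THREE.ShaderPass(WaveShader);
composer.addPass(wavePass);

function animate(t) {
  requestAnimationFrame(animate);
  wavePass.uniforms.time.value = t * 0.001; // feed time into the sin function
  composer.render();
}
requestAnimationFrame(animate);
```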

WebGL Report

To optimize for older devices, we run some checks on the user’s browser to see which version of WebGL it supports, the max number of texture units, and so on. We take that information into account across the rest of the app and remove features as needed. Our implementation is based on Analytical Graphics’ WebGL Report.
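A simplified version of that check looks something like this (the 4096 threshold below is just an example cutoff, not necessarily the one we shipped):

```javascript
// Probe the browser's WebGL support so features can be scaled back on older devices.
function getWebGLInfo() {
  const canvas = document.createElement('canvas');
  const gl2 = canvas.getContext('webgl2');
  const gl = gl2 || canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) return null; // no WebGL at all

  return {
    version: gl2 ? 2 : 1,
    maxTextureUnits: gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS),
    maxTextureSize: gl.getParameter(gl.MAX_TEXTURE_SIZE),
    renderer: gl.getParameter(gl.RENDERER)
  };
}

// Example: only enable the post-processing passes on capable hardware.
const info = getWebGLInfo();
const enablePostProcessing = !!info && info.maxTextureSize >= 4096;
```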

That sums up our project. Created by Bryan Wong, Chang Liu, April Ding, Megan Shi, Lauren Yeung and Bryan Ong. Thanks for reading!
