Last month, I was super inspired by Leon Fedden’s post comparing dimensionality reduction techniques like UMAP and t-SNE on features such as the STFT (short-time Fourier transform) and WaveNet features. The post came out right as Avneesh Sarwate and I were wrapping up a final project for a course on Audio Content Analysis. Our project started with the premise of using techniques from Kyle McDonald’s Infinite Drum Machine to solve a producer’s worst nightmare: endless scrolling through samples!
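A hypothetical sketch of that pipeline: extract one magnitude-STFT feature vector per audio clip, then project the collection to 2-D for browsing. PCA (via SVD) stands in for UMAP/t-SNE so the example needs only NumPy; all function names are my own, not from the project.

```python
import numpy as np

def stft_features(signal, frame_len=256, hop=128):
    """Mean magnitude spectrum over windowed frames: one vector per clip."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    mags = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    return mags.mean(axis=0)

def project_2d(feature_matrix):
    """PCA to 2-D -- a simple stand-in for UMAP or t-SNE."""
    centered = feature_matrix - feature_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Fake "samples" stand in for a folder of drum hits.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(2048) for _ in range(10)]
feats = np.stack([stft_features(c) for c in clips])
coords = project_2d(feats)   # one (x, y) point per sample to scatter-plot
```

Each clip becomes one point in a 2-D map, so similar-sounding samples cluster together instead of hiding in an alphabetical file list.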
GestureRNN: Learning musical gestures on an interactive XY pad
This project uses machine learning and deep learning to create a new kind of musical instrument built on the Roli Lightpad. First, I use Wekinator’s machine learning capabilities to continuously interpolate between sonic parameters in a custom-designed tension synthesizer in Ableton Live. More importantly, I train a three-layer LSTM, called GestureRNN, that learns to generate gestures and swipes across the surface of the Lightpad. Given a user’s seed gesture, GestureRNN regresses continuous values of (x, y) coordinates and instantaneous pressure (p) in real time.
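The seed-then-generate loop above can be sketched as follows. A real GestureRNN is a three-layer LSTM; here `step` is a toy stand-in regressor so the control flow (prime on the seed gesture, then free-run autoregressively) stays runnable with NumPy alone. All names and numbers are illustrative, not from the project.

```python
import numpy as np

def step(point, state):
    """Stand-in for one LSTM step: next (x, y, p) plus updated state."""
    state = 0.9 * state + 0.1 * point       # toy recurrent update
    nxt = np.clip(state + 0.01, 0.0, 1.0)   # keep coords/pressure in [0, 1]
    return nxt, state

def generate(seed, n_steps=64):
    """Prime on the user's seed strokes, then continue the gesture."""
    state = np.zeros(3)
    for point in seed:                       # feed the seed gesture in
        _, state = step(np.asarray(point, dtype=float), state)
    out, point = [], np.asarray(seed[-1], dtype=float)
    for _ in range(n_steps):                 # autoregressive continuation:
        point, state = step(point, state)    # each output feeds the next step
        out.append(point)
    return np.stack(out)                     # (n_steps, 3): x, y, pressure

seed = [(0.1, 0.1, 0.5), (0.2, 0.15, 0.6), (0.3, 0.2, 0.7)]
gesture = generate(seed)                     # stream these to the Lightpad
```

In the real instrument each generated (x, y, p) triple would be sent back to the Lightpad surface and synthesizer every tick, so the model "plays" a continuation of whatever swipe the user started.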
Last week, team “mSynth” had the chance to demo our first-place hack at Outside Lands 2017, the largest music festival in San Francisco!
Inside a private “Outside Hacks” speakeasy at the festival, free-flowing beer and an LED grand piano awaited festival-goers who had downloaded the winning apps. It was in this blacked-out, disco-ball-and-LED tent (thanks spot.com!) that our team got to test our crowd-sourced music generation system, “mSynth,” in the wild. Festival-goers could control the sound being generated by a neural network in real time by tilting their phones. …
Science. Design. Music. Machine Learning.