Escape the Turing Trap

Quincy
2 min read · Mar 24, 2023


Final Project for Stanford CS 470 / MUS 356

Demo video: https://drive.google.com/file/d/11OheuITcsemOG2dRgGkN424cLZWwMDvb/view?usp=share_link

Motivation:

Variability is omnipresent in classical and jazz music: composers turn motifs into themes, performance is considered artful because musicians have the agency to make musical decisions, and at a live show you might hear a different solo, a different arrangement, or find yourself sitting next to someone who just can’t stop coughing.

That’s not to say modern music is any worse, but when the only way we consume it is in static 3-minute sound bites on Spotify, that variability is lost. It’s hard to believe we’re hearing all the music that could be presented to us, so the goal of this project is to apply variability to music in a way that is relevant to the listener and blessed by the artist.

Build:

To accomplish this, I built a wearable that tracks the state of the listener and software that renders music based on connections the artist defines between sensor data and musical decisions.

The wearable is built around an Arduino Nano and communicates over radio with an nRF24L01 transceiver. It carries a UV sensor, a microphone, a 6-axis IMU, and a heart rate monitor. Together, these readings also let us make high-level annotations about the listener’s state, like what activity they are doing and whether they are inside or outside.
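For a sense of scale, here is a minimal Arduino-style sketch of the kind of transmit loop the wearable could run using the common RF24 library. The pin assignments, packet layout, and sensor hookups are assumptions for illustration, not the project’s actual firmware.

```cpp
// Minimal sketch, not the project firmware: packs raw sensor readings into a
// struct and streams it over an nRF24L01 using the RF24 library.
// Pin choices, sensor wiring, and field scaling are assumptions.
#include <SPI.h>
#include <RF24.h>

RF24 radio(9, 10);                       // CE, CSN pins (assumed wiring)
const byte address[6] = "WEAR1";         // arbitrary pipe address

struct SensorPacket {
  uint16_t uv;        // raw UV sensor reading
  uint16_t mic;       // microphone envelope level
  int16_t  accel[3];  // IMU acceleration, raw counts
  int16_t  gyro[3];   // IMU angular rate, raw counts
  uint8_t  bpm;       // heart rate estimate
};

void setup() {
  radio.begin();
  radio.openWritingPipe(address);
  radio.setPALevel(RF24_PA_LOW);
  radio.stopListening();                 // transmit-only on the wearable
}

void loop() {
  SensorPacket pkt;
  pkt.uv  = analogRead(A0);              // placeholder: UV sensor on A0
  pkt.mic = analogRead(A1);              // placeholder: mic envelope on A1
  // IMU and heart-rate reads would go through their own libraries; zeroed here.
  for (int i = 0; i < 3; i++) { pkt.accel[i] = 0; pkt.gyro[i] = 0; }
  pkt.bpm = 0;

  radio.write(&pkt, sizeof(pkt));        // fire-and-forget; the renderer listens
  delay(50);                             // ~20 Hz update rate (assumed)
}
```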

The renderer exposes several parameters the musician can control: audio effects like a filter and a compressor, the ability to substitute stems and MIDI tracks, and the ability to select different sample packs for a drum line. The musician can also tune knobs that control “sparsity” and “artificiality,” which in turn influence stem and MIDI selection. It’s written in C++ and uses pieces of a library I’ve developed over the last year or so.
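As a hypothetical illustration of what those artist-defined connections could look like in C++ (none of these type or function names come from the actual codebase), a mapping might simply be a small function from listener state to render parameters:

```cpp
// Hypothetical sketch of the kind of mapping the renderer could expose: an
// artist binds a sensor-derived feature to a musical parameter via a small rule.
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

struct ListenerState {
  float activityLevel;   // 0..1, derived from IMU + heart rate
  bool  outdoors;        // derived from the UV sensor
};

struct RenderParams {
  float filterCutoffHz = 8000.0f;
  float sparsity       = 0.5f;   // thinner stems / fewer hits as it rises
  float artificiality  = 0.5f;   // more synthetic stems / samples as it rises
  std::string drumSamplePack = "acoustic";
};

// An artist-defined connection: reads the listener state, nudges a parameter.
using Mapping = std::function<void(const ListenerState&, RenderParams&)>;

int main() {
  std::vector<Mapping> mappings = {
    // Busier listener -> denser music (lower sparsity).
    [](const ListenerState& s, RenderParams& p) {
      p.sparsity = std::clamp(1.0f - s.activityLevel, 0.0f, 1.0f);
    },
    // Outdoors -> brighter filter and an acoustic drum pack.
    [](const ListenerState& s, RenderParams& p) {
      p.filterCutoffHz = s.outdoors ? 12000.0f : 4000.0f;
      p.drumSamplePack = s.outdoors ? "acoustic" : "electronic";
    },
  };

  ListenerState state{0.8f, true};   // e.g. jogging outside
  RenderParams params;
  for (auto& m : mappings) m(state, params);   // apply the artist's rules
  return 0;
}
```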

AI is not a massive feature here; the goal is not to automate human music production or consumption. All we used AI for was connecting the high-level musical annotations (sparsity and artificiality) to concrete musical parameters, like how many hits per measure a drum line at that level would have, or the MFCCs of a sample.
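For example, one minimal way such a mapping could be realized, assuming precomputed MFCC vectors per sample and a simple nearest-neighbor rule (not necessarily what the project does):

```cpp
// Illustrative only: turn an annotation into a concrete sample choice by
// nearest-neighbor search over precomputed MFCC vectors, and turn sparsity
// into a hit count. The actual project may map these annotations differently.
#include <algorithm>
#include <cmath>
#include <limits>
#include <string>
#include <vector>

struct Sample {
  std::string name;
  std::vector<float> mfcc;   // precomputed MFCC summary for the sample
};

// Euclidean distance between two MFCC vectors of equal length.
static float mfccDistance(const std::vector<float>& a, const std::vector<float>& b) {
  float d = 0.0f;
  for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
  return std::sqrt(d);
}

// Pick the sample whose MFCCs best match a target timbre derived from a knob.
// Assumes the bank is non-empty.
const Sample& pickSample(const std::vector<Sample>& bank,
                         const std::vector<float>& target) {
  size_t best = 0;
  float bestDist = std::numeric_limits<float>::max();
  for (size_t i = 0; i < bank.size(); ++i) {
    float d = mfccDistance(bank[i].mfcc, target);
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return bank[best];
}

// Simple rule for drum density: higher sparsity -> fewer hits per measure.
int hitsPerMeasure(float sparsity) {
  return std::max(1, static_cast<int>(std::lround(16 * (1.0f - sparsity))));
}
```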

Next Steps:

The goal here is a creative tool, and unfortunately I haven’t accomplished that yet. The infrastructure to render music live exists, but the core of it is software that lets creators “flow” as they make connections between the listener and their music. I’m excited to keep working on this and to see if I can get people excited to use it.

Code can be found here (Stanford login): https://drive.google.com/drive/folders/1jUKDdFzRcuiqa11luTcVVhZVnBQ3Groj?usp=share_link

Acknowledgements:

Big time shout out to Scott Green (Megaphonix) for making this song and sharing stems with me.

Thank you to Ge Wang (prof) and Yikai Li (TA) for designing and running this class. What a great experience :)
