Generating More of My Favorite Aphex Twin Track

Have you ever heard a song you liked so much you wished it would last forever?

Take a moment to listen if you’ve never heard this before.

Musical Structure

Allow me to explain the relatively simple structure of aisatsana. If you’re not familiar with music theory, I’ll do my best to define the terms I’ll be using and avoid any that aren’t necessary. Try not to get hung up on the vocabulary though.

Every 16 beats, play a 16 beat phrase

I think about this algorithm in two parts (sketched in code below):

  1. Play a 16 beat phrase.
  2. Wait until 16 beats have passed, then repeat step 1.
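
As a rough illustration of those two parts, here's a minimal TypeScript sketch. The 102 BPM tempo comes from the track itself; `playPhrase` is a hypothetical placeholder for whatever actually produces sound.

```typescript
// Minimal sketch of the two-part algorithm: play a 16 beat phrase,
// then play another once 16 beats have passed.
// `playPhrase` is a hypothetical placeholder, not the project's code.

const BEATS_PER_MINUTE = 102; // aisatsana's tempo
const MS_PER_BEAT = 60_000 / BEATS_PER_MINUTE;
const PHRASE_LENGTH_IN_BEATS = 16;

function playPhrase(): void {
  console.log('playing a 16 beat phrase');
}

function loop(): void {
  playPhrase(); // part 1: play a 16 beat phrase
  // part 2: wait 16 beats, then do it again
  setTimeout(loop, PHRASE_LENGTH_IN_BEATS * MS_PER_BEAT);
}

loop();
```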

An Algorithm For Writing Music

One totally valid strategy for making aisatsana last forever would be to separate the 32 phrases and write a program to select and play a phrase every 16 beats. The result would probably be nicely varied, and no doubt enjoyable to listen to for longer than simply playing the original track over and over. However, I feel this approach would still be too repetitive. My brain would learn to recognize all 32 phrases and the output of such a system would become boring.
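
For comparison, that simpler shuffle approach might look something like the sketch below; the `phrases` array and `playPhrase` function are placeholders standing in for the 32 separated phrases and the playback code.

```typescript
// Sketch of the "shuffle" approach: every 16 beats, pick one of the 32
// phrases at random and play it. `phrases` and `playPhrase` are placeholders.

const MS_PER_16_BEATS = (60_000 / 102) * 16; // 102 BPM

const phrases: string[][] = []; // the 32 extracted phrases would go here
function playPhrase(phrase: string[]): void {
  console.log(`playing a phrase of ${phrase.length} notes`);
}

setInterval(() => {
  if (phrases.length === 0) return;
  const index = Math.floor(Math.random() * phrases.length);
  playPhrase(phrases[index]);
}, MS_PER_16_BEATS);
```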

Walking the Chain

  1. Begin at the start state.
  2. Record the current state. Select one of the transitions from the current state based on the probability of each transition, and follow that transition to the next state.
  3. Repeat step 2 until you’ve reached the end state (see the sketch below).
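
Here's a small sketch of that walk, assuming the chain is stored as a map from each state to its outgoing transitions. The state names and transition shape are placeholders of mine, not the representation the actual project uses.

```typescript
// Sketch of walking a Markov chain with weighted transitions.

interface Transition {
  to: string;          // the state this transition leads to
  probability: number; // probabilities out of a state sum to 1
}

type Chain = Record<string, Transition[]>;

// Pick one transition at random, weighted by its probability.
function selectNext(transitions: Transition[]): string {
  const roll = Math.random();
  let cumulative = 0;
  for (const transition of transitions) {
    cumulative += transition.probability;
    if (roll < cumulative) return transition.to;
  }
  // Fall back to the last transition if rounding left a tiny gap.
  return transitions[transitions.length - 1].to;
}

function walk(chain: Chain, start = 'start', end = 'end'): string[] {
  const visited: string[] = [];
  let current = start;
  while (current !== end) {
    visited.push(current);                // record the current state
    current = selectNext(chain[current]); // follow a weighted transition
  }
  return visited;
}

// A tiny example chain that can loop on state "A" before reaching the end.
const toyChain: Chain = {
  start: [{ to: 'A', probability: 1 }],
  A: [
    { to: 'A', probability: 0.5 },
    { to: 'end', probability: 0.5 },
  ],
};

console.log(walk(toyChain)); // e.g. [ 'start', 'A', 'A', 'A' ]
```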

Building the Chain

The first thing I needed was a digital representation of aisatsana to work with. While our little example above was easy to calculate and walk by hand, doing the same with aisatsana would take a very long time. Note that an audio file wouldn’t work; I needed the actual instructions for playing the piece, like which notes are played and when. This is precisely what MIDI files are for, and the very nice folks who run the MIDI Database have a MIDI file for aisatsana!
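
I won't claim this is how the project reads the file, but as one possible sketch, a MIDI parsing library such as @tonejs/midi can turn the file into note events with pitches and start times (the file name here is just a placeholder):

```typescript
// Sketch: parse a MIDI file into "which notes are played and when".
// Uses the @tonejs/midi package; the file name is a placeholder.
import { readFileSync } from 'fs';
import { Midi } from '@tonejs/midi';

const midi = new Midi(readFileSync('aisatsana.mid'));

// Flatten every track into simple { pitch, startTimeInSeconds } events.
const notes = midi.tracks.flatMap((track) =>
  track.notes.map((note) => ({
    pitch: note.name,              // e.g. "E4"
    startTimeInSeconds: note.time, // when the note begins
  }))
);

console.log(notes.slice(0, 5));
```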

  1. Count “half-beats” instead of beats.

102 beats per minute × 2 half-beats per beat = 204 half-beats per minute

Then, convert it to seconds per half-beat:

60 seconds per minute ÷ 204 half-beats per minute ≈ 0.29 seconds per half-beat

Now it’s easy to determine which half-beat each note occurs on. For example, any note played between 0 and 0.29 seconds can be attributed to the first half-beat, notes played between 0.29 and 0.58 seconds to the second half-beat, and so on. Just iterate through the array of notes and assign each one to a half-beat this way. In music production this process is called quantization.
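
Here's a rough sketch of that quantization step, reusing the note shape from the MIDI snippet above (again, the names are mine, not the project's):

```typescript
// Sketch of quantizing note start times to half-beats at 102 BPM.
// The NoteEvent shape matches the earlier MIDI sketch; names are placeholders.

const SECONDS_PER_HALF_BEAT = 60 / (102 * 2); // ≈ 0.29 seconds

interface NoteEvent {
  pitch: string;
  startTimeInSeconds: number;
}

function quantize(notes: NoteEvent[]): { pitch: string; halfBeat: number }[] {
  return notes.map((note) => ({
    pitch: note.pitch,
    // Notes starting between 0 and 0.29 s get index 0 (the first half-beat),
    // notes between 0.29 and 0.58 s get index 1, and so on.
    halfBeat: Math.floor(note.startTimeInSeconds / SECONDS_PER_HALF_BEAT),
  }));
}

// Example: a note starting at 0.45 seconds falls on the second half-beat.
console.log(quantize([{ pitch: 'E4', startTimeInSeconds: 0.45 }]));
// => [ { pitch: 'E4', halfBeat: 1 } ]
```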

Generative Music

You can listen to my implementation on generative.fm as long as you’re using a browser which supports the Web Audio API, which you probably are. On the site, select “aisatsana (generative remix),” and then press play. If you’d like to inspect the code for yourself, you can find it in a GitHub repository here.
