Creating Music From Numbers

How hard can it be?

Johan Belin
Dinahmoe

--

Creating music out of data has been a recurring request through the years. It is always a fun challenge to imagine what the stock market would sound like as an opera. Or global warming as Death Metal.

NYC rush hour, your emotions, Twitter activity, an image: all can be described as numbers, as data, and all can in theory be used to create music. So then we just need to map the data to musical parameters and we are done, right?

Let’s do a silly experiment. Here are some numbers:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55 …

What did you feel? Probably not very much unless you are into Fibonacci numbers.

And now, listen to this:

It’s ok, you can cry. Or hate it for that matter. Either way, you are proving my point: music is emotional.

It turns out that data and music are almost each other’s total opposites. Music is emotional, data is not. Music has form, tempo, dynamics and phrases; data has at best trends but most of the time is just chaos. Music created by mapping them one to one will just give you a headache (or a music grant, if you pretend to be of the stochastic school).

Bringing some order to the chaos

So we need to bring a little order to the chaos. The most basic properties of music are pitch and rhythm. We transform the data so that all notes sound good together and the notes play in sync. We could even make some pitches play more often than others to create a musical center, something that feels like a musical “home”.
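
As a rough sketch of what that transformation could look like (the scale, weights and helper name below are invented for illustration, not our actual production code), a data value can be snapped to a scale, with the choice of degree weighted towards the root so a tonal “home” emerges:

```ts
// Minimal sketch (all names invented): quantize a data value onto a scale,
// biased towards a tonal center so a musical "home" emerges.
const C_MAJOR = [0, 2, 4, 5, 7, 9, 11];       // semitone offsets from the root
const DEGREE_WEIGHTS = [4, 1, 2, 1, 3, 1, 1]; // favour the root and the fifth

function dataToMidiNote(normalized: number, rootMidi = 60): number {
  // `normalized` is assumed to already be mapped into [0, 1),
  // however you choose to derive that from your data.
  const total = DEGREE_WEIGHTS.reduce((a, b) => a + b, 0);
  let target = normalized * total;
  let degree = 0;
  for (let i = 0; i < DEGREE_WEIGHTS.length; i++) {
    target -= DEGREE_WEIGHTS[i];
    if (target <= 0) { degree = i; break; }
  }
  return rootMidi + C_MAJOR[degree];
}
```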

This is what we did in SXSW Social Soundscape (see below). It feels like music, random but enjoyable for a while, and it is clearly connected to the data.

But we are still pretty far from creating a data generated track that can compete with “My Heart Will Go On”, which won an Academy Award, several Grammys, and a Golden Globe Award. No one has actually been able to pinpoint exactly what makes some songs massive hits, but we can just look at the basics: we need a musical structure with verse and chorus, we need chord progressions that build to the climax, we need memorable melodies and of course some great lyrics.

Bum bum be-dum bum bum be-dum bum

Bum bum be-dum bum bum be-dum bum

Bum bum be-dum bum bum be-dum bum

Bum bum be-dum bum bum be-dum bum

But hey! Where did the data go?

The challenge here is that none of the things we add to make the music better is actually in the data; it has to be generated in other ways or added by human hands. The closer you get to “real” music, the further away you get from the data. It becomes a smaller and smaller part of the whole until we no longer feel any connection. And if we have lost the connection, why not save some effort and just make some great music the old-fashioned way, and if AdWeek asks… Nah, that would be cheating, wouldn’t it? Let’s do it the proper way, both data driven AND musical! Let’s look at some examples:

#SXSW Social Soundscape

Case study and screen captures here

This is an example where data is allowed to control the music (almost) directly. We split Austin into a large number of zones and attached a different instrument to each. We listened for the #SXSW hashtag, and whenever a tweet was detected we extracted its geolocation and triggered the instrument for that zone.

The incoming tweets did not play a note immediately but were adjusted so that the notes would come in sync. The notes depended on the activity but were also adjusted to a scale to make sure that it always sounded good. Every hour we made a change of key, pushing even minimalism to its edge.
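
A stripped-down sketch of that logic, assuming a Web Audio AudioContext; playNote and the exact scale, tempo and key-change scheme here are illustrative assumptions, not the production engine:

```ts
// Sketch: quantize incoming tweet events to the next beat and snap pitches
// to the current scale. playNote() is a hypothetical helper that triggers
// the instrument assigned to a zone at a given time.
declare function playNote(zone: number, midiNote: number, when: number): void;

const ctx = new AudioContext();
const BPM = 100;
const SECONDS_PER_BEAT = 60 / BPM;
const SCALE = [0, 2, 4, 7, 9]; // pentatonic, so overlapping notes always blend
let rootMidi = 60;             // changed once per hour (the key change)

function onTweet(zoneIndex: number) {
  // Don't play immediately: schedule on the next beat so all zones stay in sync.
  const now = ctx.currentTime;
  const nextBeat = Math.ceil(now / SECONDS_PER_BEAT) * SECONDS_PER_BEAT;

  // Derive a pitch from the zone, snapped to the scale.
  const degree = zoneIndex % SCALE.length;
  const octave = Math.floor(zoneIndex / SCALE.length) % 3;
  playNote(zoneIndex, rootMidi + octave * 12 + SCALE[degree], nextBeat);
}

// Every hour, move to a new key (up a fourth, wrapping within two octaves).
setInterval(() => { rootMidi = 48 + ((rootMidi - 48 + 5) % 24); }, 60 * 60 * 1000);
```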

This Exquisite Forest

Case study and screen captures here

A collaborative animation project created by Aaron Koblin and Chris Milk where users could build on others’ animations to create infinitely branching story lines. We were asked to build a music engine that let users create a unique music score for the animations. We did quite a lot of research and concluded that most generative music engines just sounded random and meaningless. We came up with a method that would give the user as much control as possible while making sure the result was always musical.

The solution we chose was to separate different aspects of the music into pieces that could be combined in totally new ways while still being musical. We split it up into:

  • instruments, e.g. guitar, piano, strings, percussion, each type with many different versions, e.g. we had nylon and steel stringed guitars, electric clean and distorted and so on.
  • patterns, ways of playing a specific instrument type, e.g. for guitars: back beat, arpeggio, strumming. The patterns were connected to an instrument group to make sure that all patterns would sound great on all instruments of that type. The patterns were all short, two to four bars, and played only one chord (explained below).
  • intensity, each pattern was made in two to five different intensity levels.
  • chord progressions, we created a lot of chord progressions: happy, sad and everything in between. The patterns were then transposed according to the chords and scale.
  • key, randomized for each new composition

The number of variations is close to infinite; some are great, some are …odd? But since there is a musical form, a chord progression, it always sounds like real music. You can try it yourself here.
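
In pseudo-TypeScript, the combinatorial idea looks roughly like this (the data shapes and helper names are invented for illustration, not the engine we shipped):

```ts
// Sketch of the combinatorial model: a composition is one pick from each
// layer, glued together by a shared chord progression and a random key.
interface Pattern { name: string; bars: number }

interface Composition {
  key: number;                                  // randomized per composition
  progression: string[];                        // e.g. ['i', 'VI', 'III', 'VII']
  layers: { instrument: string; pattern: Pattern; intensity: number }[];
}

const pick = <T>(arr: T[]): T => arr[Math.floor(Math.random() * arr.length)];

function compose(
  instrumentTypes: string[],                    // e.g. ['guitar', 'piano', 'percussion']
  instruments: Record<string, string[]>,        // versions per type, e.g. nylon vs steel guitar
  patterns: Record<string, Pattern[]>,          // playing styles per type, e.g. arpeggio, strumming
  progressions: string[][]
): Composition {
  const key = Math.floor(Math.random() * 12);   // new key for every composition
  const progression = pick(progressions);
  const layers = instrumentTypes.map(type => ({
    instrument: pick(instruments[type]),
    pattern: pick(patterns[type]),               // any pattern fits any instrument of its type
    intensity: 1 + Math.floor(Math.random() * 5) // simplification of the 2-5 recorded levels
  }));
  return { key, progression, layers };
}
```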

JAM With Chrome

Case study and screen captures here

It started with a simple question: “Is it possible to create a band in the browser?” JAM with Chrome was a full-fledged music workstation in a browser. You pick an instrument, invite your friends and just start jamming. The band members can be anywhere in the world, and everything plays back in perfect sync. And it sounds great regardless of your musical skills.

JAM with Chrome is not data driven in the traditional sense, it is user controlled, which in theory means that the data makes much more sense. That is, until someone with no musical skill or respect for art in general gets hold of the app; randomness has never been so painfully random. Luckily that is the whole purpose of JAM with Chrome: it will generate music regardless of what you throw at it.

We did JAM almost in parallel with This Exquisite Forest but we were still able to include some learnings. For JAM we wanted to create instruments that were as realistic as possible. We recorded them meticulously with several intensity levels for each note. Guitars and basses were also recorded with different playing styles: long notes, short notes, muted and damped. Not only that, each string was sampled separately.
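
Under the hood, that kind of multisampled instrument boils down to a lookup keyed by playing style, string, note and intensity. A hypothetical sketch (the sample map layout below is an assumption for illustration, not how our engine actually stored it):

```ts
// Sketch: picking a sample from a multisampled instrument where every
// combination of playing style, string, note and intensity was recorded.
type Articulation = 'long' | 'short' | 'muted' | 'damped';

interface SampleRef { url: string }

// samples[articulation][stringIndex][midiNote] -> intensity layers, soft to loud
type SampleMap = Record<Articulation, SampleRef[][][]>;

function selectSample(
  map: SampleMap, articulation: Articulation,
  stringIndex: number, midiNote: number, velocity: number // velocity in 0..1
): SampleRef {
  const layers = map[articulation][stringIndex][midiNote];
  // Map velocity onto the closest recorded intensity layer.
  const layer = Math.min(layers.length - 1, Math.floor(velocity * layers.length));
  return layers[layer];
}
```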

We used a similar method for the patterns as in This Exquisite Forest: playing styles that were transposed according to a chord progression. There was one important difference: we now included what are called chord inversions. A specific chord, e.g. E minor, can be played in many different ways, with the root, third or fifth as the lowest note. By using different inversions of the chords we made the transpositions sound much more natural and musical.
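
A simple way to achieve that kind of smooth voice leading is to pick, for each chord, the inversion whose notes move the least from the previous voicing; a sketch of the idea (not our exact implementation):

```ts
// Sketch: choose the chord inversion closest to the previous voicing,
// so transposed patterns move smoothly instead of jumping around.
function inversions(chord: number[]): number[][] {
  // E.g. E minor [52, 55, 59] -> root position, first and second inversion.
  const result: number[][] = [];
  let voicing = [...chord];
  for (let i = 0; i < chord.length; i++) {
    result.push([...voicing]);
    voicing = [...voicing.slice(1), voicing[0] + 12]; // lowest note up an octave
  }
  return result;
}

function closestInversion(chord: number[], previous: number[]): number[] {
  const distance = (a: number[], b: number[]) =>
    a.reduce((sum, note, i) => sum + Math.abs(note - b[i]), 0);
  return inversions(chord).reduce((best, candidate) =>
    distance(candidate, previous) < distance(best, previous) ? candidate : best);
}
```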

In addition, the tempo could be changed, and you could add and adjust sound processing like reverb, distortion, wah-wah and more.

Ray-Ban #MADEOFMUSIC

Case study and screen captures here

Twitter again, but with a twist. In this case we analyzed the content, the actual text of the tweet, and created a personalized video based on that.

We analyzed the sentiment of the tweet and picked instruments, tempo, key and chord progression based on that analysis. Then we created melodies that were exactly the same length as the words and animated those into a dynamic typographic video of your tweet.

The first step was to establish the sentiment of the tweet. Anyone who has tried sentiment analysis knows that the accuracy is pretty low, even for services using machine learning. We developed our own logic for this that worked pretty well. From this we could discern whether the user was happy, frustrated, sad or angry, and also extract other keywords that we wanted to affect the music, e.g. speed, time of day, activities. We then mapped these emotions to musical parameters such as tempo, key, chord progression, timbre and intensity. As the final step we adjusted the melodies to the exact length of the words.
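
A heavily simplified sketch of that mapping step; the categories, values and progressions below are illustrative assumptions, not the mapping we actually shipped:

```ts
// Sketch: map a detected sentiment (plus keywords) to musical parameters.
type Sentiment = 'happy' | 'sad' | 'angry' | 'frustrated';

interface MusicParams {
  tempo: number;            // BPM
  mode: 'major' | 'minor';
  progression: string[];    // chord symbols relative to the key
  intensity: number;        // 0..1
}

const SENTIMENT_MAP: Record<Sentiment, MusicParams> = {
  happy:      { tempo: 128, mode: 'major', progression: ['I', 'V', 'vi', 'IV'],    intensity: 0.8 },
  sad:        { tempo: 70,  mode: 'minor', progression: ['i', 'VI', 'III', 'VII'], intensity: 0.4 },
  angry:      { tempo: 140, mode: 'minor', progression: ['i', 'bII', 'i', 'v'],    intensity: 1.0 },
  frustrated: { tempo: 110, mode: 'minor', progression: ['i', 'iv', 'i', 'v'],     intensity: 0.7 },
};

function paramsForTweet(sentiment: Sentiment, keywords: string[]): MusicParams {
  const params = { ...SENTIMENT_MAP[sentiment] };
  if (keywords.includes('speed')) params.tempo += 10; // keywords nudge the base values
  return params;
}
```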

I have skipped the details of how we did the sentiment analysis and mapping since it is out of scope for this article, but if someone is seriously interested, just let me know in the comments (seriously?!).

If you are interested to learn more about computer generated music in general, please have a look at this excellent article by Kyle McDonald. It covers the history and current state of things including deep learning in all its flavors, complete with sound samples too.

Data generated music is a huge topic, and as usual I am just scratching the surface. Please let me know what you think in the comments: what do you agree or disagree with, what am I missing? And if there is anything specific you want me to write about in the future, just let me know!

Before you go

Clap 👏 👏 👏 5, 15, 50 times if you enjoyed what you read!
Comment 💬 I’d love to hear what you think!
Follow me Johan Belin here on Medium, or
Subscribe to our newsletter by clicking here
