Machine Learning and Music

Machine Learning has been growing at an incredible rate over the last few years. This is due, in part, to the simultaneous and complementary growth of Big Data. Analyzing such massive amounts of information has required new methods for interpreting and associating data. Neural networks, among other methods, offer exciting solutions to this challenge and have brought a new level of interest to the field. Given the direction the tech industry as a whole is moving, machine learning appears to be here to stay, and will only grow.

For this reason, it’s not a big surprise that in the last year alone, I have seen more movies about Artificial Intelligence than in all the years before. Most of these stories (e.g. Her, Transcendence, Ex Machina, Black Mirror) center on the concept of the Singularity, the point where artificial intelligence begins to exceed human intelligence and we are left wondering where that leaves us. Even though I find that thought a bit haunting, I really enjoyed almost all of them. But I am an optimist when it comes to the rise of AI, especially because I think it will unlock entirely new mediums for us to express ourselves. Let me share my favorite example.

As a classically trained guitarist, I have spent years teaching my body to play music. Perhaps that sounds like a strange way to describe the study of an instrument? But it’s true! Especially when we are first beginning. If you have ever studied an instrument, you are familiar with the self-frustration that comes from a lack of execution: when your body betrays you by failing to do what you want it to do. Over time, and with lots of practice, we eventually improve, and our bodies slowly begin to do what we ask of them. It is a long process of refining the interface that exists between us and our instrument of choice.

Now imagine, instead of you learning your instrument, iterating over and over for years until you can produce the results you want, the instrument learns you.

Enter Rebecca Fiebrink, professor in the computer science department at Princeton University and Editorial Consultant for News and Announcements at Computer Music Journal. She also teaches an online class, Machine Learning for Artists & Musicians, built around an exciting piece of machine learning technology she created called Wekinator. This open source software learns to associate inputs, such as hand gestures, with sounds. Or, more generally speaking:

“Wekinator builds models using supervised machine learning algorithms. Instead of creating the model function by writing code, you create models by providing training examples of inputs and the outputs that should correspond to them. A learning algorithm builds a model from the training examples.”
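To make that idea concrete, here is a minimal sketch of a train-by-example mapping in Python, using scikit-learn rather than Wekinator itself. The gesture coordinates, sound parameters, and values below are invented stand-ins for illustration; a real setup would stream live sensor input (Wekinator typically does this over OSC messages) instead of using hard-coded arrays.

```python
# Toy illustration of the supervised-mapping idea: learn to turn input
# features (e.g. a hand position from a sensor) into sound-control outputs
# (e.g. pitch and loudness) from a handful of user-provided examples.
# All data and parameter choices here are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training examples: each row is a "gesture" (x, y hand position, 0..1),
# paired with the sound parameters the performer wants for that gesture.
gestures = np.array([
    [0.1, 0.2],   # hand low-left    -> low, quiet note
    [0.9, 0.2],   # hand low-right   -> high, quiet note
    [0.1, 0.9],   # hand upper-left  -> low, loud note
    [0.9, 0.9],   # hand upper-right -> high, loud note
])
sound_params = np.array([
    [220.0, 0.2],  # [frequency in Hz, amplitude 0..1]
    [880.0, 0.2],
    [220.0, 0.9],
    [880.0, 0.9],
])

# "Training" is just fitting a model to those examples -- no code is written
# to describe the mapping itself.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(gestures, sound_params)

# At performance time, each new gesture is converted into sound parameters,
# which a synthesizer could then render in real time.
new_gesture = np.array([[0.5, 0.5]])
freq, amp = model.predict(new_gesture)[0]
print(f"frequency ~ {freq:.0f} Hz, amplitude ~ {amp:.2f}")
```

Wekinator offers several model types and handles the plumbing of live input and output for you; the sketch above only captures the core workflow of teaching a mapping by example rather than by programming it.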

This field of applied machine learning and human-computer interaction (HCI) has fantastic implications. With this kind of technology, anybody can create any kind of music with the fluency that normally takes years of practicing an instrument, in a fraction of that time. The creative medium will no longer need an instrument for your body to interact with; your body can literally be the instrument! This direct connection between your body and the musical output could even offer more sensitivity than is possible with a physical instrument. Instrumentalists could finally combine the versatility of polyphony and new timbres with the control and expressiveness of the human voice. Furthermore, this technology would allow anyone, regardless of coordination or physical ability, to express themselves by playing music.

As we move forward and our technologies develop, this kind of integration between human and machine is what excites me most, and it is why I remain optimistic that humanity will not become obsolete. We will evolve with our technology.