Mixing AI with Music

In our class, AI and Culture, we have had many discussions and readings on how AI can affect our personal lives, in what ways we are and will be using it, and how it can affect us both mechanically and emotionally. I wanted to connect AI to the more emotional and creative side of things, something like music, which we all love. Every piece of music embodies years of accumulated experience and training, and carries a history that people cherish. I wanted to play around with neural networks to see whether they have, or could someday have, the same ability, and what my experience would be like creating music with their help.

The first AI music was composed at the University of Illinois at Urbana-Champaign between 1957 and 1958, where researchers L.A. Hiller and L.M. Isaacson programmed the ILLIAC I computer to generate the notes and chords of the Illiac Suite. The experiment was met with a lot of hostility from the music community, who saw art being reduced to logic and repetition. I have linked their music here. Their experiment was quite revolutionary for its time; though the music was played by humans, its notes and chords were computer-generated. Today we have projects like Google's Magenta for creating compositions, and AIVA (Artificial Intelligence Virtual Artist), an electronic composer recognized by the music society SACEM itself. In his TED talk about AIVA, Pierre Barreau talks about it creating original compositions someday. For now, though, we can safely say that AI-generated music is nowhere near comparable with music created by human artists.

For my project I worked with the LSTM deep learning architecture in Keras and open-source piano datasets from Magenta's MAESTRO collection. My code was built upon the project listed here, and my datasets were taken from Magenta's open-source data. I experimented with two different architectures: (LSTM, LSTM) and (Bidirectional LSTM, Attention). We train the neural network on a large amount of data so that it recognizes the patterns in the notes of a music composition, until it can predict those notes and chords itself. My GitHub link for the code is here.
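For reference, here is a minimal sketch of the first, stacked-LSTM variant in Keras. The layer sizes, sequence length, and vocabulary size are illustrative assumptions, not the exact values from my training runs:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

SEQUENCE_LENGTH = 100  # notes of context fed to the network (assumed)
N_VOCAB = 300          # distinct notes/chords in the corpus (assumed)

model = Sequential([
    # First LSTM returns the full sequence so the second LSTM can consume it
    LSTM(256, input_shape=(SEQUENCE_LENGTH, 1), return_sequences=True),
    Dropout(0.3),
    LSTM(256),
    Dropout(0.3),
    # Softmax over the note/chord vocabulary: predict the next note
    Dense(N_VOCAB, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
model.summary()
```

The (Bidirectional LSTM, Attention) variant replaces the first layer with a Bidirectional wrapper around the LSTM and adds an attention mechanism over its outputs, along the lines of the project linked above.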

Here are three compositions, each created by a different model, along with one of the original pieces that inspired them:

Original Sample Music
Music by Neural Networks — Model 1
Music by Neural Networks — Model 2
Music by Neural Networks — Model 3

We can see that the AI generally has difficulty with rhythm; rather than creating a melody, it churns out a lot of repetitive notes, which can improve with more training iterations. As we design more complex architectures and gather better datasets, can we create music comparable to human compositions? The entire process of training the neural nets and then creating music was a frustrating one because of the number of hours training took. I was initially unable to create music files, had issues with the architecture, and had to eliminate some data, such as the offset values, when predicting music. This highlighted how little control one has over the computer's output despite following the rules by the book. Since neural networks are still relatively new territory, it will take a long time before everyone starts playing around with their own architectures and training data.
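To make the generation step concrete, here is a minimal sketch of sampling the trained model and writing the result to MIDI with music21. The names (`int_to_note`, the seed, the half-beat step) are illustrative assumptions; the fixed step stands in for the offset values I had to discard, and the greedy argmax choice is one common source of the repetitive loops mentioned above:

```python
import numpy as np
from music21 import note, chord, stream

def generate_notes(model, seed, int_to_note, n_notes=200):
    """Autoregressively predict n_notes, feeding each prediction back in."""
    n_vocab = len(int_to_note)
    pattern = list(seed)  # seed: a list of integer note indices
    output = []
    for _ in range(n_notes):
        x = np.reshape(pattern, (1, len(pattern), 1)) / float(n_vocab)
        probs = model.predict(x, verbose=0)[0]
        index = int(np.argmax(probs))  # greedy pick; a common source of loops
        output.append(int_to_note[index])
        pattern = pattern[1:] + [index]
    return output

def to_midi(note_names, path="output.mid", step=0.5):
    """Write predicted note/chord names to MIDI at a fixed half-beat step."""
    offset = 0.0
    elements = []
    for name in note_names:
        if "." in name or name.isdigit():  # chord as dot-joined pitch classes
            element = chord.Chord([note.Note(int(p)) for p in name.split(".")])
        else:
            element = note.Note(name)
        element.offset = offset
        elements.append(element)
        offset += step  # the fixed step replaces the discarded offsets
    stream.Stream(elements).write("midi", fp=path)
```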

Also, this could seem less fun to people who are attached to the process of creating music, but AI can definitely make a good companion and ally in helping people learn music or try out different compositions. People and computers can team up; the invention lies in how many different models you can experiment with (given the patience) and how you can utilize various samples of music. In the end, AI is only as good as the data it gets. We could feed in the music of our favorite past bands and help neural nets learn from it; it would be a fun experiment to see those bands "release" new music through the lens of neural networks and deep learning. Although it was a frustrating process, the idea of being able to make music with no formal musical training was interesting. Just as YouTube and Instagram gave people a platform to explore content creation in many ways, AI music interfaces could provide that platform, leading to more exploratory music.

Many musicians could also feed in their own personal music and see what interesting compositions a neural network comes up with; it could help them with their creative thought process.

The question here, though, is whether we would accept an art form not created by humans, since a lot of people already see this as a threat. Electronic music must also have seemed less of an art form way back, while now it is one of the most beloved genres of music; maybe AI music could follow the same trajectory and become a genre of its own?

Or will the entire process of creating music, which is supposed to be fun, become more mechanical because of AI and lose its joy? Is there some way we can keep this process fun? We are also not at the stage where we can trust neural networks completely; they are known to make silly mistakes, which can result in a very frustrating relationship in which a human is stuck with an uncommunicative partner and has to do all the spoon-feeding. That does not sound like a happy partnership at all, but it will be interesting to see how things turn out in the future.

References:

1. https://magenta.tensorflow.org/datasets/maestro

2. https://medium.com/@alexissa122/generating-original-classical-music-with-an-lstm-neural-network-and-attention-abf03f9ddcb4

3. https://towardsdatascience.com/illustrated-guide-to-recurrent-neural-networks-79e5eb8049c9

4. https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/

5. https://web.mit.edu/music21/doc/usersGuide/usersGuide_04_stream1.html

6. https://time.com/5774723/ai-music/

7. https://www.hackerearth.com/blog/developers/jazz-music-using-deep-learning
