Artificial Intelligence is the New Bach

Miriam A. Feldman
5 min read · Jul 28, 2017

Can the trained human ear tell the difference between the work of a great composer and the work of a machine? In 1997, David Cope of the University of California, Santa Cruz asked students at the University of Oregon to distinguish among three pieces: a composition by Johann Sebastian Bach, one by Steve Larson (a fellow music theory professor), and one by EMI (Experiments in Musical Intelligence), an artificial-intelligence-based computer program that had been given Bach's body of work as input. One might expect that a group of trained musicologists would easily tell the master composer from the machine. When tallied up, though, the results showed something entirely different: students thought the AI program was the real Bach, that the real Bach was the professor, and that the professor's piece had been composed by the AI system. This finding naturally raises the question: what if Bach could, today, write a new piece based on his best work? Artificial intelligence seems able to do just that.

Within music, as the field of artificial intelligence continues to advance rapidly, we can extend the artistic legacies of our society's greatest composers and musicians by using their existing bodies of work as input for sophisticated musical AI. That isn't all, though: there are now AI programs that create entirely original compositions. Artificial intelligence in the music industry has come a long way since the late nineties, when AI-based musical output was limited to classical music written within strict parameters. Recent developments have pushed AI composition into pop music and even original scoring.

Currently, the intersection of AI and music is dominated by major corporations with the brainpower, computing resources, and time to invest in these creative projects. In the realm of leveraging existing artistic inspiration, Sony Computer Science Laboratory in Paris is leading the way with a set of recent compositions built on its Flow Machines technology, which creates modern tracks based on music from artists such as The Beatles and Duke Ellington. Flow Machines uses Markov-chain models to identify elements such as chord progressions and rhythms within a pre-existing database of music (in this case the Beatles' catalogue, though the dataset could be much larger, encompassing an entire genre or a specific time period) with the goal of eventual imitation. Benoît Carré writes the lyrics for these songs, complemented by AI-generated musical backing. Their Beatles-inspired song "Daddy's Car" (Sony CSL, 2016) was generated from an analysis of 45 Beatles songs and has amassed over a million views on YouTube.
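Sony has not published the details of that pipeline, but the core idea of a Markov chain over musical features can be sketched in a few lines. The chord progressions below are hypothetical toy data standing in for an analyzed catalogue; a real system would model far richer features (melody, rhythm, voicings) and typically higher-order transitions:

```python
import random
from collections import defaultdict

# Toy corpus of chord progressions (hypothetical stand-ins for an analyzed
# catalogue such as the Beatles songs mentioned above).
corpus = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

# Count first-order transitions: which chords tend to follow which.
transitions = defaultdict(list)
for progression in corpus:
    for current, nxt in zip(progression, progression[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Sample a new progression by walking the transition table."""
    chords = [start]
    for _ in range(length - 1):
        options = transitions.get(chords[-1])
        if not options:          # dead end: fall back to the opening chord
            options = [start]
        chords.append(random.choice(options))
    return chords

print(generate())  # e.g. ['C', 'Am', 'F', 'G', 'C', 'F', 'G', 'C']
```

Because the chain only ever reproduces transitions it has seen, its output stays "in the style of" the source material, which is exactly the imitation goal described above.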

Sony is far from the only player in this space. According to James Healy of Universal Music Group, the work of atmospheric and soundtrack designers is the most likely to be overtaken by AI, while genres that rely heavily on lyrical complexity and storytelling will likely retain human influence. That soundtrack-authoring prowess of machines is apparent in AIVA, an AI that composes classical music. On first listen, its completely AI-generated compositions are stunning and full of feeling. The longest piece on AIVA's debut album Genesis, however, clocks in at only three minutes and twenty-four seconds. For AI to make a major impact in scoring or classical music, its output will need the range and thematic development to support longer, more complex works.

Unlike a human composer's score, the output of these AI systems is audio rather than sheet music. While much music today is produced electronically with MIDI instruments and similar technology, most classical music and film scoring is still performed by a live orchestra. This lends AI-generated music a slightly artificial feel that may not be avoidable until the technology begins to output notation that human musicians can interpret.
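None of the systems mentioned here documents its export format in this article, but the gap is easy to picture: a generator that emits symbolic data, such as a standard MIDI file, can be handed to notation software and then to live players. Here is a minimal, hypothetical sketch using the third-party Python library mido (an assumption on my part, not something any of these systems is confirmed to use):

```python
import mido  # third-party MIDI library; assumed here, not named in the article

# A hypothetical "generated" melody: (MIDI pitch, duration in ticks) pairs.
melody = [(60, 480), (62, 480), (64, 960), (67, 480), (65, 480), (64, 960)]

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

for pitch, duration in melody:
    track.append(mido.Message("note_on", note=pitch, velocity=64, time=0))
    track.append(mido.Message("note_off", note=pitch, velocity=64, time=duration))

# Notation editors (MuseScore, Sibelius, etc.) can import this file and
# display it as sheet music a live player could read.
mid.save("generated_melody.mid")
```

The point is not the file format itself but the handoff it enables: symbolic output keeps a human performer, and a human interpretation, in the chain.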

One feature that remains constant across the AI composition landscape is the need for some level of human editing and intervention. In Sony's case, for instance, creator Benoît Carré "might keep an eight-bar phrase he likes and reject the rest, running the program again until he has a melody and a chord sequence that he's happy with." It's clear, then, that AI cannot yet entirely predict what is pleasing to the ear. Does this shortcoming reflect a lack of data, or does truly complex and meaningful music require some degree of human curation and creative direction to emerge from AI?
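The article quotes the workflow rather than any code, but the keep-or-reject loop it describes is simple to sketch. Everything below is hypothetical illustration: the generator is a stand-in, and the human judgment is reduced to a yes/no prompt:

```python
import random

def generate_phrase():
    # Stand-in for the AI generator: an eight-chord phrase from a toy vocabulary.
    return [random.choice(["C", "Dm", "Em", "F", "G", "Am"]) for _ in range(8)]

def curate(max_kept=4):
    """Keep re-running the generator, collecting only phrases the human approves."""
    kept = []
    while len(kept) < max_kept:
        phrase = generate_phrase()
        answer = input(f"Keep {' '.join(phrase)}? [y/n] ")
        if answer.strip().lower() == "y":
            kept.append(phrase)
    return kept

if __name__ == "__main__":
    print(curate())
```

The interesting part is what the loop leaves out: the machine proposes, but the sense of what is worth keeping still lives entirely in the person running it.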

If you want to try your hand at AI composition, there are YouTube tutorials boasting, for instance, that they can "get you up and running with your first AI composer in just 10 lines of Python!" And, to some degree, they work, and learning the fundamentals of these systems is worthwhile. But in terms of easily creating good music, the time has not yet come. Google's A.I. Duet, a product of its Magenta project, has a similar limitation. The online application lets the user play piano back and forth with the computer, which generates its replies using melodies and rhythms learned by Google's Magenta neural net from examples of existing music. The result is a beautiful online tool that is remarkably fun and easy to interact with, but not one that generates music intended to be lasting art.

Yotam Mann, Google A.I. Duet
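As for those quick-start tutorials: they typically amount to something like the toy sketch below, a biased random walk over a scale. This is a hypothetical composite, not taken from any specific tutorial and not related to Magenta:

```python
import random

# C-major scale as MIDI note numbers; the "model" is just a preference
# for small melodic steps.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def compose(length=16):
    idx = SCALE.index(60)                                # start on middle C
    notes = [SCALE[idx]]
    for _ in range(length - 1):
        step = random.choice([-2, -1, -1, 0, 1, 1, 2])   # prefer stepwise motion
        idx = max(0, min(len(SCALE) - 1, idx + step))
        notes.append(SCALE[idx])
    return notes

print(compose())
```

The output sounds vaguely musical, which is all such tutorials promise; the distance between this and a memorable melody is the article's point.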

If the experts surveyed by the MIT Technology Review are to be believed, we are not far from artificial intelligence matching human capabilities in music composition (Emerging Technology, 2017). The same survey predicts a New York Times bestseller written by AI by 2049, a task that resembles composing hit lyrics in some respects.

MIT Technology Review, 2017

As artificial intelligence continues to evolve, the interplay of AI and music will directly affect both our definition of music and the way musicians' careers play out. This has already reached public attention through Spotify's use of "fake" artists. The rise of art engineered to fit consumers' tastes may shrink the market share available to career musicians, or even shift our definition of music away from an expressive art form and toward a targeted consumer good ("buy this song, guaranteed to lift your mood or sound like your favorite retired artist!").

Until this form of directed AI technology becomes widespread in music, the way we consume music, and the artists who produce it, will likely remain the same; the familiar production processes, media coverage, and metrics of success stand for now. Because developments in this field are still so concentrated, we have yet to see the long-term impact on the music industry as a whole. Industry leaders are already raising questions like these at events such as the British Phonographic Industry's "Music's Smart Future" session (2016). Listeners may push back, preferring the connection that comes from knowing the sentiment expressed in a song came from the mind of another person. Is there some distinctly human aspect of music that creates meaningful art and connection? Only time will tell.


Miriam A. Feldman

Oxford University / Alexander von Humboldt Center for Internet and Society / Boston, Mass.