How Music Creators can Survive the AI Music Generation Revolution and Thrive

Valerio Velardo
The Sound of AI
May 31, 2019

If you follow the music tech industry closely, you’ll surely have noted the appearance of numerous generative music systems on the market in recent years. But what’s a generative music system precisely? It’s a piece of intelligent software that automates, to various degrees, the composition and production of music.

A long journey

Although generative music systems have only recently become a press-worthy phenomenon, they’ve been around for longer than most people know. The first attempt (that I’m aware of) to build a system that autonomously generates music comes from the post-war era:

It’s 1957. Two professors of composition at the University of Illinois at Urbana-Champaign program the ILLIAC I computer to generate music for a string quartet in a contemporary style: the piece became known as the Illiac Suite. The result, as you can hear in the video below, is already quite interesting (if you’re fluent in contemporary art music).

Since the Illiac Suite, thousands of generative systems have been developed in academia. Around 2010, new players from industry became interested in generative music: both startups like Melodrive, Jukedeck and Amper Music, and large tech companies experimenting with the technology. For example, at Melodrive we’ve built a music engine that generates music in realtime for video games; the music changes its emotional state dynamically depending on the visuals and the narrative. Among the big players, Google has launched Magenta, a suite of generative music tools that musicians can use to co-create with the AI.

In the long run, these creative AI systems aim to generate music that’s as good as what human composers and songwriters write. If generative music delivers on this promise, the technology will revolutionise how music is made and consumed. Of all the music-related roles in the industry, music makers should follow these advancements most closely and be prepared to leverage them.

The wrong mindset

While running a company that’s at the forefront of the AI music revolution, I’ve had the opportunity to speak with many musicians and pick their brains on recent AI music advancements. Many are thrilled by the opportunities that AI will present for music creation.

A minority, however, feels threatened. When I talk to musicians who are skeptical about AI music, two arguments often pop up. First, they’re afraid that AI is going to replace them. They imagine a world where most music is composed by heartless machines, and human composers are marginalised, and, in the worst case, completely useless. Second, they’re positive that having an almost infinite supply of songs provided by tireless AI composers will devalue music.

These fears push generative music skeptics into what I call the wrong mindset. From my experience, there are three main positions these musicians take. I’ve organised them in a pyramid, from least to most resentful towards generative music.

Some music creators simply aren’t curious enough to look at the advancements brought about by their nerd cousins, the AI music guys. It’s not that they’re openly hostile to AI music systems; they just don’t care. Another group of musicians is aware of what’s going on in generative music but decides to ignore it. A final segment of skeptics is openly hostile towards generative music. This group is convinced that music is a human-only endeavour and that machines pose nothing but a threat to music making.

The ‘wrong mindset’ pyramid.

In my opinion, all these positions are wrong. The fundamental misunderstanding is the belief that AI is here to replace musicians. AI is just another (high-tech) tool that musicians can use to enhance their music-making process. The kind of transformation that the adoption of AI will bring to music has already happened multiple times, in different forms, throughout the history of music. Consider the introduction of music notation or the invention of the modern orchestra. These technologies forever changed how we create, think about and consume music.

An example closer to our modern experience comes from the digital revolution, which brought us virtual instruments and digital audio workstations like Pro Tools. Imagine travelling back to the 1980s. Up until now, musicians have always created music with analogue tools. Towards the end of the decade, digital technology that streamlines music creation is launched on the market. Music creators can now use digital instruments and software to easily sequence and record music. This revolution has democratised music creation, expanding the creative options available to music makers.

In the 1980s digital technology revolutionised the way we create music.

The adoption of generative music systems will have a similar transformational effect. What AI music systems won’t do is replace human musicians. Music fans aren’t interested in just mindlessly consuming the sonic results created by their favourite musicians. They want to connect with the artists and know, for example, what pushed them to write the music they wrote. Consuming music is an all-encompassing experience that goes beyond listening to a series of pieces on a playlist. Consuming music is an intimate human experience. It can’t be replicated by machines, no matter how sophisticated they may be at composing music.

New opportunities for music creators

Whatever position musicians take on generative music, this technology is going to advance over the next few years. We’re still far from machines that can compose songs on par with humans, but research and systems are improving fast, and there’s no way to stop this. For this reason, musicians should prepare to embrace the incoming AI music revolution. To do so, they should get acquainted with the current possibilities and limitations of the technology. Most importantly, they should understand how generative music can benefit them.

Generative music can enhance productivity. I have a dear friend, Billy Mello, who’s an amazing composer and sound designer. Lately, he has often used generative music systems to streamline his production-music work. He uses systems like Orb Composer or Jukedeck to quickly spawn melodic and harmonic ideas, which are often the starting point for writing music for ads. Once he finds an acceptable musical passage, he tweaks it to match the needs of the production he’s working on. He told me that, with this process, he can write production music faster than ever, and his clients are happy with his work. Obviously, he’s well aware that the music output by these systems is qualitatively distant from the heights of a Mahler symphony, but it still serves as a handy entry point for his productions. As pragmatic composers know all too well, not all music needs to be a ‘masterpiece’. In the future, music creators will be able to consistently rely on AI music systems to speed up their music production.

Billy Mello giving one of his inspirational talks on music for VR.

Generative music can also expand creative options. During the 1980s, David Cope, a former professor of composition at the University of California, Santa Cruz, built EMI (Experiments in Musical Intelligence). EMI is a generative music system that analyses the compositions of an author and generates new music in a similar style. The story goes that David had composer’s block, so he built EMI, fed the machine his compositions, and received musical suggestions in his own style.

A chorale generated by EMI in the style of J.S. Bach:

Just like Cope, music creators can use generative music systems as a sparring partner. They can bounce musical ideas off them and get inspired by what they get back from the machine. The great thing about AI is that it could give human musicians ideas that are outside their musical bubbles.
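To make the style-imitation idea concrete, here’s a minimal sketch in Python. This is not Cope’s actual algorithm (EMI is far more sophisticated): it’s a first-order Markov chain that ‘analyses’ a melody by counting which pitch tends to follow which, then ‘composes’ by random-walking through those learned transitions.

```python
import random
from collections import defaultdict

def learn_transitions(melody):
    """Count which pitch follows each pitch in the source melody."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Random-walk through the learned transitions to make a new melody."""
    rng = random.Random(seed)
    note = start
    output = [note]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:  # dead end: restart from the opening note
            note = start
            choices = transitions[note]
        note = rng.choice(choices)
        output.append(note)
    return output

# A tiny 'corpus': a simple C major melody, as MIDI pitch numbers.
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
model = learn_transitions(melody)
new_melody = generate(model, start=60, length=12, seed=42)
print(new_melody)
```

Even this toy model produces output that is recognisably ‘in the style of’ its corpus while differing from it, which is the essence of the sparring-partner workflow described above.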

Another notable example of human-AI collaboration comes from the songwriter Taryn Southern. She co-composed the album I AM AI along with a number of generative music systems such as Magenta and Amper Music.

Break Free — Song composed by Taryn Southern with AI.

AI can also bring about new musical forms. For example, at Melodrive we introduced the idea of deep adaptive music (DAM). DAM is ideal for soundtracking interactive content like video games: the music is generated by an AI in realtime while a player interacts with the game. The advantage of this new musical form is that the music can change dynamically, second by second, to match the emotional setting of the visuals and the narrative.

In DAM, the composer collaborates with an AI by creating what we call a musical script. In the musical script, the composer specifies important musical elements: the themes attached, for example, to different characters and locations, the overall instrumentation, and the way the music should change based on in-game events. While a player is busy fighting armies of ogres in a role-playing game, the AI music engine uses the inputs from the game to modulate and realise the musical script in realtime. In this sense, the composer is no longer the person responsible for writing the notes of the score; the role shifts towards that of a director, who sets up the fundamental musical elements, which are then enacted by the AI engine at runtime. If you will, the AI can be thought of as an infinite version of the composer, generating and performing the music in realtime.
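As an illustration only — the names and structure below are hypothetical, not Melodrive’s actual API — a musical script can be thought of as a declarative mapping from game state to musical parameters, which a runtime engine consults whenever the game state changes:

```python
# Hypothetical sketch of a 'musical script': the composer declares
# themes and event rules; a runtime engine derives musical parameters
# from them whenever the game state changes.
from dataclasses import dataclass

@dataclass
class MusicalState:
    theme: str        # which musical theme to play
    tempo_bpm: int    # how fast
    intensity: float  # 0.0 (calm) to 1.0 (full battle)

# The composer's declarations: a theme per character/location,
# and rules mapping in-game events to musical changes.
SCRIPT = {
    "themes": {"village": "pastoral_theme", "hero": "hero_theme",
               "ogre_army": "battle_theme"},
    "rules": {
        "combat_started": {"tempo_bpm": 150, "intensity": 0.9},
        "combat_ended": {"tempo_bpm": 90, "intensity": 0.2},
    },
}

def update_music(location, event=None):
    """What the engine would do on each game-state change."""
    state = MusicalState(theme=SCRIPT["themes"][location],
                         tempo_bpm=100, intensity=0.3)
    if event in SCRIPT["rules"]:
        rule = SCRIPT["rules"][event]
        state.tempo_bpm = rule["tempo_bpm"]
        state.intensity = rule["intensity"]
    return state

print(update_music("ogre_army", "combat_started"))
```

The point of the sketch is the division of labour: the composer authors the dictionary (the ‘script’), while the engine, not the composer, decides moment by moment which entry applies.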

One of the pioneers of this kind of musical expression is Brian Eno. The English musician has published a number of generative music apps in which users interact with the screen to trigger dynamic changes in a musical result pre-programmed by the composer.

Brian Eno explaining the work he’s done for Bloom: 10 Worlds.

The future sounds bright

We’ve now got a basic understanding of how musicians could benefit from AI. All that’s left is to ask the most pressing question: how should music creators prepare for the AI music revolution? There’s no simple answer. From my experience, however, a few actions will definitely help:

  • Musicians shouldn’t be afraid of AI. Technology is always neutral. It’s how we use it that determines its ethical implications.
  • Musicians should always have the upper hand over AI. In order for this to happen, they have to be actively involved in the research and development of generative music systems.
  • Musicians should work side by side with AI music geeks. This will help steer the technology in the right direction. Being hostile towards generative music will be counter-productive; if anything, it’ll open up space for misuse of the technology, because engineers won’t have an open channel of discussion with music creators and therefore won’t be able to figure out their needs and concerns.
  • Musicians should always be curious and get their hands dirty with generative music systems. They should play with systems like Magenta and be on the lookout for what’s coming next.
  • Musicians should become fluent in AI. Everyone should have a basic understanding of what AI is and how it can be used in music. Even the most ‘hardcore’ musicians should push themselves to play around with code, and experiment with machine learning.

The future of music is a fascinatingly bright one, in which musicians and AI create together. The music creators who are open to AI will benefit most from the incoming revolution. They must shine a light on what there is to learn and educate themselves about the opportunities and limitations of this technology. That’s how they’ll equip themselves to carry the torch for future music creation.
