The world is ready for music made by computers.

Laurent Martin
3 min read · Aug 10, 2017


The experts are wrong. If R. Kelly still has fans, then listeners will accept music made by a computer.

Music made by computers is cheap, sterile, robotic, and utterly devoid of passion. At least, that's what you would think after listening to some of music tech's most innovative thinkers.

I’ve yet to meet a musician who isn’t excited about the future of music technology

The narratives that music lovers will never warm to music composed by computers and that music is a zero-sum game pitting humans against machines are a threat to innovation. Those in the music tech industry should acknowledge that we may be internalizing this made-up battle and holding ourselves back.

I’m more apt to believe musical luminary Duke Ellington, who decreed, “If it sounds good, it is good.”

David Cope is the godfather of computer composition. He told NPR as recently as last May that listeners “don’t want to see” software in the credits of their music. (Full disclosure: I got my bachelor’s degree from UC Santa Cruz, where Cope is on faculty.)

Cope has a great mind and is a fine artist. He is also the reason I don’t think people have a problem with AI-composed music.

When you start out as I did, a 17-year-old habituated to hearing talented musicians perform music composed by an algorithm, there is no outrage; it’s just music. My colleagues performed Cope’s music and created their own composition algorithms. The music was either good or not.
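For readers who have never seen one, a "composition algorithm" can be as simple as a few lines of code. Here is a minimal, purely illustrative sketch (not Cope's method, which is far more sophisticated): a first-order Markov chain over scale degrees, the kind of toy generator a music student might write in an afternoon.

```python
import random

# Toy transition table: from each note, the notes a melody may move to next.
# (Hypothetical values chosen for illustration, not drawn from any corpus.)
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "F"],
    "E": ["D", "F", "G"],
    "F": ["E", "G", "A"],
    "G": ["C", "E", "A"],
    "A": ["F", "G", "C"],
}

def compose(start="C", length=8, seed=None):
    """Random-walk the transition table to generate a short melody."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(" ".join(compose(seed=42)))
```

Whether the output is good music is, of course, exactly the question listeners answer for themselves, which is the point.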

In fact, I’ve yet to hear machine learning create better music than even his decades-old algorithms — but that’s another post altogether.

If this were the view from your practice room, you’d teach your computer to write your music too.

Music technology writer Bas Grasmayer recently wrote that the popular acceptance of music made by AI will only be possible with human-led narratives. The Guardian recently asked if computer music will enslave us.

It’s as if they are expecting the anti-rock ’n’ roll hysteria of the 1950s. Click tracks and Auto-Tune took over the recording studio and not a tear was shed. Vinyl records disappeared from DJ sets and there was no rioting in Ibiza.

Cope famously renamed his composition software Emily Howell, changing the liner notes from EMI (Experiments in Musical Intelligence) to something a bit more human. It helped convince classical music critics, but it’s hardly necessary for a generation raised on Deadmau5 and Girl Talk.

The music tech field is getting crowded with companies pushing the capabilities of computer composition and creating their own narratives for the creative output of their products. My company, Aitokaiku, along with others like Jukedeck and Groov.AI, is bringing this technology into the mainstream, regardless of the human narrative.

Humanizing computer-created music like Ford’s DJ YuMi comes across as inauthentic at best

Those who argue we need that human component should look at some of the despicable people who have made great music. If anyone stopped listening to Phil Spector albums, let me know in the comments.

The music tech industry needs to market computer-created music the same way traditional music labels push the names, faces, and stories of their artists into every part of our consciousness with their myriad cross-marketing campaigns. We need to normalize it.

Until the investment in computer music matches what is spent on human artists, we are still comparing apples and oranges.

To his credit, Cope at least thinks the mass market will accept computer-made music in 20 or 50 years.

I think the time is now.

Laurent Martin is the CMO and Co-Founder of Aitokaiku — maker of reactive music products that use sensors to create personalized music. He is also a professional opera singer. Download Aitokaiku’s augmented music mobile apps on the iTunes App Store and Google Play Store.
