“And of course there was nothing more repellent than the synthesizer,” said Morrissey, front man for the Smiths, in a 1983 interview, reflecting the arguments that raged at the time over whether the new electronic instruments of the 1970s qualified as “proper” music. In 1982, a branch of the U.K.’s Musicians’ Union even tried to ban the use of synths, on the grounds that they were taking work away from musicians who played stringed instruments.

Those kinds of arguments may have a parallel today, in 2019, with the emergence of music created with artificial intelligence. Some of the questions (Is this “real” music? Can it compete with human-made melodies? If so, will it put those humans out of a job?) are eerily similar.

Can an A.I. create original music? Absolutely. Can it create original music better than a human can? Well, it depends which human you’re comparing the A.I.’s music to, for a start.

Human-created music already spans everything from the sublime to the unlistenable. While an A.I. may not be able to out-Adele Adele (or Aretha Franklin, or Joni Mitchell) with a timeless song and performance, it can compose a compelling melody for a YouTube video, mobile game, or elevator journey faster, cheaper, and almost as well as a human equivalent. In these scenarios, it’s often the “faster” and “cheaper” parts that matter most to whoever’s paying.

The quality of A.I. music is improving by leaps and bounds as the technology becomes more sophisticated. In January 2017, Australian A.I.-music startup Popgun could listen to a human playing piano and respond with a melody that could come next; by July 2018, it could compose and play piano, bass, and drums together as a backing track for a human’s vocals.

Popgun is just one of a number of technology startups exploring the potential of A.I. and what it could mean for humans — professional musicians and those of us who can barely bang a tambourine in time alike. Startups include Jukedeck, Amper Music, Aiva, WaveAI, Melodrive, Amadeus Code, Humtap, HumOn, AI Music, Mubert, Endel, and Boomy, while teams from Google, Sony, IBM, and Facebook are also looking at what A.I. music can do now and what it could do in the future.

Jukedeck’s Make tool. Image: Jukedeck

Quick and cheap backing tracks

The first real commercial use for A.I.-generated music is as a quick and affordable way to knock up a track to use for a YouTube vlog or corporate video without having to license it from a record label, publisher, or production music library.

With Jukedeck’s Make tool, you pick a genre, mood, track duration, and optional “climax” point, then adjust the instruments and tempo and see what comes out. Don’t like it? Tweak the parameters and have another go. This is music to fulfill a purpose, and it’s cheap: Downloading your track and using it on a royalty-free basis costs 99 cents if you’re an individual or a small business, but it’s free if you give credit to Jukedeck. Larger businesses pay $21.99; there’s also an option to buy the copyright for a track outright.
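To make that workflow concrete, here’s a minimal sketch of what such a parameterized track request might look like. The names below are invented for illustration — this is not Jukedeck’s actual API, just the shape of the idea.

```python
# A hypothetical spec for a generated backing track, modeled on the
# options Jukedeck's Make tool exposes. All names here are invented
# for illustration; they are not Jukedeck's real API.
from dataclasses import dataclass, field

@dataclass
class TrackRequest:
    genre: str                          # e.g. "electronic", "folk"
    mood: str                           # e.g. "uplifting", "melancholy"
    duration_sec: int                   # total track length
    climax_at_sec: int | None = None    # optional peak moment
    instruments: list[str] = field(default_factory=list)
    tempo_bpm: int = 120

# A 90-second uplifting track that builds to a peak at the one-minute mark.
request = TrackRequest(
    genre="electronic",
    mood="uplifting",
    duration_sec=90,
    climax_at_sec=60,
    instruments=["piano", "synth pad", "drums"],
    tempo_bpm=110,
)
# A generator service would render this spec to audio; if the result
# doesn't fit the video, change a field and regenerate.
```

The point is that the human specifies intent, not notes: the brief is a handful of fields, and the machine does the composing.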

The amount of video being created for social networks and video services, as well as for corporate use, is exploding, and so is the need for original music for all that content. Production libraries like Epidemic Sound focus on YouTubers, while a startup called Lickd is trying to make label-signed music easier to license for video creators. But A.I. is already showing its potential to compete with humans on cheap backing tracks, as startups like Aiva are demonstrating. Another startup, called Melodrive, is doing a similar thing for gaming, using technology that “composes an infinite stream of original, emotionally variable music in real time” — meaning it adapts to what’s happening on the screen. Eventually, we could see A.I. music composers being added to our social video apps or photo slideshows, giving us instant ambient music.
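As a rough illustration of what “emotionally variable music in real time” could mean in practice, here is a toy mapping from a game’s tension level to musical settings. It sketches the general concept under invented assumptions; it is not Melodrive’s technology.

```python
# A toy illustration of adaptive game music: the game reports a tension
# score, and that score steers tempo, mode, and instrumentation.
# This sketches the general idea, not Melodrive's actual system.

def music_parameters(tension: float) -> dict:
    """Map a 0.0-1.0 tension score from the game to musical settings."""
    tension = max(0.0, min(1.0, tension))
    return {
        "tempo_bpm": int(70 + 90 * tension),            # calm 70 -> frantic 160
        "mode": "minor" if tension > 0.5 else "major",  # darker when stakes rise
        "percussion_intensity": round(tension, 2),      # denser drums under pressure
        "layers": 2 + int(3 * tension),                 # add instruments as action builds
    }

# Re-evaluated every few bars, with crossfades between settings, the
# soundtrack follows the on-screen action:
print(music_parameters(0.1))   # exploring: slow, major, sparse
print(music_parameters(0.9))   # boss fight: fast, minor, dense
```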

Amadeus Code’s app. Image: Amadeus

Creative foils and songwriter tools

The threat to human jobs has long been at the center of the A.I. debate, including within the music industry. All these startups have polished arguments about how A.I. can be a tool to enhance human musicianship, rather than replace it.

Amadeus Code claims to “enhance your songwriting with artificial intelligence” and is squarely aimed at people who are already writing and recording music. Its pitch: “Get unstuck with your songwriting with the power of artificial intelligence and say goodbye to writer’s block for good.” The app creates melodies and even entire songs inspired by existing tracks. A new app, called Alysia, developed by WaveAI, will even compose lyrics alongside its music.

This is A.I. as a creative foil: a generator of ideas that might nudge a songwriter out of their melodic comfort zone into somewhere a bit more interesting. A spark for human creativity, rather than a straight replacement for it—in a way, the equivalent of mucking around with the controls of a synthesizer to find interesting sounds. These startups regularly invoke the precedent set by synthesizers and point out that they turned out to be simply another tool for talented musicians to express themselves.

Alysia claims to “allow everyone to create original songs in minutes.” Apps like HumOn and Humtap are also in this space: you hum a tune into your smartphone, and the app builds a song around it. Is this a threat to professional songwriters? The startups prefer to see this trend as democratizing music making in the same way Instagram opened up photography to hundreds of millions of non-photographers — without making them professional snappers.

Mood and activity music

One of the big trends of the music-streaming era has been the growth of playlists — including those focused on specific moods or activities. Spotify has a growing stable of playlists for working out, chilling out, productivity, eating, traveling, and even sleeping — more than 3 million people follow the flagship “Sleep” playlist.

Some startups are already exploring how A.I. might play a role here, creating continuous mixes of original music to help you drift off, smash your treadmill record, or get your presentation finished. They draw on principles of generative music that stretch back decades, but with a business model tuned to our current streaming habits.

The app Mubert, for Android and iPhone, serves up generative channels based on tags like study, meditation, relaxing, creative, and focus, as well as genres (dance, dub, techno and, er, ravebient). It claims to be big in Southeast Asia already and has ambitions to explore the potential for generative music as YouTube video soundtracks (see the first category) and as a Muzak replacement for public spaces.

Endel is laser-focused on concentration and relaxation — sleep included — with an algorithm that responds to the time of day, weather, and location of the listener. Endel took part in Amazon’s 2018 Alexa accelerator, hinting at a possible future where the command “Alexa, play me something to help me relax” creates a stream of entirely original music, rather than just a playlist of existing tracks.

The future could be not just music composed by an A.I., but music composed just for you — based on data about your musical preferences, your physical habits, even the beat of your heart.
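To sketch how such a context-aware stream might work, here is a toy function that turns time of day, weather, and heart rate into generative-music settings. The inputs, thresholds, and mappings are all invented for illustration — Endel hasn’t published its algorithm.

```python
# An invented sketch of context-aware generative music: listener context
# in, soundscape parameters out. Not Endel's algorithm, just the shape
# of the idea.
from datetime import datetime

def soundscape_settings(now: datetime, is_raining: bool,
                        heart_rate_bpm: int) -> dict:
    late = now.hour >= 22 or now.hour < 6   # wind-down hours
    # Pull the tempo down toward a resting pulse at night, or sit
    # slightly above the listener's heart rate to energize by day.
    tempo = min(heart_rate_bpm, 60) if late else heart_rate_bpm + 10
    return {
        "tempo_bpm": tempo,
        "brightness": 0.2 if late else 0.7,          # darker timbres at night
        "texture": "soft_pads" if is_raining else "plucks",
        "volume": 0.4 if late else 0.6,
    }

# A rainy late evening with a resting heart rate yields a slow, dark mix:
print(soundscape_settings(datetime(2019, 3, 1, 23, 30),
                          is_raining=True, heart_rate_bpm=58))
```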

And bona fide pop hits?

Perhaps the real test is whether an A.I. could write a genuinely good track capable of zooming up the charts and outdoing Drake, Dua Lipa, and Ed Sheeran. Music is subjective, of course; you might chuckle that of course an A.I. could churn out a terrible mid-tempo EDM track in the style of the Chainsmokers, but insist that it could never come up with a sparkly R&B banger worthy of Janelle Monáe. Can an A.I. write a hit? Certainly one like the commercial hits we hate, but never one like the critically acclaimed hits we love.

More interesting might be the role A.I. can play in writing a hit with a professional songwriter, and there have already been some intriguing collaborations. In 2018, Jukedeck worked with a Korean company called Enterarts, with the former’s system creating tunes and chord sequences for the latter’s writers and producers to develop into full songs, which were then performed by Enterarts’ artists. The companies held a concert together to show off the results.

Also in 2018, musician Taryn Southern used an array of A.I. tools, including Amper Music, Aiva, Google, and IBM, to write an album called I Am AI, which similarly used the A.I. systems to create the initial melodies that would be developed into full songs. One of the album’s tracks was created using Aiva’s system.

One of the first prominent songs “created” by A.I. was “Daddy’s Car,” made with Sony’s A.I. music composition tool Flow Machines. It was composed by A.I. in the style of the Beatles, but Sony didn’t hide the fact that the track wasn’t entirely Flow Machines’ work: French composer Benoît Carré arranged and produced the song and wrote the lyrics. It was an example of human-A.I. collaboration.

Such collaborations are even capable of jumping on seasonal bandwagons: WaveAI’s Alysia released a Christmas EP at the end of 2018, including a track featuring singer Chloe Jean.

None of these tracks have troubled the upper reaches of the charts, but they hint at a better way of asking the question. Perhaps we should be asking not whether an A.I. can write better songs than humans, but whether humans can actually write better songs by using A.I. Even Morrissey might have to concede there’s some potential in that.