You’ve Favorited an AI-Generated Track and You Don’t Know It Yet

Published in Platform & Stream · May 8, 2024

From text-to-music to text-to-song

Generative AI, which stirs fears and opportunities in equal measure, made its consumer breakthrough last year with image generation.

This year is seeing the emergence of equally accessible, affordable, and professional-grade solutions for music tracks, and chances are high that the generative video frontier will be crossed next year.

As far as text-to-audio is concerned, the first generative music AI models made available to the “public” (meaning developers outside the original research teams) actually surfaced last year, with Meta’s MusicGen at the forefront.

Running one of these models on a local machine demands a certain level of technical expertise.
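
To give a sense of what the local route involves, here is a minimal sketch using Meta’s open-source audiocraft library, which ships the MusicGen reference implementation; the checkpoint name, prompt, and clip length are illustrative choices, and a GPU is strongly recommended.

```python
# Minimal sketch: generating a short clip locally with Meta's MusicGen
# via the open-source audiocraft library (pip install audiocraft).
# The checkpoint, prompt, and duration below are illustrative choices.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # small checkpoint for modest hardware
model.set_generation_params(duration=20)                    # length of the clip, in seconds

descriptions = ["lo-fi hip hop beat with warm Rhodes chords and vinyl crackle"]
wav = model.generate(descriptions)  # tensor of shape (batch, channels, samples)

# Write each generated clip to disk as a loudness-normalized WAV file
for idx, one_wav in enumerate(wav):
    audio_write(f"generated_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```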

However, several browser-based alternatives now handle the heavy computation on your behalf.

MusicLM from Google was one of the first, initially accessible by invitation only and limited to short, low-quality outputs (20-second clips in 24 kHz mono).

This year brought the final step: professional-quality tracks, around three minutes long, complete with vocals.

We’re speaking of tracks that easily made their way to streaming audiences, registering plays while the distributor, the platform, and the listener had no technical means of knowing they were fully generated.

Music-generation AI models are more insidious than AI cover applications (a cloned character’s or artist’s voice applied to lyrics they never sang), and they pose far more intricate legal and financial questions.

They produce derivative works from the original recordings used for training.

This makes it impossible to identify the source works, attribute them, and provide splits.

⭐️ Music AI models trained on copyrighted works without licensing

I have two pieces of bad news.

1. This is the common denominator of the most popular full-track generation platforms (the likes of Suno, Udio, Limewire, …).

2. They’re aiming to generate millions of dollars in revenue by targeting basically anyone who wants to create full songs from a text prompt for a small subscription fee.

So they’re simply doubling down. Through their accessible offer and straightforward user journey, they have already hooked thousands of individuals and will soon reach the millions they’re aiming for, presumably without paying a cent for the copyrighted works they’ve built their models on.

By allowing anyone to create songs of impressive musical quality and monetize them through streaming royalties, these platforms let individuals easily build real businesses around them. The bottom line is that they arguably dilute the royalty pool that should rightfully go to real artists.

⭐️ Music AI models trained on copyrighted works with licensing

Interestingly, Ed Newton-Rex, a former Stability AI exec (whose Stable Audio 2.0 was the first model to publicly disclose the recording sources used for training), is now at the forefront of the battle to get fair and sustainable music AI models certified.

His nonprofit organization is called Fairly Trained, and its certification ensures that training was carried out on audio material with consent (fully owned, provided to the model developer for this use, available under an open license, or in the public domain).

So far, among the platforms designed to create songs meant to be monetized, Boomy and Soundful are certified.

Yet, with no compensation model in action (Fairly Trained leaves this up to each and every rightsholder), these models are still diluting the royalty pool, to a certain extent…

⭐️ The music industry stakeholders’ responsibility

The entire digital music supply chain, from production to distribution and consumption, should be concerned.

In a statement to Variety, Andrea Gleeson of TuneCore writes: “In order to effectively prevent bad actors from diluting the royalty pool for real artists with real fans, all companies need to be a part of the solution.”

Indeed, DIY distributors have to play their part: they let any individual submit tracks, yet their quality-control processes are not equipped to identify this type of content.

Speaking of distributors, the majors and independent distribution partners should pay attention.

Given the quality of the outputs generated by music AI platforms and the lack of identification technology, it is not out of the question that a fake artist could land a deal with these established structures.

Streaming platforms are viewed with skepticism for their unclear statements and face heavy criticism for their lack of action on this phenomenon.

Spotify, which currently holds more than a 30% share of the global music streaming market, recently crossed a line by going as far as promoting tracks deemed AI-generated on its platform.

But, perhaps first and foremost, the responsibility lies with the generative music AI companies themselves, and it is in their own interest if they want to build a sustainable space for their business.

Otherwise, it will go down as one of the boldest one-shot capitalistic moves.

⭐️ Identification will pave the way

Whether it’s taking tracks down, keeping them out of search results, labeling them as such for the user, adopting a specific remuneration model, or discreetly flagging them along the supply chain before taking any action, data is needed to make informed decisions.

First, we need to shine a light into the shadows.

Thus, the ability to identify a track fully generated by one of these text-to-song models could serve as the go-to resource the industry needs to gather data and let the volumes speak for themselves.

This is precisely what we’re offering at Ircam Amplify with our AI-Generated Detector, a first-of-its-kind independent tool capable of delivering an accurate verdict on whether an audio track is real or generated, covering outputs from the most popular prompt-based generative music AI platforms.
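
To make the “flag before acting” idea concrete, here is a purely hypothetical sketch of how a distributor’s ingestion pipeline might screen uploads against a detection service; the endpoint URL, request fields, response schema, and threshold are invented for illustration and do not describe Ircam Amplify’s actual API.

```python
# Purely hypothetical sketch: screening uploaded tracks against a generic
# AI-detection service before they enter the distribution pipeline.
# The endpoint URL, request fields, and response schema are invented for
# illustration and do NOT describe Ircam Amplify's actual API.
import requests

DETECTOR_URL = "https://detector.example.com/v1/analyze"  # placeholder endpoint
FLAG_THRESHOLD = 0.9  # confidence above which a track is flagged for review

def screen_track(path: str) -> dict:
    """Send an audio file to the detection service and return a decision."""
    with open(path, "rb") as audio_file:
        response = requests.post(DETECTOR_URL, files={"audio": audio_file}, timeout=60)
    response.raise_for_status()
    result = response.json()  # e.g. {"ai_generated_probability": 0.97}

    probability = result["ai_generated_probability"]
    return {
        "path": path,
        "ai_generated_probability": probability,
        "flagged": probability >= FLAG_THRESHOLD,  # route to human review, don't auto-reject
    }

if __name__ == "__main__":
    print(screen_track("upload/new_release.wav"))
```

A threshold-plus-review design like this keeps the decision (takedown, labeling, specific remuneration) separate from the detection step itself, which is exactly the point: the data comes first, the policy follows.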

⭐️ The future of generative AI for music

We are collectively responsible for fostering a smart and sustainable AI framework for music creation. It’s imperative to move beyond the era of superficial rewards and shallow addiction.

AI should enhance creation, not mimic it.

At Ircam Amplify, we’re pursuing the goal of developing generative AI tools that serve as companions for music producers and artists, not replacements for them.

We are calling on every company and every individual involved to join us on this journey.

✏️ By Alexandre Louiset, Product Marketing Manager, Ircam Amplify
