The next steps in live music recognition
Music Recognition Technology (MRT) is a relatively young field. Still, this market is widely expected to grow rapidly over the next few years.
In mid-2018, this question appeared in an article on Digital Music News:
“One of the biggest challenges confronting the music industry is simple recognition. If a song is played, is it actually recognized, processed, and paid?”
Spoiler: the answer is no. Even so, we might be on the right track.
What exactly do we mean when we talk about MRT?
The clearest definition comes from the PRS for Music website:
“MRT refers to technology which is used to help identify music through sound. Shazam is the best-known consumer facing example of an MRT service where users can record a short sample of music, and software will match it to a database and identify the song”.
Shazam can identify pre-recorded music — broadcast online, in movies, advertising, television shows or radio — from a short sample captured with the device’s microphone (Android, Mac, iOS or Windows). Launched in 2002, Shazam is nowadays one of the world’s most popular apps, used by hundreds of millions of people.
Even though the technology for music recognition has been around for decades, it is only in the past few years that this service has gone mainstream. Alongside Shazam, other popular audio-identification service providers include SoundHound, BMAT, DJ Monitor, ACRCloud, Audible Magic, Gracenote, and Yacast.
Have you ever wondered how MRT works?
Shazam and similar software identify songs through a technique called audio fingerprinting, which is based on a 3D time-frequency representation known as a spectrogram. The smartphone, computer or device’s built-in microphone gathers a brief audio sample (around 5 seconds is typically enough for a song to be recognised), from which the software creates an audio fingerprint. Each provider stores a catalogue of audio fingerprints in its database. The system analyses the captured sound and seeks a match — based on the fingerprint obtained — in a database of millions of songs. If a match is found, information such as the artist, song title, and album is sent back to the user.
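The matching pipeline above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any vendor’s actual algorithm: it uses a naive per-frame DFT, keeps a single dominant frequency peak per frame, and hashes consecutive peak pairs — loosely echoing the “constellation” hashing of spectrogram peaks that real systems build on. The song names and tone generator are hypothetical stand-ins for a real catalogue and real audio.

```python
import math
import cmath

def spectral_peaks(samples, frame_size=64):
    """Split audio into frames and keep the dominant frequency bin of each
    frame via a naive DFT (real systems use FFT spectrograms and keep
    many peaks per frame)."""
    peaks = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        mags = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                        for n in range(frame_size)))
                for k in range(frame_size // 2)]
        peaks.append(max(range(len(mags)), key=mags.__getitem__))
    return peaks

def fingerprint(samples):
    """Toy fingerprint: the set of consecutive peak pairs."""
    peaks = spectral_peaks(samples)
    return set(zip(peaks, peaks[1:]))

def tone(bin_k, frame_size=64, frames=4):
    # a sinusoid whose energy falls in DFT bin `bin_k`
    return [math.sin(2 * math.pi * bin_k * i / frame_size)
            for i in range(frame_size * frames)]

# hypothetical catalogue of pre-computed fingerprints
db = {"Song A": fingerprint(tone(3) + tone(7) + tone(5)),
      "Song B": fingerprint(tone(9) + tone(2) + tone(11))}

query = tone(7) + tone(5)  # a short sample captured by the microphone
scores = {name: len(fp & fingerprint(query)) for name, fp in db.items()}
best = max(scores, key=scores.get)
print(best)  # Song A
```

The query’s fingerprint overlaps only with “Song A”, so that is returned as the match — the same lookup-by-overlap idea that, at scale, runs against databases of millions of songs.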
Apart from applications like Shazam, music recognition is widely used in other fields:
Copyrighted Content Identification
By using advanced third-party audio fingerprinting technology, user-generated content platforms such as YouTube and SoundCloud can easily identify content with copyright issues, as well as remove duplicated content.
Radio and TV Monitoring
Fingerprinting technology is also used to monitor music and commercials on radio airplay or TV. Here the goals vary: enriching the listener experience by identifying what is playing on air, or tracking the popularity and trends of artists and songs to generate charts and data analytics.
Music reporting for Performance Rights Organisations
One of the biggest challenges musicians face today is that music reporting does not correspond to reality. Because intellectual property is hard to track across online and offline platforms, a huge share of performed music goes undeclared to PROs.
Hundreds of millions of dollars belonging to artists are collected by PROs globally, but they cannot be matched and distributed to artists and composers for a variety of reasons, such as missing or incorrect metadata and incompatible database systems.
Many radio stations and music venues around the world still report playlists in old-school formats instead of real-time digital systems. The revenues collected are then divided via an archaic estimation system instead of being distributed correctly, so performers and rights holders cannot be paid accurately. As of 2018, unclaimed music royalties were estimated at more than $2 billion.
So, are fingerprinting-based services the perfect solution? They certainly help, but not yet. Shazam-like programs have limitations that affect their ability to identify a song and return the related information to the user. These limitations stem from both external and internal factors. For example:
- if the background noise level is high, an accurate acoustic fingerprint cannot be captured;
- if the song is not present in the provider’s database, it cannot be identified;
- audio fingerprinting technology cannot recognise songs played during live performances, or cover versions.
This last point is especially important. It is estimated that only 30–40% of songs played live are recognised.
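The live/cover gap follows directly from how exact fingerprint matching works: a performance in a different key (or at a different tempo) moves the spectrogram peaks, so its hashes share nothing with the studio recording’s. A toy illustration, with made-up peak values standing in for real spectrogram landmarks:

```python
def pair_hashes(peaks):
    # toy landmark hashes: consecutive pairs of dominant-frequency bins
    return set(zip(peaks, peaks[1:]))

# per-frame dominant bins for a studio recording (illustrative values)
original = [3, 3, 7, 7, 5, 5]

# the same melody performed live, slightly higher in pitch:
# every peak shifts to a neighbouring frequency bin
cover = [p + 1 for p in original]

print(pair_hashes(original) & pair_hashes(cover))  # set() -- no match at all
```

Even though the two performances are musically identical, their fingerprints are completely disjoint — which is why recognising live and cover versions requires fundamentally different techniques than exact fingerprint lookup.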
Today, no application can recognise live music or cover versions — but that might soon change. Want to know how?
Here’s the answer: https://www.yukilive.com/
Yuki is the younger brother of Flits and has been created by the same international team of tech and music lovers (yes, it’s us!). We have developed a game-changing live and cover music identification technology with the aim of reshaping the collections market and making life easier for every actor involved. Yuki’s technology is supported by Xavier Serra, a researcher in the field of Sound and Music Computing, and Barcelona’s Pompeu Fabra University (UPF). Xavier is the founder and director of the Music Technology Group at UPF.
Why do we do it?
- To help creators get the recognition they deserve.
- To help PROs distribute royalties correctly.
We promise performance royalties will never be the same again.