Deep Learning and the Future of Music Listening Experiences
In recent years, our musical horizons have expanded dramatically with unprecedented access to a panoply of songs via the Internet. But how do we wade through this melodic overload to get a truly personalised listening experience?
The secret seems to lie in the rapid advances being made in deep learning.
What’s this deep learning all about?
Deep learning is a sub-branch of Artificial Intelligence that gets machines to extract patterns and infer rules from data. By using neural networks, which loosely simulate the billions of neurons in our brains, computers can “learn” to understand German or to recognise a cat (as opposed to a dog or rabbit).
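To make that idea of “learning” concrete, here is a minimal sketch: a single artificial neuron trained by gradient descent to classify a toy pattern (the logical OR function). Real deep learning stacks millions of such neurons into many layers, but the core mechanism — nudging weights to reduce prediction error — is the same. The data, learning rate and epoch count here are illustrative choices, not anything from a production system.

```python
import math

def sigmoid(z):
    """Squash a number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: inputs and labels for the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # the neuron's weights
b = 0.0         # its bias
lr = 1.0        # learning rate (illustrative value)

# Training loop: repeatedly nudge the weights to reduce the error.
for _ in range(1000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # current prediction
        err = p - y                             # gradient of the loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# After training, the neuron reproduces the OR pattern.
predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # → [0, 1, 1, 1]
```

The same loop, scaled up to many layers and vastly more data, is what lets systems learn to transcribe speech or label images.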
With more powerful computers and huge amounts of data, companies like Google and Microsoft can harness its potential on a bigger and more viable scale. Although the focus is currently on speech and images — for example, word errors were reduced by 25% when deep-learning-based speech recognition was added to Android smartphones — this approach can be, and is being, applied to other fields… like music.
But let’s not get ahead of ourselves. Before we look at deep learning in music, let’s take a look at the music industry as it stands.
The much talked about decline in physical music sales has made way for the rise of the internet as the place to “get” music — whether legally or illegally, by paying a subscription or sitting through lots of ads.
You can now access music whenever you want and in massive, gluttonous quantities.
In this highly competitive context, how can music sites stand out from the crowd and add genuine value?
“creating a true soundtrack to your life”
By developing ever more sophisticated options to customise your listening or, in the words of Spotify chief executive Daniel Ek, by “creating a true soundtrack to your life”. This digital music giant has just launched “Now”, which gives you musical recommendations based on whether you just got up or you’re in need of an after-lunch boost.
This is certainly an exciting step in the right direction.
Yet, for the most part, this new “contextualisation” draws on human-curated songs chosen to appeal to the majority. The same can be said for Apple Music, which mainly targets people who have no idea what to play. This means that the latent recommendation power of the millions of long-tail playlists uploaded to YouTube, Google Play or SoundCloud is left untapped.
Finding exciting new music is highly rewarding, but the effort that goes into it can frustrate even the most determined of us. Too many new songs, too many new talents, too much “noise”.
And this is where deep learning and Niland come in.
Niland was set up in 2013 by a group of young researchers from IRCAM specialising in machine learning.
Basically, we use what we know about machine listening and deep-learning algorithms to capture the musical, cultural and emotional characteristics of songs based on their acoustic features — and not just their popularity. This way we can provide the right sound for the right user at the right time.
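The idea of matching songs by what they sound like, rather than by popularity, can be sketched very simply: summarise each song as a vector of acoustic features and compare vectors. The sketch below uses invented feature values and plain cosine similarity for illustration; it is not Niland’s actual pipeline, which learns far richer representations with deep networks.

```python
import math

# Hypothetical acoustic feature vectors (e.g. tempo, energy, brightness),
# invented purely for illustration.
songs = {
    "mellow_piano": [0.2, 0.1, 0.3],
    "upbeat_pop":   [0.9, 0.8, 0.7],
    "dance_track":  [0.8, 0.9, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, catalogue):
    """Return the catalogue song whose features best match the query's."""
    others = {name: feats for name, feats in catalogue.items() if name != query}
    return max(others, key=lambda name: cosine(catalogue[query], others[name]))

print(most_similar("upbeat_pop", songs))  # → dance_track
```

Because the comparison is driven entirely by the audio features, an obscure track that sounds right can surface just as easily as a hit — which is exactly what popularity-based recommendation misses.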
The easy-to-use API “understands” the mood and content of each song.
The first version of this technology won the MIREX competition in 2011, and these yet-to-be-beaten results have been honed over the last two years through relentless R&D. Our service is currently being used to provide music licensers like SongFreedom, TBWA and Jamendo with the perfect song for adverts and films. Automatic classification helps them automate a costly process, while the music similarity search option builds a completely new client experience.
And, now, we want to apply this technology to help you provide meaningful contextual playlists and recommendations.
Watch this space.