Spotify’s Recommendation Engine
Behind the models that power Spotify’s Discover Weekly
Music isn’t like news, where it’s what happened five minutes ago or even 10 seconds ago that matters. With music, a song from the 1960s could be as relevant to someone today as the latest Ke$ha song.
— Daniel Ek
Like many other fields, the music industry benefits from Machine Learning, which can provide every listener with a personalized playlist. Spotify is a prime example of this technology. Every Monday, over 100 million Spotify users receive a new playlist, Discover Weekly, tailored to them. This custom playlist contains around 30 songs that they have never listened to before but will probably enjoy. Many people love and rely on this service to increase their exposure to new music.
History of Music Curation
In the early 2000s, Songza developed a product for automating music curation. Manual curation meant that a team of music experts put together playlists by hand that they thought sounded good. Songza built a respectable user base, but the major drawback of its approach was that it did not take into account the nuances of each listener’s individual taste in music.
Like Songza, Pandora was one of the first players in the music streaming business, but it took a different approach to curating playlists: its music experts manually tagged attributes for each song. They listened to each song and added descriptive tags like ‘folk,’ ‘slow,’ ‘rap,’ or ‘love.’ With that information, Pandora created playlists of similar songs, i.e., songs with related tags.
Last.fm took yet another approach, called collaborative filtering, to identify songs that users might like. I discuss the details of this approach later in this chapter.
Spotify, however, developed an engine that used three different recommendation models. Its approach changed over time, but one of the algorithms Spotify used to create the Discover Weekly playlists mixed the best strategies of its competitors. Spotify combined three different models to analyze the similarity of songs:
- Collaborative filtering compares an individual’s listening behavior with that of other users with similar taste.
- Natural Language Processing (NLP) analyzes text written on the web about each song and artist.
- Audio modeling uses a song’s raw audio to understand how the song sounds and compares it to other songs.
Collaborative Filtering

Netflix made collaborative filtering popular when it employed the technique to power its recommendation engine. Netflix used customers’ movie ratings to determine which movies to recommend to other, similar users. After Netflix deployed it successfully, collaborative filtering spread quickly throughout various industries and is now often considered the starting point for anyone building a recommendation engine.
Spotify, unlike Netflix, does not have star ratings for its songs. Instead, Spotify’s data is implicit feedback: signals derived not from ratings but from users’ interactions with the software. Spotify uses the stream counts of the tracks that people listen to, as well as other data points, such as whether the user added the song to a playlist or visited the artist’s page.
So, how does collaborative filtering work?
This algorithm makes predictions about the interests of a user (filtering) by collecting preferences from many users (collaborating).
Collaborative filtering works by comparing people with similar taste. In Figure 27.1, person 1 on the left enjoys songs 1, 4, 8, and 10, and person 2 on the right likes songs 2, 3, 4, and 8. Since both listeners agree on songs 4 and 8, the odds are high that each will enjoy the other’s remaining songs: person 1 will probably like songs 2 and 3, and person 2 might enjoy songs 1 and 10.
One user is similar to another when they have listened to many of the same songs. Collaborative filtering analyzes data from many users, finds the patterns among similar ones, and predicts a user’s taste based on what similar users listen to.
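Spotify’s production system relies on far more sophisticated matrix-factorization techniques, but the core idea can be sketched in a few lines of Python. The user names and song IDs below are illustrative, echoing the two-listener example from the figure:

```python
from math import sqrt

# Implicit feedback: each user maps to the set of songs they streamed.
listens = {
    "person_1": {1, 4, 8, 10},
    "person_2": {2, 3, 4, 8},
}

def similarity(a, b):
    """Cosine similarity between two users' binary listen sets."""
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(user, others):
    """Rank every unheard song by the similarity of the users who heard it."""
    heard = listens[user]
    scores = {}
    for other_songs in others.values():
        sim = similarity(heard, other_songs)
        for song in other_songs - heard:
            scores[song] = scores.get(song, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

others = {name: songs for name, songs in listens.items() if name != "person_1"}
print(recommend("person_1", others))  # person_2's songs that person_1 hasn't heard
```

A real system would also correct for song popularity and work on stream counts rather than binary sets, but the filtering principle is the same: similar listeners vouch for the songs you haven’t heard yet.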
Natural Language Processing
Spotify crawls the web looking for blog posts and other text written about music to figure out what people are saying about specific songs and artists. It determines which adjectives and language are used to describe them, as well as which artists and songs are discussed alongside them.
Spotify then analyzes the top terms that describe a particular song or artist. Each artist and song can have thousands of terms describing it, and each term has an associated score indicating how significant that description is for the song or artist. These terms are added to the model of each song and artist, which is then used to decide which songs to recommend to a user.
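Spotify has not published its exact scoring formula, but a classic way to weight such terms is TF-IDF, which favors words that appear often in text about one artist yet rarely in text about others. The artists and snippets below are made up for illustration:

```python
import math
from collections import Counter

# Hypothetical snippets of web text gathered about three artists.
docs = {
    "artist_a": "dreamy synth pop with dreamy vocals",
    "artist_b": "aggressive rap with heavy bass",
    "artist_c": "mellow folk pop acoustic",
}

def top_terms(name, docs, k=3):
    """Rank the words describing one artist by TF-IDF:
    frequent for this artist, rare across all artists."""
    words = docs[name].split()
    tf = Counter(words)
    def idf(word):
        df = sum(1 for text in docs.values() if word in text.split())
        return math.log(len(docs) / df)
    scores = {w: (count / len(words)) * idf(w) for w, count in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_terms("artist_a", docs))  # 'dreamy' ranks first: frequent and distinctive
```

Generic words like ‘with’ or ‘pop’ score low because they appear across many artists, while distinctive descriptors rise to the top, which matches the intuition behind the significance scores described above.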
Raw Audio Models
The third model that Spotify uses not only improves the accuracy of the system but also serves a very important secondary purpose: surfacing new songs. If Spotify only recommended songs that users already stream, new songs would never be recommended or listened to. This model solves that problem by adding new songs to playlists.
Let’s say that someone uploads a song to Spotify. It has only 25 listeners and no mentions on the internet. For this song to end up on someone’s Discover Weekly playlist, Spotify must use the raw audio model, which analyzes the audio of a song to measure its similarity to other songs. This method treats every song equally, whether old or new. So, when the new song turns out to be similar to other songs you like, Spotify adds it to your playlist.
To analyze the raw audio, a track is run through a convolutional neural network (CNN), the same kind of neural network used to analyze images. The network processes the raw audio and extracts characteristics like time signature, key, mode, tempo, and loudness, metrics under which similar songs fall into the same category. This understanding allows Spotify to compare songs based on those key metrics. For example, someone who likes heavy metal might prefer songs that are more “loud.”

By combining these three models, Spotify analyzes the similarity of different songs and artists and adds new, never-heard-before songs to users’ playlists every week. These models made Discover Weekly one of Spotify’s most popular features.
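Spotify’s actual audio networks are not public, but the building block of any CNN is the convolution itself. Here is a toy one-dimensional convolution over an audio-like amplitude signal, using a hand-picked (not learned) kernel:

```python
# A minimal 1-D convolution: slide a kernel across a signal and sum
# elementwise products at each position. Real CNNs stack many such
# layers with learned kernels; this kernel is hand-picked to respond to
# sudden amplitude jumps (a crude stand-in for detecting drum onsets,
# from which a property like tempo could eventually be derived).
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0]
print(conv1d(signal, [-1.0, 1.0]))  # → [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0]
```

The nonzero outputs mark exactly where the amplitude jumps; a trained network learns thousands of such kernels and stacks them to recognize higher-level patterns in a track.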
But music and A.I. intersect in more ways than playlist curation. The field has grown to include music-composing platforms, such as IBM Watson Beat and Google’s NSynth Super. This technology opens the door for anyone to create music, regardless of their musical aptitude. And for most people, it may be their first hands-on experience with A.I. Some argue that the music industry as we know it will disappear; I, however, welcome the new possibilities.