“Context is the New Genre”

Why it’s all about the Why


Spotify’s recent press event was interesting for a number of reasons. Not only did it indicate the direction in which the company is heading; it also highlighted a number of product releases that sparked varying reactions from the media.

  1. Content — they’ve confirmed that they will be adding video, podcasts and news to their products. This is mostly what the press focused on (here, here and here).
  2. Running — they have developed technology that recognises your tempo while running and matches the song choice to it. This was mentioned only as a footnote in most reports.
  3. Relevance — they are trying to improve the user experience by better understanding each user and serving up the right content at the right time. This feature didn’t elicit much of a reaction either.

By focusing on whether Spotify may have become a label or how Spotify may be competing against YouTube or why additional content might not be the right fit for a music service, the media managed to overlook the huge potential of the Running and Relevance features. It’s easier to compare existing offerings in a marketplace than it is to think about how a new product will be received by a market. This post will examine how context is going to impact the future of music discovery and consumption.

“Music is moving away from genres — People don’t search for Hip Hop or Country anymore, but rather they search around activities or a particular experience.” — Daniel Ek

At Soundwave, we echo this and believe that the future market leader in digital music will be defined by its (1) songs, (2) sound and (3) sequencing.

Songs — The race for content is over. The best services have extensive catalogues that can satisfy the modern music listener with every conceivable genre and sub-genre.

Sound — The quality of digital files is now subject to the law of diminishing returns. Only the most demanding audiophiles are willing to pay a premium for better sound.

Sequencing — The winner of the digital music space will be the platform that can provide the best sequencing of music to match a listener’s preference and desired experience. This requires a maniacal focus that starts from the ground up with a user-centric approach.



Problem/Solution

The existing solutions to the sequencing problem rely on (1) throwing too many options at listeners by providing a vast database of music that tries to cover every genre and context, (2) song-matching algorithms that try to blend similar songs together, or (3) celebrity-style playlists curated by ‘music experts’ who try to influence a listener’s choice. None of these methods put the user first. In a world of unlimited content with on-demand access, these generic solutions fall short.

Only by truly understanding the listener’s tastes, preferences and day-to-day habits can the experience be personalized for each individual. One of our advisors, Matthew Hawn, has labelled this the ‘Interest Graph’ which is a fitting description. Focusing on the Interest Graph will lead to better products, more engagement with increased listenership figures and additional revenues from advertising.

This is something that Spotify have obviously been thinking about for some time. The title of this post was a soundbite provided by The Echo Nest’s Paul Lamere at SXSW earlier this year (slides here). Their recent product releases essentially turned theory into practice.

For each song play, Spotify analyses the Who, the What, the How, the Where, the When and the Why — #HowListen. Think of it like the musical equivalent of Cluedo.

  • Who = User profile/demographic info
  • What = Song metadata
  • How = Listening habits
  • Where = Locational info
  • When = Timestamp
  • Why = Context signals
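The six dimensions above can be sketched as a simple song-play event record. This is a hypothetical illustration of the idea, not Spotify’s actual schema; every field name here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SongPlayEvent:
    """One song play annotated with the six Cluedo-style dimensions.

    A hypothetical model -- the fields are illustrative, not a real schema.
    """
    user_id: str                  # Who: user profile/demographic key
    track_metadata: dict          # What: song metadata (title, artist, BPM, ...)
    listening_habits: dict        # How: skips, repeats, shuffle, volume
    location: tuple               # Where: (latitude, longitude)
    played_at: datetime           # When: timestamp of the play
    context_signals: dict = field(default_factory=dict)  # Why: sensor-derived context

play = SongPlayEvent(
    user_id="user-42",
    track_metadata={"title": "Californication", "artist": "Red Hot Chili Peppers"},
    listening_habits={"skipped": False, "repeat_count": 2},
    location=(40.7678, -73.9718),           # Central Park
    played_at=datetime(2015, 6, 2, 17, 45),
    context_signals={"activity": "running", "pace_mph": 6, "weather": "rain"},
)
print(play.context_signals["activity"])  # → running
```

The Why field is the only one that requires anything beyond the play itself, which is exactly the point of the next section.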

Why Now

Since the inception of digital music, it’s been easy enough to work out the Who, What, How, Where and When. For the most part, these data points were accessible strings of metadata accompanying each song play.

The Why has only recently become available thanks to a number of technological advancements, primarily in the field of mobile sensors. The latest iPhone devices (5S onwards) contain a motion coprocessor that can handle data from the iPhone’s sensors (the accelerometer, gyroscope and compass) without waking the phone’s main processor.

The latest Samsung smartphone devices also contain, among other sensors, a heart rate sensor, a fingerprint sensor, a barometer, a light sensor, a proximity sensor, a gesture sensor and a Hall sensor. There are also numerous signals available from third-party services such as Google Fit and Apple’s HealthKit, which store aggregated data in the cloud.


You might be wondering how all of these mobile signals affect what music you want to listen to. In short, context signals are any available reference points that allow a digital music service to understand the environment in which the music is consumed.

Do you like running while listening to music? Do you listen to music while you’re studying in the library? Do you prefer to rock out in the comfort of your own car?

All of these environments can now be detected while you consume music on your mobile device. Your accelerometer and biometric data combined with locational and timestamp information can indicate that you may or may not be the next Usain Bolt while running around Central Park for your weekly hour of exercise.
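To make the accelerometer example concrete, here is a minimal sketch of how an activity might be inferred from raw motion samples. The thresholds and window size are illustrative assumptions, not values from any real product, and a production classifier would use far richer features.

```python
import math

def classify_activity(samples, window=50):
    """Guess the user's current activity from accelerometer samples.

    `samples` is a list of (x, y, z) readings in units of g. The thresholds
    below are illustrative assumptions, not a real product's values.
    """
    recent = samples[-window:]
    # Deviation of each reading's magnitude from the 1 g of gravity
    deviations = [abs(math.sqrt(x * x + y * y + z * z) - 1.0) for x, y, z in recent]
    avg = sum(deviations) / len(deviations)
    if avg < 0.05:
        return "still"      # sitting in the library, perhaps
    if avg < 0.4:
        return "walking"
    return "running"

print(classify_activity([(0.5, 0.5, 1.5)] * 50))  # → running
```

Combined with a timestamp and a location fix, even this crude signal is enough to distinguish the Central Park jog from the library study session.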

As you can see below, 41 of the top 100 playlists on Spotify are context based! Only 17 of the top 100 are genre based. The way that we discover music has evolved to meet the consumption habits of today.

Context is the new genre.

@plamere presentation on How We Listen to Music

Historical Consumption

Think about it in terms of historical music consumption habits: if you could only listen to music on a record player, then sorting vinyl in your local record store by artist name or genre made perfect sense, since carrying a record player around was impossible for all but the most hardened hipsters.

Although cassette and CD players allowed for music consumption on the go, there was no way to understand what you were doing while listening to that music. The advent of the digital music file and the Internet allowed for more metadata to be sent up with each song play but this was typically limited to song, user and approximate location information. Knowing my IP address is very different from knowing that I’ve just passed the Zoo in Central Park running at 6mph at 5.45pm on a rainy Tuesday evening.

We often have fond memories of certain events based on the music we were listening to at the time. Hearing the first line of the Red Hot Chili Peppers’ ‘Californication’ instantly brings me back to a summer spent working in the US as a student. What about the song you listened to on repeat when your first love broke your heart? What married couple doesn’t remember the song from their first dance (*cough *cough)?

@plamere presentation on How We Listen to Music

The above context-related playlists are representative of many of the emotional and/or activity-based connections we all have with music. We’ve always mapped certain songs to certain memories. This is the first time that it’s been technically possible to index them outside of our own heads. That progression shouldn’t be underestimated.

Although matching the song tempo to your steps per minute might seem like a niche use case, the endless possibilities of this new context-based approach shouldn’t be examined through the lens of one use case. Context signals should instead be examined through a much wider aperture of what they could represent.
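Even so, the tempo-matching idea is simple enough to sketch: given a cadence in steps per minute from the phone’s motion sensors, pick the tracks whose tempo fits it, counting a half-tempo track (one beat every two steps) as a match. The function and data here are purely illustrative.

```python
def match_tracks_to_cadence(tracks, cadence_spm, tolerance=5):
    """Return tracks whose tempo matches a runner's cadence, best match first.

    `tracks` is a list of {"title": ..., "bpm": ...} dicts; `cadence_spm` is
    steps per minute. All names and thresholds are illustrative assumptions.
    """
    def distance(bpm):
        # Consider the track at its stated tempo and at double tempo,
        # so a 81-BPM track can still drive a 162-spm stride.
        return min(abs(bpm - cadence_spm), abs(bpm * 2 - cadence_spm))

    return sorted((t for t in tracks if distance(t["bpm"]) <= tolerance),
                  key=lambda t: distance(t["bpm"]))

library = [{"title": "A", "bpm": 160},
           {"title": "B", "bpm": 82},
           {"title": "C", "bpm": 120}]
print([t["title"] for t in match_tracks_to_cadence(library, 162)])  # → ['A', 'B']
```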


Predicting the Future

The promise of the Internet of Things is starting to be realised in many industries, not just music. This trend is only going to accelerate as we go from roughly 2B smartphones to 5B smartphones in the next 50 months. Each smartphone will be fitted with even more advanced sensors and there will be better connectivity between each of these context signals.

These technical advancements count for little, though, until there is a better understanding of the user, and that starts with an intention to truly put the Interest Graph above everything else. The IoT has often fallen short because the trade-off between privacy and personalisation is too steep. Letting strangers on a dating app know where you live is still creepy. Letting a music service know about your Interest Graph is much less intrusive, which is why music is poised to showcase the best parts of the IoT while minimising the privacy risk.

That means we’re about to witness the biggest change to music discovery and consumption since the MP3 file made its way into our world. This is an exciting area that we’re currently working on at Soundwave. To give some practical examples of how this change might manifest itself, I’ve set out a list of potential context-based listening use cases that may occur in the not-too-distant future:

  • Your alarm clock wake up will be the most suitable song to ease you into the day. This alarm will trigger based on the best time to wake you within a specified time range, the amount of sleep that you’ve had, the day of the week and the weather outside. Your mobile device can already work this out using date and time information combined with motion and environmental sensors.
  • You jump in the shower and the humidity and orientation sensors on your mobile device recognise that it’s time to blast some Phil Collins (my guilty pleasure shower music!) as you get ready for the day.
  • You jump into your car for the morning commute and your device suggests relaxing music because the traffic ahead is about to break your heart. The soothing Beethoven will connect with your wireless speakers in your car using co-presence device technology combined with third party traffic info.
  • When you get home that evening and start cooking, your favourite music to cook to is preloaded and ready to go. The playlist for tonight’s dinner party is also ready, thanks to a quick look at the ‘Dinner Party’ event flagged in your calendar app.
  • As you wind down for the night and retire to bed, some relaxing music lets you gently drift off to sleep. Light sensors combined with locational signals and device orientation confirm that you’re at home and going to bed. The music stops when you’ve fallen asleep.
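
Each of the scenarios above boils down to the same pattern: a bundle of context signals is matched against rules that select what to play. A minimal sketch of that pattern, with entirely made-up signals, rules and playlist names:

```python
def pick_playlist(context):
    """Map a bundle of context signals to a playlist name.

    `context` is a dict of sensor- and calendar-derived signals; the rules
    and playlist names are illustrative assumptions, not a real product's.
    """
    rules = [
        # Shower: high humidity while at home
        (lambda c: c.get("humidity", 0) > 80 and c.get("at_home"), "Shower Classics"),
        # Commute: in the car with heavy traffic ahead
        (lambda c: c.get("in_car") and c.get("traffic") == "heavy", "Soothing Beethoven"),
        # Calendar says it's dinner-party night
        (lambda c: c.get("calendar_event") == "Dinner Party", "Dinner Party"),
        # Dark, at home, phone face down: winding down for bed
        (lambda c: c.get("light_level", 1.0) < 0.1 and c.get("at_home"), "Wind Down"),
    ]
    for matches, playlist in rules:
        if matches(context):
            return playlist
    return "Daily Mix"  # sensible default when no rule fires

print(pick_playlist({"in_car": True, "traffic": "heavy"}))  # → Soothing Beethoven
```

A real system would learn these rules from the Interest Graph rather than hard-code them, but the shape of the problem is the same.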

If music is getting commoditised, then the service that can provide the best sequencing stands to benefit the most. While everyone else pays lip service to a user-first approach, Spotify stepped up and shipped a forward-thinking product way ahead of the competition.

I’m looking forward to seeing which service will be next to grasp why it’s all about the why.

It would be great to hear from you. If you’re into music, feel free to check out @Soundwave. If it’s the chats you’re after, Tweet at me. If you liked this post, please recommend it.