In the year 2025…

Parker Nussbaum
MHCI 2020: Amazon Music
Jul 7, 2020

And other musings about the future of music listening IRL

Welcome back, fellow readers and UX junkies! This publication follows the MHCI Amazon Music team as we hurtle toward finishing our capstone project exploring the future of music experiences IRL. We’ll be cataloguing our sprint-by-sprint process, as well as any insights we gain along the way.

Coming hot off a pivot last month, the team spent the last couple of weeks as busy as can be. We decided on a narrative (or structure) for our project based around an ever-present AI, we laid out the core use cases for the AI based on our previous research, we ideated on these core use cases (which we have since dubbed “the five pillars”), and we broke into small teams exploring these pillars further through ideation and prototypes. Let’s dig in!

For more information about our project, MHCI, and our team, check out our first blog post. ✨

Our (New) Narrative

After spending several meetings examining the most prominent friction points and the areas where the value of algorithm ownership can truly shine, the team came to a novel understanding of algorithm ownership. The idea: rather than viewing your algorithm or interacting with it through a card, you interact with it as a companion AI. Our framework is listed below:

  • The algorithm is an entity — it flows and connects all the access points we’ll present in this document.
  • Users no longer own and listen to a music app, they own and listen to an algorithm presented as an artificial intelligence.
  • The project focus will be on designing the algorithm and the way that users will interact with it across different IRL experiences through this AI.

Assumptions and Core use cases

With our new narrative in place, we began to wonder what the core use cases of this AI would be and which interactions would need to be explored with users. Before ideating on those areas, we first aligned internally as a team on our assumptions about the future of music listening in 3–5 years:

  • Habitual listening recommendations are perfect
  • Spontaneous or active listening recommendations depend on user control
  • Access to Alexa is ever present at home and on the go
  • Frictionless data transfer between devices
  • Users are willing to share with strangers

From this consolidated list, we looked back through our research and dug up the core problems users face in music listening today, as well as the core needs we validated. This led us to our core use cases for the interactive AI:

Ownership

  1. Use your habitual stations
  2. Learn more about yourself/your music identity
  3. Control your active music listening

Discovery

  1. Present an awesome music discovery for you
  2. Refine your algo based on your own natural inputs

Group curation

  1. Group curation with friends/family
  2. Group curation with crowdsourcing/strangers/geo-location

Sharing

  1. Among close friends (new mixtape)
  2. With strangers/not-close friends/co-workers
  3. Between artists and fans

“The Five Pillars” and blue sky ideation

A screenshot of the “five pillars” Figma document

Based on the core use cases, we then spent a day doing individual ideation on each of the four separate pillars. This also included bringing back old concepts from previous studies that had been forgotten. These ideations were consolidated in a Figma file, where we began to see patterns in which features we were internally aligned on. It also exposed the need for a fifth column — specifically, one supporting multi-modal experiences that utilize other Amazon products.

These pillars were rebranded based on their value to customers and how we felt their function could be summarized:

  • More accurate music recommendations
  • Learn about music identities (your own / others’)
  • Make IRL discovery of music more seamless
  • Co-curation/co-listening
  • Easier multi-system experiences (e.g. Prime Video, Prime Shopping, etc.).

From this point, the team broke into 3 separate sprints to explore the pillars further and test some base level experiences with users!

“Hang out with Boo,” your personal music guide

Image from our DJ chatbot study

Based on the notion that more accurate music recommendations are borne out of better conversation and feedback between users and their music systems, we wanted to test how users would respond to conversing with an AI. To pretotype this “cheaply,” we asked subjects to send us printouts of their Spotify history and gave them a list of questions that could be answered by a chat bot customized to their music tastes. This chat bot (us in disguise) would then respond to their questions and start a dialogue with them.

Overall, users really enjoyed this experience and liked the idea of having a “digital companion” DJ to assist with their curation needs. There were also some more nuanced insights:

  • Users were not that interested in “tuning” the AI’s personality.
  • Participants were most excited by functionality beyond music streaming (like suggesting what record to buy or what concert to go to).
  • Users wanted to hear a characterization of their listening behaviors rather than just their habits.
  • This AI could fill a void in users’ lives as a friend to discover music with.

Learning about your musical day and seamless IRL music discovery

Visualization exploring the “bookends” concept

Following up on a blue-sky ideation, one of the concepts explored from pillar two was a visualization of your musical day. Dubbed the “bookend,” this feature would plan your day musically and visualize it as you move between IRL modes (e.g., commuting to working). While this concept proved popular internally, there were mixed feelings from the users who explored it and from the client during its reveal. We have decided to put this feature on hold for now as we ideate more on pillar two and look for other ways to incorporate identity into our system.

Visualization exploring discovering music based on geolocation

Alternatively, a concept that proved very popular with the client was centered around seamless IRL music discovery. Users could use a map-like interface to see what music was being played near them, then choose to visit that location or listen to the music as if they were there. Building off this notion, if AR becomes more readily available, perhaps users could see others’ music bubbles and begin to mix genres as they collaborate in public spaces.

“The Coffee Shop Study” and music co-curation

Based on previous research, co-curation is one of the biggest friction points in IRL listening experiences. To test a possible solution to streamline the process of group curation, we created a fake digital coffee shop where users could DJ and influence the public space. Each user had the opportunity to like, dislike, comment, and add their own songs to the mix. By placing the scenario in the most challenging case (co-curating between strangers), we felt the results could easily apply to co-curating between friends.

Some of the findings include:

  • Common interest can enable curiosity and foster interaction to facilitate music sharing between strangers.
  • Listeners would use the public group curation feature to discover new music and broaden their music taste. According to one of the participants, “it was refreshing to discover and hear the songs that weren’t on [their] tracks.”
  • A sense of ownership with a venue — in this case, the coffee shop — can be formed through the interaction because listeners are essentially defining the music fingerprint for the cafe by tuning in.
  • Listeners value anonymity in public space group curation. It is crucial to provide a means of opting in/out when sharing their data.

So what’s next?

Coming together and finishing up all the deliverables before our presentation date of July 29th! There is quite a lot of work to do over the next few weeks — but we look forward to updating you then.

Thanks for reading! Stay tuned for more updates from our team! 💖🎵👏
