Alexa, play me a love song.

Researching and pretotyping the future of music

Gautham S
MHCI 2020: Amazon Music
5 min read · Feb 12, 2020


A few of our favorite love songs from around the lab!

Welcome back! 💖 This publication follows the MHCI Amazon Music team as they seek to eliminate the barrier between private digital music experiences and “in real life” (IRL) music experiences. We’ll be cataloguing our sprint-by-sprint process, as well as any insights we gain along the way.

In the spirit of Valentine’s Day, here are some of our favorite love songs:

For more information about our project, MHCI, and our team, check out our first blog post.

Follow this publication for updates on our progress and milestones!

Insights from our kick-off

Post-kick-off!

We facilitated a three-hour virtual kick-off meeting on Friday, January 31st, where we had meaningful conversations about the project with our client and got to know our contacts — Mike, Landon, and Danny — on a more personal level. The meeting was successful in that it helped us learn our client's goals and expectations while overturning (and confirming) certain assumptions about the project.

Some areas that Amazon Music has been focusing on include:

  • Amazon Music aims to create a coherent product-service ecosystem by improving the fragmented experience of transitioning between platforms and devices.
  • Conversational agents — Alexa, for instance — can serve as the vehicle for creating such a coherent experience.
  • The current streaming service can be improved by a smarter, more personal music recommendation system and a closer connection between users and artists.

Although all of these problem spaces have great potential for deeper exploration, our client encouraged us to examine the broad landscape of music experiences and identify the problem through our own research. With our solution, we should aim to inspire novel interactions with music, which will eventually inform innovation in the Amazon Music service.

Stakeholder Mapping

No need to squint — insights are down below!

After our kick-off, we had a deeper understanding of our problem space but only a vague grasp of the major players operating within it. We conducted a stakeholder mapping activity to capture the key stakeholders in the music streaming space, decipher the interactions between them, and build a shared understanding of who they are.

From the stakeholder map, we were able to draw a few key insights.

First, our pain points:

  • There is a lack of direct connection between content providers (musicians, album artists, lyricists, etc.) and users.
  • The customer journey as a user transitions from one music touch point to another (e.g. in-home listening to in-car listening) is unclear and disjointed.

Next, potential opportunity gaps:

  • Opportunity for co-creation between users and Amazon Music, and between users and content creators.
  • Opportunity to deepen the connection between users and content creators through concerts, artist bios, and more.

The Cove

Our team read a piece by Alberto Savoia entitled Pretotype It, in which Savoia coins a new term for the design methodology exercised by Jeff Hawkins with the original Palm Pilot: the pretotype. Pretotypes are a way to test the need for an idea early in the design process, to see whether it is worth pursuing. Just entering the second sprint of our 8-month-long Capstone, our team decided that a pretotype was the best way to run our first immersive research experiment.

Welcome to our pretotype, The Cove.

The Cove is a play on Plato's Allegory of the Cave: when participants enter The Cove, they are immersed in the “shadows” of the future realities we want to explore in music streaming.

Our first iteration of The Cove pushed the boundaries of users’ existing mental models of voice user interface (VUI) performance, specifically the formality commonly associated with voice agents like Alexa or Siri. Voice agents, as we know them today, feel like butlers: we engage them with a prompt, and they deliver the information or music we’d like to hear.

To flip this personality on its head, our team developed two voice agents with the same functions, but very different conversational properties: a formal “Butler,” and a friendly “Pal.” In The Cove, we had participants engage with both the Butler and the Pal during a music discovery exercise. What we found was:

  • The informal VUI was seen as more “likeable”
  • Context and social cues highly influenced whether someone preferred the Butler over the Pal
  • The novelty of the Pal was enjoyed by all participants
  • The Pal was seen as more intelligent than other typical voice agents

We ran into several issues while first setting up The Cove. The pretotype relied on real-time input into web apps to simulate our VUIs, so many participants experienced lags before the VUIs responded to their answers. We also built The Cove in a public space to reach as many participants as possible, which made it difficult to control noise levels outside the curtains (the major concern being that participants would break out of the immersive environment we had built).

Pretotyping gave our team the ability to work with our hands, the chance to explore ideas that would be hard to fully implement early on, and the forgiveness to fail fast. The Cove lives on beyond its first iteration, with the next delving even deeper into the world of music streaming, VUIs, and the connections we build with the robots around us.

Thanks for reading! Stay tuned for more updates from our team, and comment your favorite love song down below! 💖🎵

Sure thing, MHCI Amazon Music Team. Here’s “Just the Two of Us” by Grover Washington Jr.
