Hello from the OMI Summer Lab at IDEO, where four teams are envisioning the future of music. We are Elias Jarzombek, Gabriel Rothman, Jenna Klein and David Kim, and we form team BREAD. Together we are working on a way of experiencing musical provenance. We were tasked with the brief:
Identifying individuals for their contribution to single tracks in new works.
Using the human-centered design approach championed by IDEO, we have been developing our ideas and have recently homed in on a concept. We envision an immersive experience that encourages the manipulation and exploration of the individual components that make up a musical work.
We began to address our prompt by exploring song provenance and how music has traditionally been distributed. This led to the exploration of several themes: transparency, discovery, efficiency and engagement. From there, we formulated questions for how we might approach these converging topics around a central venture.
- How might people distinguish who did what, how, when, and where?
- How might songs provide split histories, sample usage, and licensing information?
- How might audiences be able to access the history of songs?
- How can we reduce the need for middlemen?
- How might people have real-time knowledge of when sampling occurs?
- How might audiences pay digital tips?
We asked ourselves how we might establish a direct link between creator and consumer, since the current systems involve many complex layers that are difficult to navigate. This led to the question: what if a song were autonomous and managed its own history and distribution? It could document all contributors, changes and adapted versions from its creation to its current state, while also maintaining its own copyright and licensing. It could even control its distribution channels, monetary and otherwise, between creators, collaborators and audiences.
To do this it would utilize what is known as a smart contract, which, just like a normal contract, is an agreement between two (or more) parties. The “smart” aspect means that the process is automatic and is embedded in the song itself. For example, if you wanted to license a song for a remix you are making, a smart contract would allow you to interface directly with the song. It would then issue the proper licensing fee and handle the transaction, no middlemen required.
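To make the idea concrete, here is a minimal sketch of how a song-embedded smart contract might settle a remix license automatically. All names, fees, and royalty shares are invented for illustration; this is plain Python simulating the behavior, not a real blockchain or smart-contract API.

```python
from dataclasses import dataclass, field

@dataclass
class SmartContract:
    """Licensing terms embedded in a song; settles fees automatically."""
    song_title: str
    remix_fee: float          # flat fee for a remix license (illustrative)
    contributors: dict        # name -> royalty share (shares sum to 1.0)
    ledger: list = field(default_factory=list)

    def license_remix(self, licensee: str) -> dict:
        """Collect the fee and split it among contributors, no middlemen."""
        payouts = {name: round(self.remix_fee * share, 2)
                   for name, share in self.contributors.items()}
        receipt = {"licensee": licensee, "song": self.song_title,
                   "fee": self.remix_fee, "payouts": payouts}
        self.ledger.append(receipt)   # the song documents its own history
        return receipt

contract = SmartContract(
    "Example Song", remix_fee=100.0,
    contributors={"writer": 0.5, "producer": 0.3, "vocalist": 0.2})
receipt = contract.license_remix("remix_artist")
# the writer, producer, and vocalist each receive their share of the fee
```

The key property is that the transaction and the record of it live in one place: paying the fee and updating the song's history are a single step, with no intermediary reconciling the two.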
This idea overlaps with the concept from Team Blue, who are “reimagining the process by which artists are compensated for their digital works.” Since both teams are working in the same environment with the same context, and compensating artists properly depends on correctly identifying those contributors, it is easy to see how we both arrived at semi-autonomous music as a solution.
While Team Blue is focusing primarily on distribution, we expanded on musical provenance: How can we track a song’s changes and contributors through time? We conceived a tool that would operate like a Github for music. There are already version control tools for musicians (such as Splice or Blend), but ours would include functionality for acquiring rights for licensed material. Importing licensed material would initiate a dialogue to allow you to negotiate a contract for that content. The changes would be tracked and each version would be stored.
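The "Github for music" idea can be sketched in a few lines. The class and method names below are our own invention (not Splice's or Blend's API): importing licensed material records the negotiated rights, and each commit stores a version with its contributor, much like a git history.

```python
import hashlib
import time

class SongRepo:
    """Toy version-control repo for a song, with license tracking."""

    def __init__(self, title):
        self.title = title
        self.versions = []    # every committed version is stored
        self.licenses = []    # rights acquired for imported material

    def import_licensed(self, source_song, fee):
        # importing licensed material initiates a (simulated) negotiation,
        # and the resulting contract is attached to the repo
        self.licenses.append({"source": source_song, "fee": fee})

    def commit(self, contributor, change_note):
        # each version records who changed what, like a git commit
        payload = f"{contributor}:{change_note}:{time.time()}"
        version_id = hashlib.sha1(payload.encode()).hexdigest()[:8]
        self.versions.append({"id": version_id, "by": contributor,
                              "note": change_note})
        return version_id

repo = SongRepo("Demo Track")
repo.import_licensed("Original Sample", fee=25.0)
repo.commit("alice", "added drum loop from licensed sample")
repo.commit("bob", "mixed vocals")
```

Because every version and every acquired license is stored alongside the song, the full chain of contributors and rights is recoverable at any point in its history.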
Week 2: Refresh
So we had a vague idea for a product — now what? A vital aspect of the IDEO human-centered design approach is the iterative cycle: the practice of improving an idea by repeating the design process using important elements from the previous iteration. Our initial concept was somewhat uncertain, but we had come up with ideas that we found engaging. During a session with Eric Chan on how to perform IDEO-style brainstorming, we generated ideas and converged on a single venture concept.
For this brainstorm we identified two major themes that we wanted to address: evolution/engagement and history/transparency/attribution. From there we came up with five subcategories: community, generative experience, composition/distribution, version control, and impact. Throughout the process we filtered our ideas by voting on the ones that we particularly liked. For each of the categories we generated as many “How Might We’s” as possible. A “How Might We” (HMW) is a way of framing an insight so that it can be seen as a design opportunity. The HMWs that we chose to explore further were:
- How might we represent songs as fluid and how might we experience songs as fluid?
- How might we track the rippling effects of a song?
- How might we gauge the impact of a song on an individual’s life?
The underlying theme that we retained from our first iteration is the idea that a song is a dynamic, living entity that has tangible and intangible effects on all of us. We produced potential solutions for each HMW in the form of “what if” scenarios, like “Since music is fluid, what if we consumed it through a straw?” or “What if music, when released, emitted a shockwave that could be propagated by others in the vicinity?”
After deciding which solutions we felt were most intriguing, we converged on a concrete idea: a virtual reality world where you can immerse yourself inside of musical works and experience how all of a song’s various elements flow together.
Immersing yourself in a song
Our prototype is a music exploration experience. It takes place in a virtual desert-like reality that is populated by musical works. These works manifest themselves as living tracks — interconnected clusters of shape and color that pulse and flow. As you wander the landscape, you hear an ambience, or musical wind — faint melodies and song elements that you can follow if you wish. As you approach a work, this ambience dies down and you hear the song that you are approaching. When you actually step inside the song, you can dissect it and view the individual contributors that helped to create it. Colors flowing around you represent different sounds, and you can highlight one to learn the story of who created it and isolate that sound from the rest of the song. In this way, you can interactively experience a song’s provenance.
As a data structure, the network of connections is just a graph, or group of nodes linked to other nodes. Each individual work can be viewed as a tree (a graph with a root node) that has connections to other works and individuals. This visualization shows how we might explore a work in two dimensions. Works (red) and people (green) would both be roots of their own trees. The graph below uses dummy data, but with real data, one could keep going deeper and deeper. We are exploring options like MusicBrainz to get this relationship information.
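The structure described above can be sketched as a simple adjacency list, where works link to contributors and contributors link back to works. The song and contributor names below are dummy data, as in our visualization; in a real build, the relationships could come from a source like MusicBrainz. A breadth-first walk from any root node gives the "deeper and deeper" exploration: its contributors, then their other works, and so on.

```python
from collections import deque

# adjacency list: works link to contributors, contributors link to works
graph = {
    "Song A": ["Alice", "Bob"],
    "Song B": ["Bob", "Carol"],
    "Alice": ["Song A"],
    "Bob": ["Song A", "Song B"],
    "Carol": ["Song B"],
}

def explore(root, max_depth=2):
    """Breadth-first walk outward from a root node, layer by layer."""
    seen, frontier, layers = {root}, deque([(root, 0)]), {}
    while frontier:
        node, depth = frontier.popleft()
        layers.setdefault(depth, []).append(node)
        if depth < max_depth:
            for neighbor in graph.get(node, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, depth + 1))
    return layers

layers = explore("Song A")
# depth 0: the root work; depth 1: its contributors;
# depth 2: the other works those contributors appear on
```

Because works and people are both ordinary nodes, the same traversal works whether the root is a song or a person — exactly the property that lets both be roots of their own trees.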
In three dimensions, we imagine a song as a series of shapes, each shape representing a musical element. Hovering over one allows you to isolate the individual’s contribution.
We also wanted to let the user explore how music is connected. Direct links between songs (i.e., one song sampling another) are represented by lines on the ground between the two works. You can follow these to discover their origins. When inside a song you can also select and hold on to an individual contributor. This will cause all works that this person has worked on to appear in the vicinity, until you deselect or “drop” them. Selecting multiple contributors will show only the songs that all of the selected people have worked on. Another feature we are currently discussing and in the process of prototyping is the ability to move through time as a way of interacting with the history of a song. As you can probably tell, we have a lot of work ahead of us as we continue to develop and improve our idea.
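Under the hood, the "hold multiple contributors" behavior is just a set intersection over each selected person's catalogue. Here is a hedged sketch with invented credit data; `shared_works` is a name of our own, not part of any existing tool.

```python
# dummy credit data: contributor -> the set of works they appear on
credits = {
    "Alice": {"Song A", "Song C"},
    "Bob": {"Song A", "Song B", "Song C"},
    "Carol": {"Song B", "Song C"},
}

def shared_works(*contributors):
    """Return only the works every selected contributor appears on."""
    sets = [credits[name] for name in contributors]
    return set.intersection(*sets) if sets else set()

# holding one contributor shows everything they worked on;
# each additional selection narrows the result
shared_works("Alice", "Bob")           # works Alice and Bob share
shared_works("Alice", "Bob", "Carol")  # narrows further
```

Dropping a contributor simply removes their set from the intersection, so the surrounding works expand again as you deselect.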
From here, we will be user testing a preliminary round of digital mock-ups for user flow. We will use this feedback to make sure that our approach is in line with the way people would inherently experience our venture. We also plan to continue researching how we will deliver music through our anticipated platform directly to audiences, and how the interface will vary depending on use (i.e., listeners versus artists and potential collaborators). For next week, we hope to get through a second round of mock-ups for user testing and further solidify a visual language that will be used to convey this immersive experience.