Open Music Initiative Summer Lab 2017
Open Music Initiative. June 5–July 28 in Cambridge, MA.

The Open Music Initiative Summer Lab is an 8-week program led by the MIT Media Lab's Digital Currency Initiative and Berklee College of Music. During the program, four teams are challenged to envision a future of music and its industry through the lens of human-centered design, distributed ledgers, and the OMI API. Throughout the program, each team generates prototypes to develop and refine a venture concept, in a process managed by IDEO.
Each team started out with a different brief to tackle. In this article, I will recap my experience in the summer lab. If you are interested in the other groups' work too, please check it out!
Echowe
Before describing my role, let me give a brief summary of what our team created over the past 8 weeks.

Our brief was “Commercializing mix tapes built from back catalog and original material”. By exploring the licensing and copyright landscape surrounding music covers and sampling, we realized that musicians need a platform that can understand music from many different rights holders and compensate them properly. So we created Echowe, a sample marketplace that gives creators access to sample songs while allowing copyright holders to easily specify licensing rules and express their intent for the song.

The process works like this: a creator uploads a mashup to Echowe, our system recognizes the samples used by matching them against our database with machine learning, and it generates a smart contract that makes the licensing process easy and streamlined. Whenever the mashup is downloaded, streamed, or performed, a percentage of the generated revenue is distributed back to the original copyright holders of the samples used.
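The flow above can be sketched end to end in a few lines. This is a toy model under stated assumptions: the 20% royalty rate, the even split among sample owners, and the function name `process_play` are all illustrative, not Echowe's actual implementation.

```python
# Toy sketch of Echowe's payout step, under assumed parameters.
# ROYALTY_RATE and the even split are illustrative, not Echowe's real rules.
ROYALTY_RATE = 0.20  # share of each play's revenue routed to sample owners

def process_play(sample_owners: list[str], play_revenue: float) -> dict[str, float]:
    """Split the royalty pool evenly among the identified sample owners."""
    pool = play_revenue * ROYALTY_RATE
    per_sample = pool / len(sample_owners)
    return {owner: per_sample for owner in sample_owners}

# One stream earning $1.00, with samples from two rights holders:
print(process_play(["label_a", "label_b"], 1.00))  # {'label_a': 0.1, 'label_b': 0.1}
```

In the real system this payout rule would live inside the generated smart contract rather than in application code, so that distribution happens automatically on-chain.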
You can read more about our prototype here: Chapter 1, Chapter 2, Chapter 3, Chapter 4, and our Poster.
My work
My primary role at the summer lab was to implement the audio identification system and the smart contract functionality on Echowe.
Audio Identification
I used the Chromaprint algorithm to identify audio in mixtapes. Chromaprint converts the input audio into a spectrogram, an image that shows how the intensity at specific frequencies changes over time. The spectrogram is processed further by mapping frequencies onto musical notes. The result is a representation of the audio that is quite robust to changes, and it isn't hard to compare two such representations to check how “similar” they are. Using Chromaprint, Echowe detects songs at very good rates when users upload songs that are not very different from the originals. However, Echowe still cannot detect samples in very complicated mashups.

This raises a question about the granularity of sampling. For instance, the sequence of three notes C, D, and E can occur in any kind of song. So if I make a song containing that sequence, does it mean I sampled someone's song? It is crucial to define the minimum amount of matching material required to count a song in a mixtape as a sample. We are open to suggestions.
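To make the comparison step concrete, here is a minimal sketch of how two Chromaprint-style fingerprints can be scored for similarity. A Chromaprint fingerprint is a sequence of 32-bit integers, one per audio frame, and two recordings are similar when corresponding integers differ in few bits. The helper names `bit_error` and `similarity` are illustrative and not part of the Chromaprint library itself.

```python
# Sketch: scoring two Chromaprint-style fingerprints by bit agreement.
# Helper names are hypothetical, not the Chromaprint library's API.

def bit_error(a: int, b: int) -> int:
    """Number of differing bits between two 32-bit fingerprint values."""
    return bin(a ^ b).count("1")

def similarity(fp1: list[int], fp2: list[int]) -> float:
    """Fraction of matching bits over the overlapping frames (0.0 to 1.0)."""
    n = min(len(fp1), len(fp2))
    if n == 0:
        return 0.0
    errors = sum(bit_error(x, y) for x, y in zip(fp1[:n], fp2[:n]))
    return 1.0 - errors / (n * 32)

# Identical fingerprints score 1.0; unrelated ones hover around 0.5.
print(similarity([0xFFFFFFFF, 0x0F0F0F0F], [0xFFFFFFFF, 0x0F0F0F0F]))  # 1.0
```

A detection threshold on this score is exactly where the granularity question above bites: set it too low and incidental three-note overlaps register as samples, set it too high and heavily processed mashups slip through.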
Smart Contract
We attempted to deploy an Ethereum smart contract on Echowe so that creators of original songs are paid directly and automatically when their music is used in mixtapes. The percentage of the revenue each original creator receives is calculated from their popularity, which I retrieved via the Spotify API. However, despite help from developers in the relevant industry, our team could not complete the smart contract functionality on Echowe within the allotted period, largely due to the difficulty of building the consistent audio identification system mentioned above.
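As an illustration of the payout rule described above, here is a sketch that divides mashup revenue among rights holders in proportion to Spotify's track popularity score (an integer from 0 to 100 returned by the Spotify Web API's track endpoint). The function name, the flat proportional rule, and the even-split fallback are assumptions for illustration, not Echowe's exact contract logic.

```python
# Sketch: popularity-weighted revenue split, assuming Spotify's 0-100
# popularity score per track. The proportional rule is illustrative,
# not Echowe's actual smart contract logic.

def revenue_shares(popularity: dict[str, int], revenue: float) -> dict[str, float]:
    """Divide `revenue` among rights holders proportionally to popularity."""
    total = sum(popularity.values())
    if total == 0:
        # Fall back to an even split when no track has popularity data.
        even = revenue / len(popularity)
        return {artist: even for artist in popularity}
    return {artist: revenue * pop / total for artist, pop in popularity.items()}

shares = revenue_shares({"artist_a": 80, "artist_b": 20}, 10.0)
print(shares)  # {'artist_a': 8.0, 'artist_b': 2.0}
```

In the intended design this calculation would run inside the deployed Ethereum contract, with the Spotify popularity figures supplied at contract-generation time.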
After the summer lab, some of our group members are continuing to develop Echowe remotely; any help and suggestions are appreciated.
