Music Tech: How to Make Pied Piper Work in Real Life

I was recently able to acquire a Seaboard GRAND — a new keyboard instrument made by an emerging British company named ROLI. Having put in the order almost a year ago (it finally arrived last month after numerous delays), I was eager to give it a try.

The basic idea of the Seaboard is that it allows keyboard players to add expression to their playing by letting them “bend” the pitch of a note directly on the keyboard itself. In tech lingo, this can be seen as an improvement over the “knobs and buttons” UI of the electronic music instruments that preceded it. Given how little progress has been made in that area in the last 100 years (knobs are the worst!), the emergence of this new product was a welcome sight to many keyboard players out there, myself included. It’s rumored that some musicians even started crying after playing it for the first time.

I had expected to spend grueling months, or even years, getting around the instrument, but I had underestimated the amount of detail and craft that went into the Seaboard’s design. After getting used to the size and squishiness of the board itself, I found that I could easily adapt most of my keyboard/piano skills to the instrument right away. It was “backward compatible” enough to accommodate the training I had acquired in the classical world, but at the same time it expanded the possibilities of performance further than before.

Since I was a kid, it bothered me that there was no way to play the notes in between the “cracks” of the piano keys — but now there’s finally a way for me to explore the full spectrum of the audible range. I consider the Seaboard my instrument of choice now, even over the pianos and keyboards I’ve played in the years before. (This is in spite of the fact that it doesn’t have official PC support yet — I had to jerry-rig it somewhat to make it work.)

Given that the Seaboard manages to deliver on all of its promises, I do think ROLI’s ambition of revolutionizing the way people think about music-making has a good chance of succeeding at this point. But the Seaboard itself is basically a MIDI interface that sends simple commands to the computer in order to produce its sounds — no fancy algorithms or complex motion tracking here — just a good, simple design that communicates with the computer in a straightforward manner.
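To make “simple commands” concrete: a MIDI stream is just short byte sequences. Here’s a minimal sketch in pure Python of the two message types most relevant to an instrument like the Seaboard, a note-on and a pitch bend (the channel and note numbers are arbitrary examples, not anything specific to ROLI’s hardware):

```python
def note_on(channel, note, velocity):
    """Note-on: status byte 0x90 | channel, then note and velocity (7-bit each)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def pitch_bend(channel, amount):
    """Pitch bend: status 0xE0 | channel, then a 14-bit value split into
    two 7-bit bytes (LSB first). `amount` ranges 0..16383; 8192 = no bend."""
    return bytes([0xE0 | (channel & 0x0F), amount & 0x7F, (amount >> 7) & 0x7F])

# Press middle C (note 60) at moderate velocity, then bend slightly upward:
msg1 = note_on(0, 60, 100)   # 3 bytes total
msg2 = pitch_bend(0, 9000)   # slightly above the 8192 center point
```

Three bytes per event is the whole “language” — which is why sliding a finger along a Seaboard key can be transmitted as nothing more than a rapid stream of pitch-bend messages.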

I do think that MIDI is an under-appreciated, underutilized medium that often gets overlooked in recent iterations of music-tech products. Back in 2007 I wrote a piece called Autonomous Fantasies for General MIDI, where the MIDI file itself was conceived as the piece. It was a little too “out-there” for its time (probably even now), but my belief in the potential of the medium still hasn’t waned. In a way, the Seaboard has validated my belief that there are still lots of opportunities to be had in building innovative products around the MIDI language.

In Season 1 of the hit TV show Silicon Valley, Richard’s idea of creating a compression algorithm that shrinks media files to incredibly small sizes became a central part of the plot, and the main reason for his success. The idea itself seems plausible enough to potentially come true, but for now it remains largely fictional. But if sharing music and media over the internet at real-time speeds is the ultimate goal, there’s already a way to go about building something like it.

Music as a Language vs. Music as an Object: MIDI and MP3s

During the commercialization of the music industry in the 80s and 90s, there was a strong push toward the “objectification” of musical products, emphasizing the importance of music as something that could be bought and sold like a commodity or publicly traded good. So it became the norm for musicians to treat album sales, merchandise sales, licensing deals, fashion and clothing lines — basically anything that could be quantified — as their “marks of success”. The philosophy of “the medium is the message” also relates to these developments, since it favors the notion that the value of art derives from its framework, rather than from what’s contained within.

MP3s and file-sharing networks, starting with Napster, disrupted the music industry in immeasurable ways, but largely served as an extension of this same philosophical framework. So we’re now basically living in an accelerated, exaggerated version of what had been building up until that point — aesthetically speaking, anyway. We tend to think of music as “being” rather than “saying”, since the former makes it much easier to work with as a sellable product.

The antithesis to the “music as an object” idea is the notion that music is actually a language — that there’s a certain grammar and syntax to the way we listen to and appreciate the musical experience itself. But the music itself, like the words we speak to one another every day, doesn’t actually have a physical form, hence its ethereality. It also means that music is not subject to individual ownership, since it belongs “to the people”.

The language of music exists in the “hows” — notes, volume levels, rhythms and durations. Like a script written for an actor, it gives instructions to the performer/renderer on how to go about creating the music. The object of music, on the other hand, is the “whats” — the specific renderings of the sounds themselves in their fleshed-out form. The easiest way to explain this divide is to reference the mediums that epitomize each respective philosophy best: MIDI vs. MP3s. MIDI gives the computer instructions on how to play the music, whereas MP3s simply play back the music that’s already there.

In recent years, MIDI has come to be associated with “bad, cheesy” music because the way it’s typically rendered tends to be very poor. But this is not the fault of the system itself — MIDI is only a set of instructions sent to the computer, if anything. If it sounds bad, it’s because the on-board drivers themselves are cheaply made, which is usually the case in most native systems. Although MIDI is an indispensable tool for professional musicians out there, most operating systems ship with underdeveloped MIDI systems, since it’s generally assumed that the average consumer wouldn’t really care either way. (Which is probably true, for the most part.)

In terms of storage, however, MIDI has a huge advantage over what traditional media files have to offer — my 55-minute piece mentioned above came out to less than 500 KB in total, even after applying panning and program events and filling up all 16 channels. Even if I were to create a 10-minute piece every day for the rest of my life, I could easily fit my entire life’s work onto a small USB drive. If I were to do the same with high-quality audio files, I could only store a few of them at a time, at most.
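A rough back-of-the-envelope check of that claim. The figures here are illustrative assumptions (roughly 10 KB of MIDI data per minute, versus uncompressed CD-quality audio at 44.1 kHz, 16-bit, stereo), not measurements from any particular file:

```python
MIDI_KB_PER_MIN = 10                         # assumed average MIDI data rate
WAV_MB_PER_MIN = 44100 * 2 * 2 * 60 / 1e6    # 44.1 kHz * 2 bytes * 2 ch ≈ 10.6 MB

pieces = 365 * 50                            # one 10-minute piece a day, 50 years
midi_total_gb = pieces * 10 * MIDI_KB_PER_MIN / 1e6
wav_total_gb = pieces * 10 * WAV_MB_PER_MIN / 1e3

print(f"MIDI: {midi_total_gb:.1f} GB vs. WAV: {wav_total_gb:.0f} GB")
```

Under these assumptions, fifty years of daily composing fits in under 2 GB as MIDI, while the same music as uncompressed audio runs to roughly two terabytes — a thumb drive versus a stack of hard drives.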

It’s also worth noting that MIDI is immune to degradation, since it’s not reliant on fidelity in order to exist. If you buy a book, the book will eventually crumble, but the words will live on as they’re copied from format to format. Sound files, on the other hand, tend to lose their integrity over time, since small artifacts are introduced every time a lossy file is re-encoded or converted from one format to another.

MIDI also has a flexibility advantage that sound files don’t offer — if you want to change the instrumentation or tempo/BPM, or alter the song itself, it’s simply a matter of editing the commands within the file. Its language-based properties also make it ideal for networking and effects-chaining, which are tremendously useful for building online and streaming-based applications and products. Sound files, on the other hand, are much more difficult to alter, both because of their larger file size and because they’re essentially “locked” into what they are as things-in-themselves.

Fidelity: MIDI vs. MP3s

Even a non-audiophile who isn’t too picky about sound fidelity probably wouldn’t tolerate MP3 files encoded at less than 128 kbps, which comes out to about 1 MB per minute. For the same amount of music, MIDI data would be less than 10 KB, which makes the latter a much better candidate for real-time streaming. If you compress MP3 files too much they start to sound unbearably grainy, whereas MIDI playback typically suffers from poor sound quality due to the lack of natively built high-fidelity sounds. Solving the former problem would require a genius-level algorithm similar to Pied Piper’s product, but the latter problem is actually solvable, since it’s limited by its design rather than by its technology.
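The bandwidth gap is easy to check. The 128 kbps figure is from the text above; the ~10 KB-per-minute MIDI figure is an assumed average, not a fixed property of the format:

```python
mp3_bytes_per_min = 128_000 / 8 * 60   # 128 kbps -> bytes per minute (~0.96 MB)
midi_bytes_per_min = 10 * 1024         # assumed ~10 KB of MIDI per minute

ratio = mp3_bytes_per_min / midi_bytes_per_min
print(f"MP3: {mp3_bytes_per_min / 1e6:.2f} MB/min, "
      f"MIDI: {midi_bytes_per_min / 1024:.0f} KB/min (~{ratio:.0f}x smaller)")
```

Roughly two orders of magnitude: a MIDI stream at that rate needs only about 1.4 kbps, comfortably within even a dial-up connection, let alone a modern one.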

The MIDI-based approach basically uses the power of the user’s own hardware to render high-quality sounds, as opposed to burdening the network itself with high-bandwidth data. If you can convince users (or the companies making operating systems) to install a MIDI driver package on their device/computer, then lossless, real-time streaming of high-quality music becomes very much possible. The Windows MIDI drivers of today tend to be unbelievably poor — Apple products are slightly better, but not by much. But this is a problem that can be solved using existing technologies and products, if the will to improve it is there.

A natively built MIDI driver package also opens up a new “sandbox” for people and future musicians to play around with on their own, potentially spurring the creation of a new creative community and landscape. People won’t have to buy expensive sequencers or virtual instruments to get a good sound out of their computers anymore — they can do it directly with what they already have, right there.

MIDI is the perfect solution for instrumental, electronic, and soundtrack-oriented styles of music (it would be ideal to build a driver into a video-game engine, for example), but it lacks a crucial element that will make its widespread adoption difficult: there’s no real way to reduce lyrics and singing voices to MIDI commands. The human voice is too unique, varied and complex to be captured this way, so there’s no choice but to use direct audio capture for these types of endeavors. But there is already a dedicated pipeline for handling this type of data: the phone. Something that uses the two in conjunction could potentially turn into something workable somewhere down the line.

Future Product Design Using MIDI

JAM with Chrome

In 2012 Google released a web app called JAM with Chrome, which allowed users to hold musical jam sessions with anyone across the globe, directly through their web browser. Although in its current form it looks mostly like a toy (since you’re only “jamming” with your keyboard, not a real instrument), if the application had a full MIDI interface and driver support, it’s not inconceivable that it could be put to professional use. Band members would be able to rehearse and play with one another in real time even when they’re not physically in the same location, and keep a documented copy of the vocal and MIDI data so they can further refine it for the final product. The system won’t replace the immediacy of a live performance, but it could become workable enough to be useful for both amateur and professional musicians to stay in touch with one another artistically.

MIDI also has huge potential for video and web game titles, since its commands can be embedded directly into the code itself. Altering sound files is a CPU-intensive process, so most games rely on pre-made samples and clips to render their sounds and music properly — you might have a different visual experience with the game every time you play (aided by the video card’s GPU), but the music will always stay the same no matter what. With MIDI, the sounds themselves can be altered in real time and be directly responsive to the gameplay itself — the soundtrack might speed up as the player moves faster and faster, as one example.
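A minimal sketch of that last idea in pure Python. The speed-to-BPM mapping is an invented example, though the 60,000,000/BPM conversion is the standard way MIDI set-tempo meta events encode tempo (as microseconds per quarter note):

```python
def bpm_for_speed(player_speed, base_bpm=90, max_bpm=180):
    """Map a gameplay speed in 0.0..1.0 to a tempo, clamped to a musical range."""
    speed = max(0.0, min(1.0, player_speed))
    return base_bpm + (max_bpm - base_bpm) * speed

def midi_tempo_value(bpm):
    """MIDI set-tempo meta events store microseconds per quarter note."""
    return int(60_000_000 / bpm)

# As the player speeds up, the engine re-sends tempo events on the fly:
for speed in (0.0, 0.5, 1.0):
    bpm = bpm_for_speed(speed)
    print(f"speed={speed} -> {bpm} BPM -> tempo event {midi_tempo_value(bpm)}")
```

Because only a three-byte tempo value changes, the game can retarget the soundtrack every frame for essentially zero cost — something that would require continuous time-stretching if the music were a pre-rendered audio file.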

Spore’s space stage music (by Brian Eno) is a good example of music adapting to gameplay, although done with traditional sound files.

Although including sounds and music in the web-browsing experience has become taboo in recent years, another potential application for MIDI is to do exactly that. Folks who were around for the early days of the internet probably remember the horror of having music blast in your ears whenever you landed on the “wrong” website. Sure, it was sometimes annoying and occasionally traumatizing, but in a lot of ways that’s what made the internet a fun and lively place to visit. Nowadays the web is dead silent, lonely and clinical — pushing you to question your existence every time you log on. Which will be the first platform to break the silence?

Part of the problem with the “music blasting in your face” 90s (before the great purge of the 00s, after Facebook overtook MySpace) was that the music wasn’t really responsive, so its presence largely got in the way of whatever the user was trying to do. It wasn’t so much that people disliked music as that they disliked its unrequested intrusion into their personal space. Perhaps something more subtle and nuanced could bring music and sounds back to the web once again — either way, you’re only likely to get that kind of responsiveness from a MIDI-based platform, since sound files tend to lack the flexibility that web UX requires.

Not having changed much since the 90s, the MIDI language itself could probably use an update to include more instruments, functions and features as well. Given its long neglect, mustering interest in its development may take a while — but it’s certainly easier than trying to invent a new language from scratch. The committees that regulate these standards have kept things relatively untouched, since the need for changes wasn’t really there, but new and innovative products such as the ROLI Seaboard may be able to spark a new wave of interest in the medium if they garner enough attention from the general public.

Most of these ideas require people to re-imagine the way they think about music-making in a fairly significant way — not an easy thing by any means. But I do think that a lot of the innovative products in the music-tech space will come out of the MIDI world, since there are a lot more opportunities there where improvements can be built in the near, rather than distant, future.


Oh, in case you’re interested, here’s a recent live-soundtracking video I made with my Seaboard — it’s attached to a Let’s Play video I made with Kerbal Space Program. I’m planning to put out a video or two weekly, so like and subscribe if it floats your boat!

Clapping shows how much you appreciated Ryan Tanaka’s story.