Fresh Air From Digital Musicology

Zoltan Komives
Published in Tido Music
Jun 8, 2017


At Tido, we have a great enthusiasm for cultural heritage. We get really excited about cool UX. And we are determined to make music publishing great again. We do this by connecting users with the music they love and by bringing together all the facets of music: audio, video, notation, and commentary.

It’s very easy though, in the middle of the everyday routine, to forget how our next baby step leads towards the grand vision of this rich, connected musical world. The annual Music Encoding Conference provided a great opportunity for us to take a moment and contemplate, from a developer’s perspective, what the future of music publishing could be.

Before I dive into the details, it might be best to ask this question: why do we even bother with musicology? Isn’t musicology that “so-called” science tucked away in the deepest corners of the humanities, one that only a handful of people even know exists, and that most of those dismiss as boring at best and utterly useless at worst? Nope. In my view, musicology is talking about music, and if you have anything interesting to say about Beyoncé’s “Love on Top”, and especially if you claim that her pants being stripped off at the same moment the bass stops has political significance, then you are doing musicology.

So musicology, to me, is the invisible connective tissue between music’s different facets, and since Tido’s business is to bring these connections alive, musicology is our business.

Now that we’ve cleared that up, let’s see what was on the menu this year.

Image Interoperability

A very thought-provoking presentation was given by Andrew Hankinson about the International Image Interoperability Framework (a.k.a. IIIF, pronounced “triple-eye-eff”). IIIF is a set of application programming interfaces (APIs) based on open web standards. The IIIF Image API, for example, defines a URL scheme for serving image resources at multiple dimensions and resolutions, and even provides the means to address, annotate and retrieve small portions of images. The banner image for this article was retrieved from the Gallica collection with this very technique: if you replace 250,250,3550,900 with full inside the URL, you can see the full image from which it was cropped.
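As a rough sketch of how this works, an IIIF Image API request packs the region, size, rotation and quality of the desired image into the URL path itself. The server and identifier below are hypothetical placeholders, not a real Gallica endpoint:

```python
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Build an IIIF Image API request URL following the pattern
    {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

base = "https://iiif.example.org/image"          # hypothetical IIIF server
full = iiif_image_url(base, "score-page-1")      # the whole page
crop = iiif_image_url(base, "score-page-1",      # just a banner-sized strip
                      region="250,250,3550,900")
print(full)
print(crop)
```

The region `250,250,3550,900` means “a 3550×900-pixel rectangle starting at (250, 250)”, which is exactly the kind of crop described above.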

Image annotated with an X-ray layer (Source: http://resources.digirati.com/iiif/an-introduction-to-iiif/)

It was developed by a few, is used by many, and is being adopted by even more cultural heritage institutions around the world to host digitised collections of books, sheet music and more.

I don’t see why music publishers shouldn’t benefit from the same techniques too. Even though I am realistic enough to appreciate that, at the moment, music publishers are lagging behind these institutions in terms of technical expertise, I am bold enough to wish they were able to teach book publishers a lesson by showing a greater appetite for true digital transformation: thinking in APIs.

Music Addressability

Music addressability — the way we talk about certain bits of music — was an overarching topic throughout the conference. We can address audio or video by timecode, but we need other means to address musical notation. This is the problem the Music Addressability API is trying to, well, address.
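To give a flavour of the idea, the Music Addressability API expresses a selection of notation as measures, staves and beats appended to a score’s URI. The sketch below is my loose reading of that scheme, with a hypothetical score URI; the exact syntax lives in the API’s own specification and may differ in detail:

```python
from urllib.parse import quote

def ema_selection(document_uri, measures, staves, beats):
    """Build a Music Addressability-style expression: the (escaped)
    document URI followed by measure, staff and beat selections,
    e.g. measures='1-2', staves='1,2', beats='@1-@3'."""
    return "/".join([quote(document_uri, safe=""), measures, staves, beats])

# Hypothetical score: beats 1-3 on staves 1 and 2 of measures 1-2.
expr = ema_selection("https://example.org/scores/mass.mei", "1-2", "1,2", "@1-@3")
print(expr)
```

The point is that the selection is a plain URL: anything that can cite a web page can now cite two bars of a specific stave.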

This API came up in talks by many different participants, and it made us reflect on how commentary is handled. The “Citations: The Renaissance Imitation Mass” project uses the Music Addressability API to compile a large set of assertions about similar passages in Renaissance parody masses, a genre based on the idea of quoting pre-existing music (think cover songs, 16th-century style). The assertions, which also record who made them, are eventually published as nanopublications: tiny objects on the web that use the Web Annotation Data Model and well-defined vocabularies to make the assertions parsable by machines.
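A minimal sketch of what such an assertion might look like as a Web Annotation, with all URIs hypothetical (this is an illustration of the data model, not a real published nanopublication):

```python
import json

# A commentary body targeting a short passage of notation.
# Every URI here is a made-up placeholder.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "creator": "https://example.org/people/some-musicologist",
    "body": {
        "type": "TextualBody",
        "value": "This passage quotes the opening of the model chanson.",
        "format": "text/plain",
    },
    # The target could be a Music Addressability-style selection URI.
    "target": "https://example.org/scores/mass.mei/1-2/1,2/@1-@3",
}
print(json.dumps(annotation, indent=2))
```

Because both the body and the target are just web resources, the same shape works whether the commentary is text, audio or video.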

Preview of a Tido Masterclass

In the Tido Masterclasses we are linking expert artists’ video commentary to specific points in the score using URLs, and this is not very far from the idea of nanopublications: I can imagine that in the not-so-distant future, expert and curated commentary (text, audio, video — just like in the Tido masterclasses today) will also be published in similar ways, where a piece of commentary relating to a short segment of notation and/or audio will represent the unit of publication.

What if @johnrahern could express his claims about Beyoncé’s pants (or political stance) as nanopublications? And what if Keith Freund, in his compositional analysis of “Single Ladies”, could point right into the score instead of clumsily referring to measures in prose?

Dynamic Notation

It was also very good to see how many digital musicology projects make use of dynamic notation, rendering passages of pieces (as selected by a Music Addressability API request) or displaying entire digital critical editions. Verovio has become the standard choice of dynamic music notation renderer for digital musicology projects, and we look forward to seeing how it develops in the future.

Back to the office

We don’t often have the opportunity to meet and exchange ideas with such an innovative and open-minded bunch of folks as the digital musicology community, but when we do, we always feel we’ve recharged our batteries for the grand endeavour of leading the music publishing industry into the digital age.
