11. Decoupling In Music Visualization

Fostering Loose, Hip Bindings In A Music Learning Aggregator Platform

Decoupling (making sure each software component knows as little as absolutely necessary about the other components around it) helps ensure changes can be made to one software component without impacting others.

Equally important are the freedoms it gives us to mix and match at will in the user interface. Decoupling may be implemented in code, but its impact is directly experienced by users.

A good example of the benefits of decoupling is where any instrument in any configuration and in any key or tuning can (given the right pitch range) be ‘animatable’ (‘drivable’) by a corresponding score. Anything else, and the design has not been adequately thought through.
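
By way of illustration, a minimal sketch of such a compatibility check (all names hypothetical): the platform asks only whether a score’s pitch range fits inside the model’s playable range, and needs to know nothing else about either.

```typescript
// Hypothetical sketch: an instrument model is 'drivable' by a score
// if the score's pitch range fits within the model's playable range.
// Pitches are MIDI note numbers (60 = middle C).

interface PitchRange {
  lowest: number;   // lowest playable/required MIDI note
  highest: number;  // highest playable/required MIDI note
}

// The platform knows nothing about the model beyond its range.
function canDrive(model: PitchRange, score: PitchRange): boolean {
  return score.lowest >= model.lowest && score.highest <= model.highest;
}

// A violin model (G3 upward) can be driven by a melody spanning C4..G5:
const violin: PitchRange = { lowest: 55, highest: 105 };
const melody: PitchRange = { lowest: 60, highest: 79 };
console.log(canDrive(violin, melody)); // true
```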

‘What-If’ Freedoms Through Well-Implemented Decoupling

At a deeper level, decoupling gives us the freedom to experiment with combinations of -perhaps not in all respects compatible- models.

Here we enter the realms of ‘what-if?’ scenario exploration.

We can think of decoupling as a form of ‘clean-room’ technique guarding against contamination by outside dependencies.

Decoupling allows relationships to be inferred rather than directly interrogated, and is the easiest thing in the world to get wrong.

Dependencies often result from excessive caution (providing more detail than strictly necessary), or from a failure to recognize commonalities better separated out or left unstated.

Decoupling is a product of good coding practice, and is in the main served by abstraction (hiding detail behind generic, high-level interfaces).
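
A minimal sketch of what that looks like in practice (all names hypothetical): a score player that emits note events through a narrow, generic interface can drive any animation, because it knows nothing about what is listening.

```typescript
// Hypothetical sketch of abstraction in this spirit: the player
// depends only on a narrow, generic interface, never on a concrete
// animation. New animations can be added without touching the player.

interface NoteEvent {
  midi: number;       // pitch as a MIDI note number
  durationMs: number;
}

interface Animatable {
  onNote(event: NoteEvent): void;
}

class ScorePlayer {
  constructor(private listeners: Animatable[]) {}
  play(events: NoteEvent[]): void {
    // The player knows only that listeners accept NoteEvents.
    for (const e of events) this.listeners.forEach(l => l.onNote(e));
  }
}

// Any model satisfying Animatable can be plugged in unchanged.
class FretboardModel implements Animatable {
  onNote(e: NoteEvent): void { console.log(`fretboard lights up ${e.midi}`); }
}
class PianoRollModel implements Animatable {
  onNote(e: NoteEvent): void { console.log(`piano roll draws ${e.midi}`); }
}

new ScorePlayer([new FretboardModel(), new PianoRollModel()])
  .play([{ midi: 60, durationMs: 500 }]);
```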

Decoupling In Music Visualization

In a music visualization aggregator platform, decoupling is key to achieving freedom of choice in dependent animations — not just of instrument models, theory tools, physical simulations and musical psychophysics, but of the way (2D or 3D) they are displayed.

It’s worth mentally preparing for decoupling right from the outset - with the aim of supporting the widest possible range of user actions.

To this end, whether learning-by-ear, learning-with-notation or (video) learning-by-example, we want to keep the following learning steps as independent of each other as possible.

Decoupling In Music: The Foundational Elements

The diagram to the left tries to summarize the links between learning tools and learning modes (ear or eye).

More importantly, it indicates (crimson bars) those areas where decoupling is essential if we want a diverse and flexible learning environment.

Whatever the original source (sampled audio, MIDI, MusicXML), the discipline of generating notation is central to later decoupling. Whether it is absolutely necessary to users is another matter.

The Role Of Classification Trees

The most natural means of describing and indexing a developing ecosystem of artifacts is the classification tree. It can be interrogated, indexed and used as a repository; it is inherently ‘semantic’ (location is meaning); and it can be infinitely extended, pruned, copied, pasted or drag-n-dropped.

Under certain conditions, classification trees directly support decoupling:

  • they support the natural (layered) modeling of real-world objects (semantic follows physical).
  • each modeling ecosystem of the domain problem space can be isolated in its own, separate classification tree: one each for notations, instrument models, theory tools, physical simulations - and much more.
  • they can be interrogated, their nodes used as repositories (for data AND code), and node contents can (carefully filtered) be displayed, configured, pruned, copied, pasted or drag-n-dropped.
  • generic (model family) nodes contain code — which is configured by means of specific definitions stored in the classification tree’s leaves. In this way potentially thousands of model variants can be configured from a single generic base (sketched after this list).
  • within each of these ecosystems, classification trees can be nested to reflect various internal, conceptual groupings: musical form (construction), function (musical properties) and their directly derivable visual modes - the visual patterns and sequences of patterns (paralleling generative modeling in AI) associated with, for example, chord and chord-sequence patterns on an instrument or theoretical structure.
  • they support natural ecosystem evolution.
  • classification trees and (semantic) URLs are a 1:1 reflection of each other.
  • a classification tree’s twigs and leaves can be put to use across modeling, storage, discovery, selection, population and fine-grained configuration.
  • a classification tree is the cornerstone of generative (exhaustively diverse) ecosystem modeling.
  • otherwise complex visual properties can be stored in classification tree nodes as simple text (for example as JSON).
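
As a hedged sketch of the generic-node-plus-leaf idea above (types, paths and values all hypothetical): one generic model family, stored once, is specialized into concrete variants by plain-JSON leaf definitions.

```typescript
// Hypothetical sketch: one generic model family ('fretted string
// instrument'), configured into concrete variants by JSON stored
// in classification-tree leaves.

interface TreeNode {
  path: string;          // location is meaning
  configJson?: string;   // leaf payload: simple text (JSON)
  children: TreeNode[];
}

interface FrettedConfig {
  name: string;
  strings: number[];     // open-string tunings as MIDI notes
  frets: number;
}

// The generic code lives once, at the model family node.
function buildFrettedModel(cfg: FrettedConfig): string {
  return `${cfg.name}: ${cfg.strings.length} strings, ${cfg.frets} frets`;
}

// Leaves specialize the family: thousands of variants from one base.
const guitarLeaf: TreeNode = {
  path: "/instruments/chordophones/lutes/guitar",
  configJson: JSON.stringify({
    name: "guitar", strings: [40, 45, 50, 55, 59, 64], frets: 19,
  }),
  children: [],
};

const cfg: FrettedConfig = JSON.parse(guitarLeaf.configJson!);
console.log(buildFrettedModel(cfg)); // "guitar: 6 strings, 19 frets"
```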

‘Convention over configuration’ or ‘configuration over convention’? Using classification trees intelligently and consistently, these approaches are one and the same. This is an important (so to speak ‘generic’) quality, with implications across the entire field of visual modeling.

A Common Interface To All Dependent Animations

Another classification-tree-like data structure (albeit with multiple trees) is the exchange format MusicXML, its primary visual expression being musical notation. This notation is based on widely understood and accepted conventions: time measures, pitch-to-note-name mappings and relative voicings.

The learner might not always need notation, but a music visualization platform can greatly benefit from it. If MusicXML acts as a repository for data conventions such as note names and durations, notation constructed from it using a scriptable graphics format such as SVG can act either as a container for, or a playback-resolved timeline snapshot of, derived data. Examples? Note frequency, octave, and, for equal temperaments, a MIDI or similar modular indexing number.
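
For 12-tone equal temperament, those derived values follow mechanically from the MIDI number; a short sketch, assuming the A4 = 440 Hz convention:

```typescript
// Sketch: derived note data for 12-tone equal temperament,
// assuming the A4 = 440 Hz convention.

function midiToFrequency(midi: number): number {
  // Each semitone is a factor of 2^(1/12); MIDI 69 is A4.
  return 440 * Math.pow(2, (midi - 69) / 12);
}

function midiToOctave(midi: number): number {
  // MIDI 60 is C4, so octaves roll over at each C.
  return Math.floor(midi / 12) - 1;
}

console.log(midiToFrequency(69)); // 440 (A4)
console.log(midiToFrequency(60)); // ~261.63 (middle C)
console.log(midiToOctave(60));    // 4
```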

However simple notation’s visual structure may be (and whether displayed or not), its time-aligned data becomes traffic across a shared or common interface for all dependent animations.

This shared interface decouples dependent animations from notation, allowing us to associate (within obvious limits such as pitch range or music system) any animation with a score.

Where a feature is used, values are passed in. Where not, either defaults are used or the feature can (“don’t care”) be ignored. For more complex animations, the interface can be extended. It’s as simple as that.
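
A sketch of what such a ‘pass in, default, or ignore’ interface might look like (field names hypothetical):

```typescript
// Hypothetical sketch: the shared playback interface. Required
// fields carry the time-aligned basics; everything else is optional,
// so simple animations can ignore ("don't care") what they don't use.

interface PlaybackEvent {
  timeMs: number;          // position on the playback timeline
  midi: number;            // pitch
  durationMs: number;
  // Optional extensions for richer animations:
  frequencyHz?: number;
  stringIndex?: number;    // only meaningful for string instruments
  velocity?: number;       // dynamics, if the source provides them
}

function handle(e: PlaybackEvent): void {
  // Where a feature is absent, fall back to a default...
  const velocity = e.velocity ?? 64;
  // ...or ignore it entirely: this animation never reads stringIndex.
  console.log(`t=${e.timeMs}ms midi=${e.midi} vel=${velocity}`);
}

handle({ timeMs: 0, midi: 60, durationMs: 500 }); // minimal event still works
```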

Learning Modes

Ear learning (the simplest learning mode) focusses on the direct relationship between heard (reference) audio and played pitch and timbre. It represents a very tight feedback and comparison loop.

Learning with notation focusses on the relationships between the graphical icon, an often shaky mental model of desired pitch and timbre, and what the learner actually plays.

Both can be further supported by visual instrument models and theory tools. Here we enter the realm of music visualization, a -for the most part- unrecognized and unexplored territory.

You can probably already guess which approach incurs the greater overhead.

Decoupling = Flexibility And Freedom

With its audio reference potentially far ‘weaker’, learning from or with notation could be described as somewhat contrived or ‘removed from reality’.

Overcoming this (as does Soundslice with its notation-synchronized video) is -for the moment- a unique selling point (USP).

Learning by ear decouples the learner from any dependency on written notation - but not necessarily the platform.

The important points here are that the platform needs to cater to both ear- and notation-based learning modes, and that -even if not displayed- the ability to construct a score regardless of source facilitates musical decoupling.

More Decoupling Points

Decoupling has application at many levels in a music visualization platform. For example:

  • The music exchange format MusicXML and notational freedom (font preferences, but more importantly, so-called ‘transnotation’).
  • Instrument form (Hornbostel-Sachs classifications) and function (musical properties) are decoupled in that one need have no knowledge of the other.
  • Semantic (RESTful) URLs uniquely identify assets or resources in a classification hierarchy. While routing (structure) is 1:1 mapped, actual content can be inferred, i.e. is decoupled.
  • Instrument classifications and their unique cultural expression (instruments may crop up in several cultures using different construction materials and methods, go under radically different names, yet share the same fundamental form-and-function configuration).
  • Musical properties and the models in which they find application. These can be abstracted out as ‘mix-ins’, to be applied as and when found necessary (a sketch follows this list).
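
Here is a minimal sketch of the mix-in idea from the last point (all names hypothetical): the property knows nothing about the models it decorates, and is applied only where it makes sense.

```typescript
// Hypothetical sketch: a musical property ('tunable') as a mix-in,
// applied to otherwise unrelated models as and when needed.

type Constructor<T = {}> = new (...args: any[]) => T;

// The property knows nothing about the models it will decorate.
function Tunable<TBase extends Constructor>(Base: TBase) {
  return class extends Base {
    tuning: number[] = [];
    retune(tuning: number[]): void { this.tuning = tuning; }
  };
}

class FretboardModel { name = "fretboard"; }
class HarpModel { name = "harp"; }

// Apply the mix-in only where it makes sense.
const TunableFretboard = Tunable(FretboardModel);
const TunableHarp = Tunable(HarpModel);

const f = new TunableFretboard();
f.retune([40, 45, 50, 55, 59, 64]);
console.log(f.name, f.tuning); // "fretboard" [40, 45, 50, 55, 59, 64]
```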

Decoupling = ‘Any’

Applied, the notion of decoupling translates into the English determiner and pronoun ‘any’. In coding practice, it is achieved by rigorous abstraction and encapsulation (modularity).

Here (below) we see -with storage/discovery/retrieval omitted for clarity- how our original diagram translates into code units. As can be expected of such a high-level overview, it is entirely independent of implementation language or environment.

Decoupling In Practice: Code Blocks Representing Sound Encapsulation Practices

In placing decoupling at the centre of every advance, we drive diversity: it is the linchpin to our generative approach to modeling.

Media preprocessing and score (i.e. data) playback are two distinct phases, the latter’s processing overheads well distributed over time - and hence surprisingly manageable.

Once everything has been loaded, only the immediate notation, its controls, where appropriate the music source, and any dependent animations are relevant; the musical form and function blocks are in principle off-loadable.

The alert amongst you will, in this diagram, have picked up on a layered (extendable) interface allowing -depending on their complexity- progressively richer configuration data to be fed to dependent animations. This allows the construction of progressively more complex models, culminating in, for example, (for music theory) 2D and possibly 3D lattices or tonnetze, and (for instruments) complex hybrids.
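
In TypeScript terms, such a layered interface might be expressed as plain interface extension (all names hypothetical): each richer layer adds fields without disturbing consumers of the simpler ones.

```typescript
// Hypothetical sketch of a layered, extendable configuration interface:
// richer model families consume richer layers; simpler ones stay untouched.

interface BaseConfig {
  timeDivisions: number[];       // enough for visual storytelling
}

interface TheoryConfig extends BaseConfig {
  pitchClasses: number[];        // enough for lattices / tonnetze
  latticeDimensions: 2 | 3;
}

interface InstrumentConfig extends TheoryConfig {
  strings?: number[];            // enough for complex instrument hybrids
  frets?: number;
}

// A consumer of BaseConfig happily accepts any richer layer.
function scheduleStory(cfg: BaseConfig): void {
  console.log(`story spans ${cfg.timeDivisions.length} time divisions`);
}

const tonnetz: TheoryConfig = {
  timeDivisions: [0, 480, 960],
  pitchClasses: [0, 4, 7],
  latticeDimensions: 2,
};
scheduleStory(tonnetz); // a richer config passes where a simpler one is expected
```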

Starting with visual storytelling (which, in supplying its own graphics and other narrative objects, requires only a mapping to time divisions), we are able to add physics simulations, hundreds of abstract theoretical structures (music theory tools), and literally thousands of instrument models.

Additionally (but not shown), the playback data provides us with everything we need to interact with sound, color and other ‘user preferences’ libraries.

With entirely exchangeable models, any specific user session will tend to use only one each of a small range of such animations.

Most of these concepts have been end-to-end validated in a proof of concept implementation.

A Missed Opportunity

If decoupling is something we should seek to preserve, there are two central areas of online music processing where there is a distinct need for improvement.

Custom Positioning Elements In MusicXML

The first is the decision by the creators of MusicXML to support custom positioning elements.

This was a bad idea. Musical data and their presentation should always be decoupled, and symbol placement left to layout algorithms.

Hard-coded positioning is likely to cripple transnotation, introduce potential conflicts with software other than the original, and drastically increase the MusicXML payload - with direct impact on speed. This (as backed up by many forum rants) is a recipe for chaos.

Though hardcoded placement elements can be ignored, they do oblige the creators of notation programs (quite possibly a large pool of open-source developers) to consciously decide between algorithmic, custom or dual placement policies. This decision has to be prominently displayed, and will tend from time to time to be overlooked or misunderstood.
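
One plausible ‘algorithmic placement only’ policy is simply to strip MusicXML’s optional positioning attributes (default-x/y, relative-x/y) on import; a sketch, assuming a DOM environment:

```typescript
// Hypothetical sketch of an 'ignore hard-coded placement' policy,
// assuming a DOM environment (browser DOMParser). MusicXML's optional
// positioning attributes are stripped so that all symbol placement
// is left to the layout algorithm.

const POSITIONING_ATTRS = ["default-x", "default-y", "relative-x", "relative-y"];

function stripPositioning(musicXml: string): string {
  const doc = new DOMParser().parseFromString(musicXml, "application/xml");
  for (const el of Array.from(doc.querySelectorAll("*"))) {
    for (const attr of POSITIONING_ATTRS) el.removeAttribute(attr);
  }
  return new XMLSerializer().serializeToString(doc);
}

// A <note default-x="132"> comes back as a plain <note>,
// leaving its placement entirely to the renderer.
```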

Fingerings

The second is the failure of MusicXML to decouple fingerings from other music data. Given music notation is in other respects instrument-agnostic, this is a singular oversight.

If and when a solution is found, legacy fingerings will need continued support.
