5. Online Instrument Learning: Mesmerized By Tech
Simple Oversights: Profound Consequences
In our first post, we looked at the failure of music visualization to embrace data.
Our second post advanced broad, high-level (but concrete) strategies for overcoming this.
Our third post catalogued many of the deficits in online music education.
The fourth post introduced an open-source, non-profit, in-browser music visualization platform which dramatically extends online and remote music teaching and learning possibilities.
This, the fifth post, explores the motivations behind musical instrument learning, how musical identification is increasingly subverted by technology, and suggests how our once profound relationship with musical instruments can be brought back into balance.
Musical instrument learning remains for many something of a haven from the digital world: from the overwhelming pace of innovation, from its complexity, and from its inevitable obsolescence. The benefits to wellbeing of fine hand work (music-making included) are well documented.
Much music learning media pushes the curious towards specific instruments, yet there is manifestly no point in everyone playing, for example, guitar.
What persuades someone to learn to play the saxophone, clarinet or hammered dulcimer: the bandura, sitar, timpan or duduk — or, for that matter, any of the many thousands of other world music instruments? In a coming era of more free time, idle hands and minds are likely to lend such questions much greater weight.
At the heart of musical instrument learning are twin, powerful, and in some senses contradictory motivations: to differentiate oneself, and to connect.
There is, however, a third. Musical identification is an intensely personal and emotion-laden area, something technologists all too easily overlook.
Together, these three key forces are decisive to long-term learning success. Held in balance, the learner is likely to make good progress. Omit any one and motivation ultimately stutters and fails.
The key to their balance -the sweet spot- is increasingly the quality of tool support: the diversity and quality of instrument model integration, the ease and directness of access to online role models and mentors, and the degree to which what has been learned can be leveraged within one's own peer group.
If existing teaching media overlook the vast majority of world music’s music systems, notations, instruments and supporting theory, this is only exacerbated in the stampede towards artificial intelligence, augmented or virtual reality, and gamified learning.
Here the western, 12-tone equal temperament and a handful of hopelessly over-subscribed instruments dominate.
We lose sight of this diversity at our peril. Whatever its sense of entitlement, technology is never more than a vehicle. Putting technology before context -cart before horse, bells and whistles before solid motivation- has never worked well.
Seldom has this been better illustrated than with musical workstations, whose initial appeal and ultimate demise often lie in the same characteristic: complexity.
Our concern here, however, is specifically online learning, which is set to undergo radical change. Some fundamental questions are immediately raised:
- Why the focus on emerging technologies such as WebAR and WebVR (the web versions of augmented and virtual reality) as a basis for ad-hoc, immersive musical learning, when a much simpler, more flexible, well-structured, data-rich and robust in-browser technology is already available in the form of scalable vector graphics (SVG)? The associated animations (notation highlighting, fingering positions, theory tool node highlighting) can be handled with CSS, which is, incidentally, blisteringly fast.
- How do we best bring MusicXML’s data and associated added value (especially modeling expertise) to these emerging technology environments?
- With -theoretically- multiple points of application, why is machine learning not already being more widely applied or integrated within existing music notation stacks?
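To make the first question concrete: the logic behind score-synchronized SVG highlighting is simple enough to sketch in a few lines. The function and field names below are illustrative, not part of any existing library; in the browser, the returned ids would drive CSS class toggles on SVG note elements.

```javascript
// Sketch: given data-bound score events and a playback time, compute which
// note elements should currently carry an 'active' CSS class.
function activeNoteIds(events, timeSeconds) {
  return events
    .filter(e => timeSeconds >= e.start && timeSeconds < e.start + e.duration)
    .map(e => e.id);
}

// Illustrative score data: id, onset and duration in seconds.
const events = [
  { id: 'n1', start: 0.0, duration: 0.5 },
  { id: 'n2', start: 0.5, duration: 0.5 },
  { id: 'n3', start: 0.5, duration: 1.0 }
];

// At 0.75s, two voices sound together:
// activeNoteIds(events, 0.75) → ['n2', 'n3']
```

The heavy lifting (the actual visual change) is then left to the CSS engine, which is exactly what makes the approach fast.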
Worldwide annual spending on machine learning and artificial intelligence already runs to tens of billions of dollars.
It's a similar story both for augmented or virtual reality and for gamified learning, yet -certainly in a musical instrument learning context- all these technologies tend to exist -if at all- in their own rarefied environments.
Their current isolation from data-rich music exchange files such as MusicXML is truly problematic. For example: the user or system data on which machine learning agents ‘learn’ should be the same as that ultimately used in their workaday application. The goal is, surely, an agent configurable to reflect various modes of play.
An agent taught (for example) to recognize chords should, then, be learning from audio or musical scores, where there are multiple considerations such as musical tension, timbre, attack, playing speed and handspan, and certainly not just from arbitrary or algorithmically generated lists of chord examples.
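A minimal sketch of the score-derived side of that idea: extracting a chord-quality feature from actual note data (here, MIDI numbers as they might appear in a score voicing) rather than from a generated chord list. The names are illustrative, and this rule-based classifier stands in for what a trained agent would infer; considerations like tension and timbre would of course require audio.

```javascript
// Reduce a score voicing (MIDI numbers) to its set of pitch classes.
function pitchClassSet(midiNotes) {
  return [...new Set(midiNotes.map(n => n % 12))].sort((a, b) => a - b);
}

// Label a three-pitch-class voicing as major or minor, whatever its
// inversion or doubling, by testing each pitch class as a candidate root.
function triadQuality(midiNotes) {
  const pcs = pitchClassSet(midiNotes);
  if (pcs.length !== 3) return 'unknown';
  for (let i = 0; i < 3; i++) {
    const root = pcs[i];
    const rel = pcs.map(p => (p - root + 12) % 12).sort((a, b) => a - b);
    if (rel.join(',') === '0,4,7') return 'major';
    if (rel.join(',') === '0,3,7') return 'minor';
  }
  return 'other';
}

// triadQuality([60, 64, 67]) → 'major'   (C–E–G as written in a score)
// triadQuality([57, 60, 64]) → 'minor'   (A–C–E, first inversion handled)
```

The point is that the same note data driving the notation display can also serve as training and inference input, which is only possible when the stack is data-transparent.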
WebGL And The Data Disconnect
It's a similar story for WebAR and WebVR, where there is a tendency to fall back on expressively impoverished formats such as MIDI.
The reasons for this lack of integration are in many cases simple:
- these music notation stacks perform poorly as data conduits, primarily because they do not adhere to a data-driven (data-transparent) architecture.
- even if data can be imported to WebGL-based environments, expression is limited, and event synchronization lost.
Taken together, these issues mutually reinforce each other, suggesting innovators may -at least in the short term- be barking up the wrong tree.
They also provide clear justification for reworking these notation stacks on the one hand to leverage the data-driven paradigm, and on the other to revisit their remarkably data-affine partner in crime, scalable vector graphics.
The Data-Driven Music Notation Stack
What, then, do we mean by 'data-driven'? We can think of this in a number of senses (note: the highest-value information is presented first; as an aid to orientation, it may be easier to work from the bottom up).
- added informational value or meta-knowledge. Note data can, in its relation to other notes, be processed to create higher-value information such as intervals, reveal visual patterns associated with music genres, and in turn give us a broad view of the modal landscapes associated with a musical culture.
- data depth, in that base data can be expressed in various ways (a musical note can, for example, be expressed as a name, a frequency or -depending on the modular base used- an index in the form of a MIDI number).
- data bindings can be manipulated en masse at various granularities using selections and hardware-clock synchronized transformations.
- data transparency, in the sense that unique data is given direct expression through multiple, tight graphical and audio bindings.
All are open to musicological comparison. Viewed as a group of capabilities, these represent a paradigm change.
More or less any application exploits data in some form, but seen in this light, many claims to be data-driven are called into question.
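The 'data depth' and 'meta-knowledge' senses above can be sketched concretely. Assuming standard 12-tone equal temperament with A4 = 440 Hz (the convention the series elsewhere contrasts with other music systems), one note has several equivalent expressions, and relations between notes yield derived, higher-value information:

```javascript
// Data depth: the same note expressed as name, MIDI number and frequency.
const NOTE_INDEX = { C: 0, 'C#': 1, D: 2, 'D#': 3, E: 4, F: 5,
                     'F#': 6, G: 7, 'G#': 8, A: 9, 'A#': 10, B: 11 };

function nameToMidi(name, octave) {
  return 12 * (octave + 1) + NOTE_INDEX[name];   // MIDI 69 = A4
}

function midiToFrequency(midi) {
  return 440 * Math.pow(2, (midi - 69) / 12);    // equal temperament, A4 = 440 Hz
}

// Meta-knowledge: an interval, derived from two notes' base data.
function intervalSemitones(midiA, midiB) {
  return Math.abs(midiB - midiA);
}

// nameToMidi('A', 4) → 69; midiToFrequency(69) → 440
// intervalSemitones(60, 67) → 7 (a perfect fifth)
```

Every representation is recoverable from every other; that reversibility is what allows the same underlying data to drive notation, audio and theory tools at once.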
A simple illustrative analogy of the data-driven paradigm? Seeds.
Left to themselves under normal conditions, seeds will grow to assume a shape characteristic ('transparent') of that species.
By manipulating their growing conditions, the plants can be trained to reveal specific qualities, such as their ability to climb, spread, survive arid conditions, cope with shade and so on. Each form assumed is an expression of the underlying plant type, but also of its manipulation.
In this way seed (data set) behaviors can be emphasized, improving our understanding across a broad range of conditions. Some forms of manipulation may even reveal behaviors or structural components completely hidden under normal conditions.
Scope And Impact
So it is with music notation. Using a data-driven approach, notation can be substituted (‘transnotated’), transposed, modulated, transformed (warped in various dimensions) and given alternative forms (tree, heatmap, force layout or node-link tree).
The selfsame data can be fed to dependent score-driven animations, such as instrument models and music theory or analytical tools. Examples? Dynamic nyckelharpa fingering, a circle of fifths (or its 3D variant, the pitch spiral), an interval lattice, or an arc diagram revealing motif repetition in a score.
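A transposition illustrates the pattern: one pure transformation over the note data, after which every bound view (notation, fingering model, theory tool) simply re-renders from the same result. The field names are illustrative.

```javascript
// A transposition as a pure transformation over data-bound notes.
// The original data is left untouched; dependent views re-render from the result.
function transpose(notes, semitones) {
  return notes.map(n => ({ ...n, midi: n.midi + semitones }));
}

const phrase = [
  { midi: 60, start: 0 },   // C4
  { midi: 64, start: 1 },   // E4
  { midi: 67, start: 2 }    // G4
];

const upAFifth = transpose(phrase, 7);  // → G4, B4, D5
```

Because the transformation lives in the data layer rather than in any one rendering, notation, instrument model and theory tool cannot fall out of step with one another.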
Given mechanisms for the dynamic loading of such visualizations (a challenge long cracked), the possibilities are almost without limit.
Aside from a wide range of more or less 'standard' visualizations available for creative reuse, we have those particular to music — the thousands of instruments, the associated theory and score analysis tools, not to mention the many further areas of potential symbiosis. All just waiting to be brought to life by some musical score or recording.
Scalable Vector Graphics And Data Transparency
- we are already at the device capability threshold for widespread SVG animation adoption, and these capabilities will only continue to improve. SVG and the browser DOM are nevertheless massively under-utilized as an alternative to WebAR and WebVR, despite being by far the more tried, tested, readily available and simple medium.
- Through hierarchy, reuse and mix-ins, SVG offers comprehensive instrument and theory tool modeling, reflecting real-world musical diversity. Moreover, anything modeled in SVG on the DOM can be quickly, easily and locally rendered as a bitmapped image for delivery to WebAR and WebVR environments. Indeed, the SVG stack can deliver content to WebAR and WebVR on several levels: data, bitmapped skins (including smooth transformations) and modeling expertise.
- There are widespread reservations concerning SVG in regard to animation speed: in a test on rendering animated scatterplots, however, d3.js was found to comfortably handle up to some 2,000 points in real-time (faster than 24 frames per second) on a recent PC. This threshold lies well above anything we are likely to encounter in interactive musical modeling, especially in simpler instrumental and theory learning contexts — and holds even when running several score-driven, dependent animations.
- Most importantly, however, SVG is normally used only for structural features such as notation glyphs, instrument outlines, strings and frets or keys. The actual transformations (above all note playback and fingering placements, their color and transparency) are usually executed using CSS attributes — which are fast. Indeed, the only really CPU-heavy task that comes to mind is notation scrolling; since this is normally executed on a single background 'g' container element, GPU-offloading workarounds may exist.
- Artificial intelligence is (like MusicXML) equally applicable to SVG and the browser DOM as to WebAR and WebVR. It is applicable at multiple points in a musical instrument teaching stack. Each point of application depends on transient contextual data, which is poorly served in a bitmapped (WebGL-based) environment. In contrast, using modern data visualization libraries, SVG provides for straightforward, complete and very flexible data transparency.
- Regarding data transparency, SVG offers vastly better selection capabilities than a bitmapped application. In a musical score context, we are talking not just of horizontal selections for looping, but (optionally) of vertical selections too, at various granularities and across multiple voices or parts, with tool-tips (mouse-overs) providing, for example, interval display.
There are, then, not only strong arguments in favour of SVG instrument and theory tool modeling, but a strong potential for value delivery from SVG/DOM to WebGL-based WebAR/WebVR environments.
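The horizontal and vertical selections just described reduce, in a data-driven stack, to simple filters over the bound note data. A sketch, with illustrative field names:

```javascript
// Select notes horizontally (a time window, e.g. for looping) and/or
// vertically (a subset of voices or parts).
function selectNotes(notes, { from = -Infinity, to = Infinity, voices = null } = {}) {
  return notes.filter(n =>
    n.start >= from && n.start < to &&
    (voices === null || voices.includes(n.voice)));
}

const score = [
  { start: 0, voice: 1, midi: 60 },   // melody
  { start: 0, voice: 2, midi: 48 },   // bass
  { start: 1, voice: 1, midi: 62 },
  { start: 1, voice: 2, midi: 50 }
];

// Loop the first beat, melody voice only:
const loop = selectNotes(score, { from: 0, to: 1, voices: [1] });
// → [{ start: 0, voice: 1, midi: 60 }]
```

In a bitmapped environment the same operation would mean reverse-engineering pixels; here it is a one-line predicate over the data that the graphics are bound to.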
Reviewing all that has gone before (including in earlier posts), at the heart of music education's problems is a failure to:
- leverage visual learning (and hence rapid assimilation over labored reason).
- acknowledge the learner’s urge toward instrumental differentiation, by extending the music learning stack to embrace music system, instrumental and theory tool diversity.
- break the internet's tendency to isolate by encouraging individual learners and teachers to network, connect and interwork directly online in learner-needs-aligned dialog. A device is just a tool, its central goal to allow us to connect and collaborate in person.
- acknowledge musical heritage as very much a living culture, and work to support the person-to-person teaching and learning critical to its survival. This can be thought of as a re-democratization of the underlying information flows.
- recognize that complex emotions are the major driver of instrumental learning, and focus enablement efforts around the process of early identification.
- transform music education from a cerebral real-estate grab into a wellspring of empowerment, emancipation and opportunity. Acknowledge that people are far more willing to buy into something when the proceeds go directly to the creator. Ergo, focus on peer-to-peer relationships between teacher and student, and make each instrument's natural ecosystem (notation, instrument and theory tools) embeddable in teachers' and learners' own sites: the foundation for a fine-grained, highly diversified teaching and learning network.
- using robust and efficient model templates, leverage the power of the crowd both to progressively populate the system with entire instrument and theory tool hierarchies and to govern it.
- ensure that the libraries underlying SVG animations -whatever their nature- are in every respect economic, sufficient and fast, looking to attain animation speeds not noticeably far removed from those achieved with bitmapped approaches.
- treat some 2,000 simultaneous manipulations at 24 frames per second as the practical limit for in-browser SVG animation.
- find a means to support both algorithmic (including AI-derived) and personal fingering preferences (these imply both the musical context provided by a score, and a high degree of data transparency, i.e. a data-driven implementation).
- provide for progressive, layered instrument configuration from first principles (i.e. base parameters), potentially making every conceivable variant available to end users.
- leverage classification hierarchies at every opportunity, as these directly, naturally and intuitively support a progression from simple to complex models. (In this context, btw, there are clear arguments in favor of semantic or RESTful URL usage throughout).
- go ‘whole-hog’ with peer, asynchronous and synchronous learning.
- focus AI efforts on spanning multiple instrument configurations (the goal being to develop more 'generic' approaches to artificial musical intelligence).
- integrate learning into source-driven, immersive, genre-specific workflows encompassing every aspect of history, culture, influences, personalities and underlying drivers.
- recognize that machine learning is currently expensive, but is likely to become commoditized with time (via the software-as-a-service, or SaaS, model), with the associated skills becoming more widespread. For the meantime, it is probably better to focus on the underlying integration framework.
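One item in the list above argues for semantic or RESTful URLs built around classification hierarchies. A sketch of what that might look like; the base URL, hierarchy and naming scheme are all illustrative assumptions, not an existing API:

```javascript
// Derive a semantic URL from an instrument classification path, so that
// the URL structure mirrors the simple-to-complex model hierarchy.
function semanticUrl(base, classificationPath, instrument) {
  const segments = [...classificationPath, instrument]
    .map(s => s.toLowerCase().replace(/\s+/g, '-'));
  return base + '/' + segments.join('/');
}

const url = semanticUrl('https://example.org/instruments',
                        ['Strings', 'Bowed'], 'Nyckelharpa');
// → 'https://example.org/instruments/strings/bowed/nyckelharpa'
```

Such URLs are human-readable, naturally hierarchical (truncating the path yields the parent category), and lend themselves to the progressive, crowd-populated model hierarchies described above.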
This is the context. Until we get the mechanics right, there is little hope of tapping back into the sublime. We need the means to reach out to our fellow humans, to let them help us on our journey while we help them on theirs. To that end, we need the tools to connect, interwork and mutually emancipate.
Project Seeks Sponsors. Open Source. Non-Profit. Global.