Most folks who work in data visualization would probably agree that visualization is a useful tool. Our jobs depend on the world being convinced of that, after all. But, why stop at visuals? Humans have five (six? more?) primary senses to work with. Limiting ourselves to working with just one feels unnecessarily restrictive at best, and downright negligent at worst.
For more than a century, scientists, artists, and other people working on the edges of human understanding have been playing with sound as an alternative way of conveying data. Today, data ‘sonification’ is far more prevalent in society than you perhaps realise — from the “beep, beep, beep” of an operating theatre’s EKG machine, to the “blip” of a supermarket checkout, to the phone notifications that blight every visit to the cinema, or more recently, every lockdown Zoom call.
Data sonification, in its simplest sense, refers to the act of turning data into sound. But, there’s a little more to it than that. Under most circumstances, speech wouldn’t be considered a form of sonification. Nor would Morse code, which encodes characters rather than data. So let’s turn to a definition proposed by Florian Grond and Thomas Hermann in 2011, who wrote that sound, “functions as sonification only if we make sure to listen attentively in order to access the abstract information it contains.”
For example, the beeps in the operating theatre are the output of a system that monitors the electrical activity of a patient’s heart. When that activity peaks, the system emits a beep. That’s tremendously useful for a surgeon who wants to be attentive to their patient’s vitals without looking away from what they’re doing to consult a bar or line chart.
More prosaically, but no less importantly, supermarket barcode scanners emit a “blip” so that the cashier knows when an item has been successfully registered, without needing to look up at their screen. They can work faster and more efficiently, you don’t have to wait so long in the queue, and the supermarket earns more money. E̶v̶e̶r̶y̶o̶n̶e̶ Capitalism wins.
There are many different kinds of sound, and there are almost as many kinds of sonification. The field can be divided up in all sorts of interesting ways, but the most common is by methodology. This was the approach taken by sonification researcher Thomas Hermann in 2002, who suggested five categories of sonification that we’ve simplified into the following three.
Audification
The first, and simplest, is audification. This is when you play a series of data values directly as sound, using time-stretching and pitch-shifting techniques to bring them into a range that the human ear can hear.
For example, some of the earliest work in sonification was done by earthquake researcher Hugo Benioff, a pioneering designer of seismographs. In 1953, he provided a series of audio recordings for one side of an LP called “Out of This World”, for which he sped up tape recordings of earthquakes to the point where they were audible. A liner note on the album reads:
“It is understood as a condition of sale that Cook Laboratories, Inc., will in no way be responsible for damage this phonograph record may cause to equipment directly or indirectly. For users with wide-range woofers this disclaimer shall be construed to include neighbors as well, dishware and pottery.”
More recent examples include The New York Times’ 2017 sonification of the rates at which different weapons can be fired, and NASA Goddard Space Flight Center’s 2018 “Sounds of the Sun”.
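To make the idea concrete, here’s a minimal sketch of audification using only Python’s standard library. The function name, file names, and the toy “seismogram” are our own inventions for illustration — real audifications start from far longer recordings, and the choice of playback sample rate is the time-compression step.

```python
import math
import struct
import wave

def audify(values, out_path, sample_rate=8000):
    """Write a data series directly to a WAV file as waveform samples.

    Playing the series back at `sample_rate` samples per second is the
    time compression: a year of hourly readings (8,760 points) becomes
    about one second of audio at 8 kHz.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    # normalise each value to [-1, 1], then scale to the 16-bit range
    frames = b"".join(
        struct.pack("<h", int(((v - lo) / span * 2 - 1) * 32767))
        for v in values
    )
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)             # mono
        w.setsampwidth(2)             # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(frames)

# a toy "seismogram": a slowly decaying oscillation, audified
data = [math.sin(i / 5) * math.exp(-i / 2000) for i in range(16000)]
audify(data, "quake.wav")  # two seconds of audio at 8 kHz
```

The only creative decision here is the sample rate — everything else is the data, unfiltered, which is exactly what distinguishes audification from the more heavily designed categories below.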
Earcons and Auditory Icons
The second category covers what Hermann calls earcons and auditory icons. These are short, discrete audio messages that represent events. Phone notifications obviously fall into this category, and so do barcode scanner blips. But, there’s a lot more that can be done with them.
For example, Hermann suggests combining sounds to encode information. Stock market traders could develop an earcon that represents a particular stock and another that represents “buy” or “sell”. When they’re played in sequence, they could represent a recommendation to buy or sell that stock.
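A toy sketch of that idea in Python — the earcon vocabulary, the stock name, and the file names are all hypothetical, and a real system would trigger sound playback rather than return a playlist, but the encoding principle is just sequencing:

```python
# Hypothetical earcon vocabulary: event names mapped to short sound files.
EARCONS = {
    "ACME": "acme.wav",   # a tone identifying the stock
    "buy":  "buy.wav",    # a rising motif meaning "buy"
    "sell": "sell.wav",   # a falling motif meaning "sell"
}

def earcon_message(*events):
    """Compose an audio 'sentence' as an ordered playlist of earcons."""
    return [EARCONS[e] for e in events]

earcon_message("ACME", "buy")  # → ["acme.wav", "buy.wav"]
```

Played back in order, the two sounds form a single message — the audio equivalent of a two-word phrase.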
It’s possible to develop a small audio “language” in this way, which can not only make data systems substantially more accessible to people who are blind or partially sighted, but also improve the experience for fully-sighted users who find the audio feedback helpful. Sonification for accessibility is an important topic, but not one we’ll be covering in great depth in this article.
Hermann draws a distinction between earcons and auditory icons. In his categorisation, earcons are more abstract pings, boops, and dongs, while auditory icons sound like the thing they represent (for Scrabble fans, they’re skeuomorphic, or indexical). An earcon of a “sale” event might be a simple bell-like tone, while an auditory icon would perhaps be the sound of a cash register opening.
You can sonify data in this way alone. See, for example, Egypt Building Collapses by Tactical Technology Collective, which represents buildings collapsing with the sound of… a collapsing building. But, earcons and auditory icons are also commonly found embedded in more complex sonification systems — a specific bleep sound to represent a year passing, for example. In this way, they can be seen as the sonification equivalent of tick marks on an axis.
Parameter Mapping
The final category of sonification is parameter mapping — which is what most people think of when they imagine a sonification. In this category, data is mapped onto different audio properties (pitch, volume, duration, tempo, etc), just like visualization involves the mapping of data onto visual properties (colour, shape, size, angle, etc).
These systems can be difficult to work with due to the sheer range of possibilities on offer. There are continuous mappings (volume, FX, duration) and discrete mappings (type of instrument, number of times a sound is played). There are some mappings that can fit in both categories — pitch can be adjusted continuously, or by stepping through discrete notes in a musical scale. In practice, sonifications that map continuous data to pitch, like this sonification of the yield curve by the Financial Times, typically rationalise the data to a pleasant-sounding scale — the equivalent of displaying the data in bins.
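Here’s a minimal Python sketch of that binning step, assuming a major pentatonic scale and MIDI note numbers. The function and its defaults are illustrative, not any particular tool’s API — the point is that a continuous value becomes one of a small set of scale degrees:

```python
def map_to_scale(values, scale=(0, 2, 4, 7, 9), base_midi=60, octaves=2):
    """Map continuous data onto discrete pitches in a musical scale.

    Each value is binned into one of `octaves * len(scale)` steps of a
    scale (here, major pentatonic) starting at `base_midi` (middle C).
    Returns MIDI note numbers -- the "pleasant-sounding scale" version
    of a continuous data-to-pitch mapping.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    steps = octaves * len(scale)
    notes = []
    for v in values:
        i = min(int((v - lo) / span * steps), steps - 1)  # bin index
        octave, degree = divmod(i, len(scale))
        notes.append(base_midi + 12 * octave + scale[degree])
    return notes

map_to_scale([1.0, 2.5, 3.0, 2.0, 4.0])  # → [60, 72, 74, 67, 81]
```

Because every output lands on a pentatonic degree, any two notes sound consonant together — the trade-off is resolution, exactly as with bins in a visualization.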
As with earcons and auditory icons, there’s a spectrum of parameter mapping from realistic to abstract. A paper by Sara Lenzi and Paolo Ciuccarelli, which came out in mid-2020, puts sonifications on a scale of ‘intentionality’ — the degree to which they are “designed to explicitly help the listener to intuitively and emotionally connect” with an issue.
On the most intentional end, you have the aforementioned Egypt Building Collapses sonification, then Reveal’s 2016 sonic memorial to the victims at Orlando’s Pulse nightclub, which has a strong emotional resonance. It represents the lives of people killed in the shooting with different bell tones, which end abruptly in June 2016. This would make a boring visualization — merely the lifespans of some people. But it makes a powerful sonification. Because the sound is experienced through time, the calming repeated rhythms lull you into a sense of security before the shock of all those lives ending at the same time.
At the other end, Brian Foo’s Two Trains maps income levels along New York City’s 2 subway line to the quantity, volume, and force of instruments, creating a piece of music with a satisfying narrative arc but without passing judgement on income inequality.
Emotion & Complexity
Representing data in audio is only a part of telling a sonification story — you also need to connect with a listener emotionally, to give them something they’ll remember.
Emotion is one area in which music and sonification have a clear edge over visualization: does a line chart of global temperature rise communicate the same panic and urgency as this sonification by composer Chris Chafe of 1,200 years of temperature and CO2 data?
Sound has other strengths. It’s always experienced through time, giving its creator the power to control the pace at which a story unfolds. It’s also less easily ignored than visual content: there’s no real listening equivalent to skim-reading an article or glancing to get a quick overview of a chart.
At its best, sonification transforms data into an experience that’ll stay with you, that you’ll feel compelled to share with others.
Doing this effectively means tapping into the meanings we associate with sounds. For example, a series of sterile beeps may be an efficient representation of the Egypt Building Collapses data, but it doesn’t have the same emotional resonance as the sound of a building collapsing.
This gets considerably more difficult when you go beyond audification and start encoding data in music. Abstract or complex parameter mapping systems often need to be learnt before they can be understood. Listeners need to keep a collection of data mappings in their mind while listening to the music — and may even need to listen multiple times to understand a sonification completely.
The equivalent experience in visualization is a large, complex graphic with many different data encodings. It usually has a complicated legend, and a reader needs to spend several minutes (or longer!) with it before they grasp all of the information it contains.
In the same way that the Tuftian school of thought sees a complex legend on a visualization as an impediment to understanding, one could argue that a sonification with complicated parameter mappings is not a great experience for the listener.
But just as there are plenty of stunningly beautiful, supremely effective visualizations which have complex legends, the same can be true in sonification.
Introducing… Loud Numbers
To spread these ideas, we’ve spent the last year developing Loud Numbers — a podcast that combines stories, data, and music in a way that’s never been done before.
Our goal is to create something that not only tells a series of compelling data stories, but is also a pleasure to listen to. We wanted to know if we could hit a sweet spot where we communicate the story and also make something that sounds beautiful, something you’d press play on more than once.
We’ve found a lot of fresh creative landscapes to explore. Our sonifications are full-length tracks that nod to established musical genres — we’re sonifying climate change data as Nordic-style techno, the history of EU laws as baroque counterpoint, and the taste of beer using jazz chords.
Here’s part of a track we’re working on that sonifies US economic data from 1968 to 2020 in the style of a UK jungle track. The track is based around a famous drum loop called the Amen break. When the loop plays forwards, the US economy grew that quarter; when it plays backwards, the economy was in recession. When the economy started growing again after a recession, there’s also a celebratory airhorn (we couldn’t resist).
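The loop-direction logic described above can be sketched in a few lines of Python. The event names are invented, we’ve simplified “recession” to any quarter of negative growth, and the actual track is of course assembled musically rather than generated by code like this:

```python
def jungle_events(gdp_growth):
    """Sketch of the mapping described above: one Amen-break loop per
    quarter, played forwards in growth quarters and backwards in
    shrinking quarters, with an airhorn when growth resumes."""
    events = []
    prev_shrinking = False
    for quarter in gdp_growth:
        shrinking = quarter < 0
        events.append("amen_reverse" if shrinking else "amen_forward")
        if prev_shrinking and not shrinking:
            events.append("airhorn")  # celebrate the recovery
        prev_shrinking = shrinking
    return events

# five quarters of (hypothetical) growth figures
jungle_events([1.2, -0.5, -1.0, 0.8, 2.0])
# → ["amen_forward", "amen_reverse", "amen_reverse",
#    "amen_forward", "airhorn", "amen_forward"]
```

Each quarter yields exactly one loop event, so the piece’s length is fixed by the data — the airhorn is the only element that depends on the *shape* of the series rather than a single value.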
But there are many other musical layers. Some communicate a dataset that’s relevant to the story — the floaty synth sound moves up and down in pitch along with the Dow Jones index, for example — while others, like the bassline, are simply there to create mood, texture, and musical structure.
You’ll hear all these examples when we release all six of the tracks we’re making in the first half of 2021. You’ll be able to listen in two ways — an EP of music available on all good streaming services, and a podcast that deconstructs the music and explains how it works.
If you want to follow along on our journey, you should sign up to our weekly development log newsletter, where we talk about what we’re working on, the challenges we’re tackling, and how we’re solving them. If you’re interested in sonification more broadly, then follow us on Twitter where we share great work done by others. And, if you’re making sonifications yourself, then get in touch and tell us about your work! We’re always really excited to see work done by others, and we love to share it with our community.
We’re looking forward to pushing the boundaries of sonification when the podcast is released in 2021, and we can’t wait to hear what you think.
Miriam Quick is a data journalist, researcher and author who explores novel ways of communicating data. She has written data stories for the BBC, worked as a researcher for Information is Beautiful and the New York Times, and co-created artworks that represent data through images, sculpture and sound. These have been exhibited at museums and galleries including the Wellcome Collection, National Maritime Museum and Southbank Centre (London). Her book “I am a book. I am a portal to the universe.”, co-authored with Stefanie Posavec, was published in September 2020.
Duncan Geere is an information designer interested in climate and the environment. Based in Gothenburg, Sweden, he works with organisations like Information is Beautiful, Project Drawdown, Wired UK, and Nesta to communicate complex, nuanced information to a wider audience.