Hearing Data

How sound can help blind individuals understand data.

Jackie Ho
VisUMD
4 min read · Oct 24, 2022


Photo by Mark Paton on Unsplash.

Our visual system is by far our most dominant perceptual system, and the one we rely on to navigate and make sense of our environment. This fact about our perceptual abilities is what makes data visualization such a strong tool for communicating data: our visual system lets us easily spot outliers, find boundaries that separate elements into groups, and quickly shift our attention. All of this, of course, assumes that we are fortunate enough to have a fully intact visual system. Visually impaired people (VIP) have the same need to consume data as sighted people, and the visualization community is now beginning to recognize this fact.

Thanks to the vast breadth of research on data visualization, there are guidelines for how certain data types should be represented visually. For example, while hues can easily distinguish groups, they may not be the best option for mapping quantitative data. No such guidelines yet exist for sonification, which uses sound to convey data. Because it does not rely on vision, sonification has been put forth as a viable alternative for visually impaired individuals. To address this lack of guidance for audio charts, Wang, Jung, and Kim conducted a study to figure out how different auditory channels affect the way visually impaired people perceive and understand data. Based on the results of their exploratory experiment, they worked toward establishing a more robust framework to guide designers in creating audio charts.
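To make this concrete, here is a minimal sketch of what a pitch-based sonification of a quantitative series might look like. This is not the study's implementation; the function name, frequency range, and note duration are illustrative assumptions, and only Python's standard library is used to render the series as a sequence of tones in a WAV file.

```python
import math
import struct
import wave

def sonify_pitch(values, out_path="sonified.wav",
                 f_lo=220.0, f_hi=880.0, note_s=0.3, rate=44100):
    """Hypothetical helper: map each value to a pitch between
    f_lo and f_hi and render the series as short sine tones."""
    v_min, v_max = min(values), max(values)
    frames = bytearray()
    for v in values:
        # Linear mapping from the data range to the frequency range.
        t = (v - v_min) / (v_max - v_min) if v_max > v_min else 0.5
        freq = f_lo + t * (f_hi - f_lo)
        for i in range(int(note_s * rate)):
            sample = math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 0.8 * 32767))
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# A rising-then-falling series becomes a rising-then-falling melody.
sonify_pitch([10, 30, 55, 80, 100, 70, 40])
```

A perceptually motivated design might instead map values to frequency on a logarithmic scale, since we hear equal frequency ratios, not equal frequency differences, as equal pitch steps.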

Mackinlay’s ranking of visual variables based on data type and visual channel.

One factor that distinguishes audio channels from visual ones is the cognitive burden placed on the person trying to understand a chart. Our visual capabilities are so nuanced that we can quickly shift focus between different parts of a visualization. Without a strong visual system, people must work harder to hold a mental model in place, and the capacity of working memory is already limited and easily overtaxed when making sense of data. Designers therefore have to consider not only which channels represent data most effectively, but also which ones impose the least cognitive load.

One of the main objectives of this study was to create a ranking of audio channels similar to the one that exists for visual channels. To accomplish this, the research team broke their study into three main parts, each focused on a different aspect of sonification:

  1. How well different auditory channels represent different data types, in terms of intuitiveness and accuracy;
  2. Whether mappings for data-level perception transfer to chart-level perception; and
  3. The impact of previous experience with audio charts.

Auditory channels and their definitions.

Five auditory channels were evaluated for intuitiveness and accuracy: pitch, volume, length, tapping, and timbre. Since intuitiveness is hard to quantify, the research team measured it with a Likert scale; accuracy was measured by whether participants could correctly rank different values of a given auditory channel. These channels were mapped to three data types:

  1. Quantitative — a numeric value (e.g., test scores from 0 to 100);
  2. Ordinal — categories that can be ranked but with no specific measurement between them (e.g., educational degrees from middle school up to a Ph.D.); and
  3. Nominal — categories with no inherent ranking (e.g., types of fruit).

Based on the study's findings, participants found pitch to be the most intuitive channel. However, the number of taps and the length of sounds turned out to be the most accurate channels for decoding data. The researchers noted that pitch may have been rated most intuitive because it makes it easy to distinguish two data points. When it comes to chart-level interpretation, participants also seemed to take the 'visuals' of a graph into account when judging how intuitive an audio channel would be for a specific chart type. One of the main takeaways was that discrete sounds made more sense for charts such as scatterplots, while continuous sounds worked better for line charts.
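To illustrate the contrast, a discrete channel such as tapping encodes a value as a count of separate clicks rather than a continuous tone. The sketch below is again a hypothetical helper in the same spirit as the one above, not the authors' code:

```python
import math
import struct
import wave

def sonify_taps(n_taps, out_path="taps.wav",
                tap_s=0.04, gap_s=0.2, freq=440.0, rate=44100):
    """Hypothetical helper: encode a value as n_taps short
    clicks separated by silence."""
    frames = bytearray()
    for _ in range(n_taps):
        for i in range(int(tap_s * rate)):  # one short burst = one tap
            sample = math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 0.8 * 32767))
        frames += b"\x00\x00" * int(gap_s * rate)  # silence between taps
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# An ordinal value of 5 (say, the fifth degree level) becomes five taps.
sonify_taps(5)
```

Because listeners can count taps exactly, this kind of encoding is consistent with the finding that taps were decoded more accurately than pitch, even though pitch felt more intuitive.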

What I found particularly strong about this study is how it serves as a great starting point for future work. For researchers, there's an opportunity to explore more chart types and more combinations of audio channels and data types. Another interesting direction, sparked by a participant's comment, is quickening the speed of taps to find the maximum rate this population can still decode. The paper also raises questions about whether musical training helps people perform better, as well as about potential differences between early-onset and late-onset blind or low-vision individuals. Overall, this work is a great step forward in making data more accessible to the blind community, and I can't wait to see how these insights are applied in real-world contexts!

References

  • Wang, R., Jung, C., & Kim, Y. (2022). Seeing through sounds: Mapping auditory dimensions to data and charts for people with visual impairments. Computer Graphics Forum, 41(3), 71–83.
  • Mackinlay, J. (1986). Automating the design of graphical presentations of relational information. ACM Transactions on Graphics, 5(2), 110–141. https://doi.org/10.1145/22949.22950
