Listening to Line Graphs

Making line charts accessible to blind people using sound.

Jong Ho Lee
VisUMD
4 min read · Nov 10, 2022



“Flatten the curve!” One of the most memorable catchphrases of the COVID-19 pandemic paints a mental image of a chart to encourage social distancing. Where did this phrase come from? It was coined in reference to charts of COVID-19 infection trends: if infections trend upward too rapidly, they overwhelm healthcare systems. Thus, “flattening the curve” describes a chart that shows a gradual increase rather than a sharp spike.

Weekly trends in COVID-19 cases in the U.S. Source: The CDC.

Sighted people can look at such charts and develop an intuitive understanding of when COVID-19 surges occurred. In the figure above, for example, you can see that infection rates were high during December 2020 and January 2021.

But how can charts showing time-series data be made accessible to people who are blind or have low vision? Researchers at Stanford University, Adobe Research, and the University of Michigan set out to build a system that uses sonification and audio descriptions to help blind and low-vision readers develop a similar intuitive understanding of time-series data, described in their recent publication, “Supporting Accessible Data Visualization Through Audio Data Narratives.”

In the research, Alexa Siu and colleagues devised a way to automatically generate sonifications and audio descriptions from time-series data to make charts accessible to people who are blind or have low vision. Sonification plays tones whose properties correspond to data values; for example, imagine a whistle that gradually rises in pitch as the data trends upward. Audio descriptions are verbal narratives, voiced by a computer, that describe what the data looks like. For example, with the COVID-19 data above, a computer could say something like “In January 2021, the COVID-19 rate increased rapidly by 50 percent.” Combining the two yields audio data narratives, which describe time-series data using both whistles (sonification) and computer-narrated voices (audio descriptions).
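
To make this concrete, here is a minimal Python sketch of pitch-based sonification, not the authors' implementation: each data value is mapped linearly to a frequency, and each value is rendered as a short sine tone written to a WAV file. The 200–800 Hz range, the tone length, and the `value_to_freq` mapping are illustrative assumptions.

```python
# Minimal sonification sketch (illustrative, not the paper's system):
# map each data value to a pitch and render a short sine tone per value.
import math
import struct
import wave

RATE = 44100  # samples per second

def value_to_freq(v, lo, hi, f_min=200.0, f_max=800.0):
    """Linearly map a data value in [lo, hi] to a frequency in Hz."""
    t = (v - lo) / (hi - lo) if hi > lo else 0.0
    return f_min + t * (f_max - f_min)

def sonify(values, tone_sec=0.15, path="sonification.wav"):
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_freq(v, lo, hi)
        for i in range(int(RATE * tone_sec)):
            sample = math.sin(2 * math.pi * freq * i / RATE)
            frames += struct.pack("<h", int(sample * 32767 * 0.5))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(bytes(frames))

# Rising case counts produce a rising pitch.
sonify([10, 12, 20, 45, 90, 140, 135, 100])
```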

Generating sonifications and audio descriptions automatically from a time-series dataset. Each sonification and audio description is divided into segments. Source: Siu et al. (2022).

The algorithm that Siu and colleagues developed divides time-series data into segments, each of which provides a description of what the data looks like in that span. Before implementing the algorithm, the research team first conducted a series of workshops with people who are blind or have low vision to develop guidelines on how best to generate segments for audio data narratives. From the workshops, they established the following design principles.

  1. The audio description segment should be played before the sonification segment to provide context and structure to the reader. These two segments should not overlap.
  2. The sentence structure of the audio description should be consistent across the narrative and should describe a start and end point for the following sonification segment (see the sketch after this list).
  3. The sonification segment should maintain consistent trends and a rhythmic pattern.
  4. The narrative should contain a moderate number of segments. Too many or too few segments can be confusing.
  5. The sonification segments should have a moderate duration (not too long or too short).
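
As an illustration of principle 2, here is a sketch of a fixed sentence template that states the start and end points of the segment the next sonification will cover. The wording and the `describe_segment` helper are hypothetical, not taken from the paper.

```python
# Sketch of principle 2: a consistent sentence template that announces
# the start and end of the upcoming sonification segment.
def describe_segment(label_start, label_end, v_start, v_end):
    trend = ("increased" if v_end > v_start
             else "decreased" if v_end < v_start
             else "stayed flat")
    return (f"From {label_start} to {label_end}, "
            f"cases {trend} from {v_start} to {v_end}.")

print(describe_segment("December 2020", "January 2021", 90, 140))
# From December 2020 to January 2021, cases increased from 90 to 140.
```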

Based on these design principles, the research team created an algorithm that automatically segments the data. Readers curious about the details should read the full paper, but in a nutshell, the team implemented a heuristic-based approach that seeks the set of boundary points contributing most to the overall shape of the graph. After developing the technique, the research team evaluated it with people who are blind or have low vision.
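
The following is a rough sketch of one such heuristic, similar in spirit to line-simplification algorithms like Ramer–Douglas–Peucker, and not the paper's exact method: greedily add the boundary point that deviates most from a straight line between the existing boundaries, stopping at a moderate segment count (principle 4). The `segment_boundaries` function and its parameters are hypothetical.

```python
# Rough illustration of shape-preserving segmentation (in the spirit of,
# but not identical to, the paper's heuristic): repeatedly add the point
# that deviates most from a straight line between existing boundaries.
def segment_boundaries(values, max_segments=5):
    boundaries = [0, len(values) - 1]
    while len(boundaries) - 1 < max_segments:
        best_idx, best_err = None, 0.0
        for a, b in zip(boundaries, boundaries[1:]):
            for i in range(a + 1, b):
                # Vertical distance from the straight line joining
                # (a, values[a]) and (b, values[b]).
                interp = values[a] + (values[b] - values[a]) * (i - a) / (b - a)
                err = abs(values[i] - interp)
                if err > best_err:
                    best_idx, best_err = i, err
        if best_idx is None:  # no interior points left to add
            break
        boundaries = sorted(boundaries + [best_idx])
    return boundaries

data = [10, 12, 20, 45, 90, 140, 135, 100, 60, 55]
print(segment_boundaries(data, max_segments=4))
```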

Experimental conditions used in the evaluation. The control condition played the entire audio description of the dataset, followed by the full sonification. The narrative condition interleaved segments of audio descriptions and sonifications. Source: Siu et al. (2022).

To evaluate the technique, the research team designed experiments in which participants completed a series of comprehension tasks using audio data narratives generated from example data. The team controlled how the narratives were presented to see how participant responses differed across conditions. From the experiments, participants provided the following responses:

  • A description of trends they found from the data.
  • Accuracy and time to complete comprehension tasks such as “what is the highest rate of increase?” (a toy version of this task appears after the list).
  • Self-reported mental efforts and comments.
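
For a sense of what such a comprehension task involves, here is a toy Python version of the “highest rate of increase” question, using made-up numbers:

```python
# Toy version of one comprehension task: find the adjacent pair of
# points with the largest rise in the series.
data = [10, 12, 20, 45, 90, 140, 135, 100]
rises = [(b - a, i) for i, (a, b) in enumerate(zip(data, data[1:]))]
best_rise, at = max(rises)
print(f"Largest increase is {best_rise}, between points {at} and {at + 1}.")
```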

The research team found evidence that audio data narratives, when presented in the narrative condition, led participants to a higher proportion of interesting insights, and those insights tended to be of higher quality. In other words, participants formed their own interpretations of the data and gained a better understanding from audio data narratives. The researchers also found that the narrative condition was more efficient for the tasks, and that sonification and audio descriptions were beneficial when used together.

However, the researchers warn that using this technique could increase users' mental burden. One reason may be that sonification is a new way of representing information for people who are blind or have low vision, so it may require extra effort to “combine all the pieces.” The researchers also stressed that balancing the overview against the details is important when using this technique.

References

  • Siu, A., Kim, G. S-H., O'Modhrain, S., & Follmer, S. (2022). Supporting Accessible Data Visualization Through Audio Data Narratives. CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3491102.3517678
