A Vision for Accessible Data Exploration: How Sound and Touch Transform Visualization

Urvi Varma · Published in VisUMD · Oct 31, 2023

How Blind People Use Sound and Touch to Understand Data

The quest for accessible data visualization is not merely a technological endeavor; it's a mission to open the world of data to everyone, irrespective of their abilities. In a remarkable 2022 study, the University of Maryland, in collaboration with the National Federation of the Blind and Poolesville High School, put forth a roadmap for making data more accessible to people with visual impairments, highlighting the roles of sound and touch. The researchers conducted in-depth interviews with 10 blind Orientation and Mobility (O&M) experts. These instructors teach blind students how to navigate the world using senses besides vision, such as sound and touch, through a process of "experiential learning."

These interviews revealed how O&M trainees actively map properties of sound and texture to real-world concepts such as size, distance, shape, and location. The findings shed light on specific audio and haptic affordances that could make representations of data and visualizations more accessible to users with visual impairments.

Attuning Senses to Space

O&M students learn through hands-on exploration followed by reflection, which helps them calibrate their senses and build mental maps. For example, loudness often indicates the distance to a sound source, and higher-pitched sounds stand out from environmental noise. Students also interpret the absence of echoes as objects blocking the transmission of sound. The white cane provides crucial tactile feedback about the surroundings through different techniques: sweeping the cane reveals surface changes through texture, while shorelining (two-point touch) uses the edge of a curb or a strip of grass as a spatial reference point.

Students also actively probe space using echolocation. Tapping the cane produces echoes that reveal distances, objects, and open areas. These interactions let users “see” and position themselves in a space with sound and touch.

Multisensory Perception

Combining sound and touch has proved more effective than either sense alone. However, many O&M trainees have prior experience with tactile graphics such as braille, so haptic representations may feel more intuitive than audio. With practice, audio can work, but some calibration is required to shift reliance from touch to sound. The instructors stressed the need to assess students' grasp of geometric concepts before introducing new visualization formats: individuals with a stronger grasp of these concepts are better equipped to understand and translate audio and haptic stimuli, while weak internal spatial models increase the likelihood of confusion or cognitive overload from sonified visualizations.

The Power of Auditory Sweeps

One of the key concepts in the research is the notion of auditory sweeps: a spatial journey through a visual representation of data that dictates the order in which visual information is sonified (converted to sound). Think of it as a storytelling technique for data. When we look at a chart, we typically scan the data in a particular direction or pattern, often from left to right. Auditory sweeps perform the same function with sound, presenting the visualized data to people with visual impairments in the same order a sighted reader would scan it. This not only mirrors the visual experience but also gives blind individuals a clear understanding of how the data is laid out, turning charts and graphs into a story they can listen to. Auditory sweeps have already been adapted for various platforms, from web charts to touchscreens, making data exploration more inclusive and immersive.
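To make the idea concrete, here is a minimal sketch (not the paper's implementation; the function name, pitch range, and rhythm are illustrative assumptions) of an auditory sweep over a bar chart: the data is walked left to right, and each value is mapped to a pitch, so the listener hears the bars in the same order a sighted reader would scan them.

```python
def auditory_sweep(values, f_min=220.0, f_max=880.0, step_ms=250):
    """Map each data value to a (start_ms, frequency_hz) pair.

    Higher values produce higher pitches; successive bars play left to
    right at a fixed rhythm. A real system would feed this schedule to
    an audio engine; here we only compute the timing and pitches.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for flat data
    schedule = []
    for i, v in enumerate(values):
        # Linearly rescale the value into the audible pitch range.
        freq = f_min + (v - lo) / span * (f_max - f_min)
        schedule.append((i * step_ms, round(freq, 1)))
    return schedule

tones = auditory_sweep([3, 7, 5, 9])
# The smallest value (3) gets the lowest pitch, the largest (9) the highest.
```

The rising-and-falling pitch contour the listener hears then traces the same shape a sighted reader sees in the bars.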

Mapping Marks and Channels

But how do we translate these visual marks into sound? This is where insights from the research become invaluable. Visual channels, such as color, can be translated to dimensions of sound such as pitch. Perception varies among blind individuals, but pitch often serves as a reliable tool for recognizing and distinguishing objects from other sounds. In addition, static, consistent sounds can help estimate relative positions between sound sources and provide reference points with respect to the user's body. The dissimilarity of sounds, especially non-verbal ones, aids in switching focus across sounds. Together, these insights can be harnessed to convey individual marks by separating them in space, allowing for a rich and comprehensive auditory experience.
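One possible sketch of such a mapping, under assumptions of my own (the channel choices, function name, and pitch table are illustrative, not from the paper): a mark's horizontal position becomes stereo pan, separating marks in space, while its category becomes a distinct base pitch, making marks easy to tell apart.

```python
# Distinct, easily distinguished base pitches per category (C4, E4, G4).
CATEGORY_PITCH = {"A": 261.6, "B": 329.6, "C": 392.0}

def mark_to_sound(x, x_max, category):
    """Return (pan, frequency_hz) for one chart mark.

    pan runs from -1.0 (far left) to +1.0 (far right), mirroring the
    mark's horizontal position; frequency identifies its category.
    """
    pan = 2.0 * (x / x_max) - 1.0
    return (round(pan, 2), CATEGORY_PITCH[category])

# A mark halfway across the chart in category "B" sounds centered at E4:
print(mark_to_sound(50, 100, "B"))  # (0.0, 329.6)
```

In a real system these (pan, pitch) pairs would drive a spatial audio engine; the point of the sketch is only that each visual channel gets its own, consistently applied sound channel.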

The Journey Towards Accessibility

While this framework isn't a comprehensive solution, it serves as a design sketch that illustrates the potential of applying it to real-world scenarios. Sonification tools already exist in the marketplace, such as Highcharts and iOS audio charts.

Solutions for translating visualizations should focus on more than just task accuracy. Assessing confidence in perceiving charts non-visually is important for true accessibility.

The insights suggest data visualization tools should:

  • Retain existing visual constructs but render them via audio and haptics
  • Use a mix of speech, non-speech sounds, and tactile feedback
  • Support rich interaction beyond just displaying data
  • Customize audio/haptic mappings to match users’ abilities
  • Facilitate collaboration between sighted and blind coworkers

The study recommends that future work in this field should actively involve blind individuals, ideally as design partners. This approach ensures that accessibility solutions are not just theoretical constructs but practical tools that genuinely empower individuals with visual impairments to explore and comprehend data.

Original Paper: "Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study" by Pramod Chundury, Biswaksen Patnaik, Yasmin Reyazuddin, Christine Tang, Jonathan Lazar, and Niklas Elmqvist
