Screen Readers Reloaded

Understanding how screen readers can support visualization.

Shwetha Sanjeev
VisUMD
4 min read · Nov 10, 2022


Images by MidJourney (v4).

A screen reader is a tool that, as its name suggests, reads your screen: an assistive technology that turns on-screen text into speech. Screen readers are designed primarily for blind people, but they are also widely used by people who are partially sighted or who have reading disorders.

While screen readers handle plain text well, problems arise when they encounter visual representations such as charts and graphs. If a chart has no textual description (a so-called “alt text”), screen readers frequently announce web-based visualizations as unintelligible strings of “graphic graphic graphic,” or skip them entirely, leaving them invisible to the reader.

Web-based visualizations also go beyond merely summarizing data or recreating tables; they are made to enable interactive data exploration at different granularities. Since screen readers don’t have features that allow for in-depth visualization exploration, people with visual impairments still struggle to utilize web-based visualizations to their full extent.

Current accessibility guidelines ask visualization designers to link to the underlying data tables and to include textual descriptions of their graphics as alt text. These recommendations do not offer information-seeking strategies equivalent to those that interactive visualizations offer sighted readers. Although well-written alt text can give readers a high-level summary of what the visualization shows, readers cannot drill down into the data to delve deeper into particular areas. And while tables let users focus on particular data points, reading data line by line gets tiresome and makes it challenging to spot broad trends.

The most common ways to make a visualization accessible to screen readers include adding a single high-level textual description (via alt text), offering access to low-level data via a table, or labeling visualization elements with ARIA to allow screen readers to step through them linearly. Despite their potential, these approaches do not support the complex information-seeking behaviors that sighted readers may perform with interactive representations.
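To see why a single description falls short, consider a minimal sketch (the function name and summary format are ours, purely illustrative) of generating alt text for a bar chart from its underlying data:

```javascript
// Hypothetical sketch: producing one high-level alt-text string for a
// bar chart. Whatever we choose to include (extremes, counts, trends),
// the result is static: a listener cannot drill down any further.
function describeChart(title, data) {
  // data: array of { label, value }
  const max = data.reduce((a, b) => (b.value > a.value ? b : a));
  const min = data.reduce((a, b) => (b.value < a.value ? b : a));
  return (
    `Bar chart of ${title} with ${data.length} bars. ` +
    `Highest: ${max.label} (${max.value}). ` +
    `Lowest: ${min.label} (${min.value}).`
  );
}

const altText = describeChart("rainfall by month", [
  { label: "Jan", value: 60 },
  { label: "Feb", value: 40 },
  { label: "Mar", value: 80 },
]);
// altText → "Bar chart of rainfall by month with 3 bars.
//            Highest: Mar (80). Lowest: Feb (40)."
```

However rich the generated sentence, the reader receives it as a single utterance; any question it does not answer requires falling back to the raw data table.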

The authors of this paper wanted to investigate how structure, navigation, and description combine to produce richer screen reader experiences for data visualizations than are possible via alt text, data tables, or the current ARIA specification. This offers an alternative to a single block of coherent text describing the visualization.

Example structural and navigational schemes applied to diverse chart types.

The authors created a prototype screen reader experience in which the up, down, left, and right arrow keys drive structural navigation (moving up or down a level, or stepping through siblings, respectively). Pressing shift+left or shift+right at any node within a facet branch takes you to that same position under an adjacent branch, a lateral movement across facets.
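This navigation model can be sketched as a cursor over a tree. The sketch below is ours, not the paper's implementation; class and method names are illustrative:

```javascript
// A cursor over a hierarchy of chart elements. down/up change levels,
// left/right step through siblings, and lateral() models shift+left/right:
// jumping to the same position under an adjacent facet branch.
class TreeCursor {
  constructor(root) { this.node = root; }
  siblings() { return this.node.parent ? this.node.parent.children : [this.node]; }
  down()  { if (this.node.children && this.node.children.length) this.node = this.node.children[0]; return this.node.name; }
  up()    { if (this.node.parent) this.node = this.node.parent; return this.node.name; }
  right() { const s = this.siblings(), i = s.indexOf(this.node);
            if (i < s.length - 1) this.node = s[i + 1]; return this.node.name; }
  left()  { const s = this.siblings(), i = s.indexOf(this.node);
            if (i > 0) this.node = s[i - 1]; return this.node.name; }
  lateral(dir) { // same index under the previous (-1) or next (+1) branch
    const parent = this.node.parent;
    if (!parent || !parent.parent) return this.node.name;
    const idx = parent.children.indexOf(this.node);
    const aunts = parent.parent.children;
    const target = aunts[aunts.indexOf(parent) + dir];
    if (target && target.children[idx]) this.node = target.children[idx];
    return this.node.name;
  }
}

// Helper to build a tree with parent pointers.
function tree(name, children = []) {
  const n = { name, children };
  children.forEach((c) => (c.parent = n));
  return n;
}

const root = tree("chart", [
  tree("Maryland", [tree("Montgomery"), tree("Baltimore")]),
  tree("Virginia", [tree("Fairfax"), tree("Arlington")]),
]);

const cursor = new TreeCursor(root);
cursor.down();  // "Maryland"
cursor.right(); // "Virginia"
cursor.down();  // "Fairfax"
```

From "Fairfax", a lateral move left lands on "Montgomery": the same first-child position under the adjacent state branch, without retracing the path through the root.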

In the figure, data is grouped in the encoding structure by U.S. state; users can then drill down into counties through either this branch or the legend branch. The next figure (C) offers two different paths for drilling down: month first or weather first. (D) structures the tree by annotations rather than encodings: users can descend into the time intervals marked by the orange and blue rectangles, and into the points within those intervals. Finally, (E) organizes its tree in terms of the data itself, offering a binary search structure through the years.
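The data-driven structure in (E) can be sketched as follows (a rough illustration under our own naming, not the paper's code): a binary tree over the years, so a listener can halve the range at each step instead of scanning every value.

```javascript
// Recursively split a sorted list of years into a binary range tree.
// Interior nodes are labeled with their year range; leaves are single years.
function buildRangeTree(years) {
  if (years.length === 0) return null;
  if (years.length === 1) return { label: String(years[0]), children: [] };
  const mid = Math.ceil(years.length / 2);
  return {
    label: `${years[0]}-${years[years.length - 1]}`,
    children: [
      buildRangeTree(years.slice(0, mid)),
      buildRangeTree(years.slice(mid)),
    ],
  };
}

const node = buildRangeTree([2000, 2001, 2002, 2003]);
// node.label → "2000-2003", with children "2000-2001" and "2002-2003"
```

Stepping down this tree, a screen reader user reaches any single year in O(log n) moves, and each intermediate announcement ("2000-2001") still conveys where they are in the overall range.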

To evaluate this prototype, the authors recruited 13 blind and visually impaired participants to test it out. Their exploratory analyses led to the following conclusions:

  • Tables are familiar and tedious, but necessary. Every participant mentioned tables as their preferred method of accessing data and visuals. However, tables also place a significant cognitive strain on users, who must recall earlier rows in order to interpret later values, making them unsuited to large amounts of data.
  • Prior exposure to data analysis and representations increases the efficacy of spatial representations. Those participants who were able to read tactile graphs and maps or do data analysis were able to quickly establish a spatial grasp of how each prototype operated.
  • Hierarchical representations make it possible to convey insights effectively with minimal cognitive load. Even though static tables are the most accessible alternative to interactive visualizations, some participants expressed a wish to filter and organize the data so they could explore potential trends without wading through it line by line.
  • Reading a visualization with a screen reader entails constant hypothesis testing and pattern-making. Because screen reader users interpret data iteratively, participants described reading a graphic as gradually building a mental model and continuously testing it to discover where the patterns no longer hold.
  • Cursors and roadmaps are important for understanding where you are. Using interactive visualizations requires both capturing a broad overview of the data and being able to drill down into it. Participants underlined the value of indicators that show where they can move next in order to transition between these two levels.

Additionally, discussions with the participants revealed that design considerations for users with total blindness and those with low vision can differ significantly. Partially sighted participants, for instance, used both screen readers and magnifiers; they therefore favored elaborate written descriptions combined with more succinct spoken narration.
