
Many charts can’t be perceived by everyone. AutoVizuA11y changes that

Diogo Ramalho Duarte
Published in Feedzai Techblog
4 min read · Sep 22, 2023


Imagine you are visually impaired, navigating the web without seeing what is on the screen. You browse with the help of a screen reader, a technology that reads aloud what is displayed on the page. After a while, you land on a webpage with a chart depicting a lot of complex data, but you never realize it: the robotic screen reader voice only says “image” and moves on to the other elements on the screen.

This scenario is the reality for many visually impaired people navigating the web nowadays.

A 2021 study showed that reading visualizations online takes people with visual impairments 200% more time than their sighted peers, and they are also 60% less accurate. Much of this is because those charts are often not built to be read by screen readers and other assistive technologies. But this can be changed.

As a company focused on offering solutions to improve fraud detection, where software with complex visualizations is key for data analysis, Feedzai is committed to contributing to that change.

AutoVizuA11y (read as “auto visu ally”), a tool that automates the addition of accessibility features in charts, is part of that commitment. Now we are sharing it as an open-source React package!

With AutoVizuA11y, that generic, uninformative “image” tag becomes a full description of trends and outliers, where key statistics are identified. The library rests on three key pillars:

Improved navigation. Users can easily navigate through and within charts using their keyboard;

Increased speed of interaction. Shortcuts provide statistical insights and navigation alternatives;

Better descriptions. LLMs (or manually entered text) are used to describe trends, outliers, and other significant data insights.

[Diagram: three panels showing the main pillars of AutoVizuA11y on a bar chart. The first shows keyboard focus around the chart, with an example of an automatic description. The second shows keyboard focus on a data element, with an example of a spoken average given after the corresponding shortcut is pressed. The third shows keyboard focus on another data element, with another example of a spoken average.]
AutoVizuA11y brings descriptions, data insights, and enhanced navigation to charts on the web.

This React tool was developed in close collaboration with proficient screen reader users, who informed the key user requirements for a tool like this. Post-development tests confirmed that those users prefer well-constructed accessible charts over HTML tables (a common fallback for those navigating the inaccessible web today).

Navigation

Chartability, a set of heuristics designed to assess chart accessibility, states: “If a chart is interactive with a mouse (or another input device) it must also be made interactive for use with a keyboard”. With AutoVizuA11y, visualizations and their data points are accessible through the keyboard as well. Users can navigate within a chart and also across charts when a page contains multiple visualizations.
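As an illustrative sketch only (not AutoVizuA11y's internal code), the wrap-around index arithmetic a keyboard handler could use to move focus between a chart's data points might look like this:

```typescript
// Illustrative sketch: wrap-around navigation between a chart's data points.
// `current` is the focused point's index, `total` the number of points, and
// `delta` the step requested by the keyboard (e.g. +1 for ArrowRight).
function nextIndex(current: number, total: number, delta: number): number {
  // Normalize so stepping past either end wraps to the other side.
  return (((current + delta) % total) + total) % total;
}
```

With seven data points, for example, stepping forward from the last one wraps focus back to the first, so keyboard users are never stuck at an edge.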

Data insights

When a chart is focused, pressing the right combination of keys makes the screen reader read statistics such as the global average, maximum, and minimum value; compare a data point against the rest of the chart; move N data points ahead; jump to the end of the chart; and more. Whenever a shortcut moves the selection, focus shifts to the data point in question. All the shortcuts are listed in the repository’s README.
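To make the idea concrete, here is a minimal sketch (assumed for illustration, not the library's actual code) of the kind of summary statistics a shortcut can have the screen reader announce for the focused chart:

```typescript
// Illustrative sketch: the global statistics a shortcut might announce.
type DataPoint = { label: string; value: number };

function chartStats(points: DataPoint[]): {
  average: number;
  maximum: number;
  minimum: number;
} {
  const values = points.map((p) => p.value);
  const sum = values.reduce((acc, v) => acc + v, 0);
  return {
    average: sum / values.length,
    maximum: Math.max(...values),
    minimum: Math.min(...values),
  };
}
```

A shortcut handler would compute these once per chart and hand the result to the screen reader as a short spoken sentence.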

Descriptions

How can we guarantee consistent and insightful chart descriptions that are actually useful to visually impaired users? AutoVizuA11y supports automatic descriptions that are generated via OpenAI’s API. The tool outputs two descriptions: a longer one with no size limitation, and a shorter one with around 60 words.

Screen reader users are always informed when the description they are hearing was automatically generated. For manually written descriptions, the “automatic description” notice is removed.

The automatic description is generated after a set of key elements is passed to the API — the full list of elements that the developer should provide is available in the tool’s repository. If the automatic description is not satisfactory, it can be overridden by a manual description.

How to use AutoVizuA11y?

The tool is a React package available on npm and is used like any other library: install it via npm, import the component, and wrap your chart JSX with AutoVizuA11y, as in the example below, where the descriptions are generated automatically by a GPT model. Developers who want to provide tailored descriptions should replace the autoDescriptions prop with manualDescriptions.

<AutoVizuA11y
  data={barData}
  selectorType={{ element: "rect" }}
  type="bar chart"
  title="Number of hours spent looking at a screen per day of the week."
  context="Screen time dashboard"
  descriptor="hours"
  autoDescriptions={{
    dynamicDescriptions: false,
    apiKey: API_KEY,
    model: "gpt-3.5-turbo",
    temperature: 0.1,
  }}
>
  <BarChart></BarChart>
</AutoVizuA11y>

There are a variety of props you can use to control the outputs of AutoVizuA11y. A detailed list is available here. Below are some you might want to pay close attention to:

  • selectorType: expects either the HTML type of the data elements (for example “rect”, “circle”, or “path”) or their class name; only one of the two can be set. It ensures that the data elements that should be navigable receive an aria-label.
  • multiSeries: set it to true when dealing with multi-series charts.
  • autoDescriptions vs. manualDescriptions: choose one or the other. The first generates descriptions through an API call to OpenAI’s GPT; the second is written manually by the developer. Letting developers choose between the two minimizes the impact of potential errors in LLM-generated descriptions.
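For the manual route, a hypothetical manualDescriptions value could look like the object below. The key names ("longer" and "shorter") and the chart wording are assumptions for illustration; check the repository's README for the exact prop shape. The shorter text mirrors the roughly 60-word limit of the automatic short description.

```typescript
// Hypothetical manualDescriptions value for the screen-time bar chart example.
// Key names are assumed; verify against the repository's README.
const manualDescriptions = {
  longer:
    "Bar chart of hours spent looking at a screen per day of the week. " +
    "Screen time stays above four hours on workdays, peaks midweek at six " +
    "hours on Wednesday, and drops to about two hours on the weekend.",
  shorter:
    "Screen time peaks at six hours on Wednesday and drops to about two " +
    "hours on the weekend.",
};
```

This object would then be passed as the manualDescriptions prop in place of autoDescriptions in the earlier example.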

AutoVizuA11y was tested with the following chart types: bar, single line, multiple lines, pie, treemap, and heatmap. See a working demo here.

If you have any questions or feedback, or want to share an example of AutoVizuA11y in a chart you built, feel free to reach out to data-viz@feedzai.com.
