Time to write better alt text for accessibility

It is mighty hard to implement better accessibility practices when the current guidelines are vague and seemingly more focused on virtue-signalling. Here are some research takeaways that are paving the way towards a more coherent view.

Dora Cee
UX Collective

--

Earlier I had a closer look at what Dark Mode entails in terms of accessibility, and this time I am covering some recent studies stressing another accessibility angle. One that is challenging for a whole different reason: a lack of clear guidelines.

The dawn of alt text (alternative text) came about in 1995, albeit catering to a narrower audience. It was introduced with the release of HTML 2, with the intention of giving users a description of images while the page was loading (or simply broken). At the time, the alt attribute seemed more of an afterthought, used as fallback content rather than an accessibility aid.
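That fallback role is still visible in markup today: an image either carries an alt attribute or it silently excludes some readers. As a minimal sketch, here is how you might scan HTML for images missing the attribute entirely, using Python's built-in html.parser (the class name and sample markup are my own, for illustration):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute at all.

    Note: alt="" (an intentionally empty description for decorative images)
    counts as present, so it is not flagged here.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "?"))

checker = MissingAltChecker()
checker.feed('<img src="chart.png" alt="Bar chart of sales"><img src="logo.png">')
print(checker.missing)  # only the image with no alt attribute is reported
```

A checker like this only catches the "we forgot" case, of course — it says nothing about whether the alt text that does exist is any good.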

Almost 30 years later, we are luckily a bit less ableist and are starting to wake up to the need for a more inclusive user experience. Though we are not quite there yet, at least now it’s on our radar. With the magnifying glass in hand, we can inspect what we might do better, and how technology can assist us in this endeavour.

People placed around laptop screen with in-built megaphone sticking out and blaring in the foreground.
Image by pch-vector on Freepik

Is your picture worth a thousand words?

When faced with an image, it can be hard to gauge how much detail to include in our literal depiction. Before we go into that in greater depth, let’s consider character limits, so we know what “restrictions” we are advised to keep in mind.

As a rule of thumb, alt text should be descriptive enough that if you close your eyes and someone reads it to you, the arising picture in your mind is mostly aligned with the actual image.

Guidance around the length itself is a bit fuzzy, but the general idea seems to be that sticking to approximately 100 characters is a good starting point.

How this could be applied to detailed graphs and charts is not exactly straightforward. Fortunately for us Medium-inhabitants, we are allowed up to 500 characters to figure that out for ourselves.
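Since neither limit is a formal standard, a length check can only be a rough nudge. A small sketch, treating the ~100-character rule of thumb as a soft limit and Medium's 500 as a hard ceiling (both thresholds are assumptions drawn from the guidance above, not from any specification):

```python
def check_alt_length(alt, soft_limit=100, hard_limit=500):
    """Rough verdict on alt text length.

    soft_limit: the ~100-character rule of thumb discussed above.
    hard_limit: Medium's 500-character ceiling.
    Both are conventions, not requirements.
    """
    n = len(alt)
    if n == 0:
        return "empty"
    if n <= soft_limit:
        return "ok"
    if n <= hard_limit:
        return "long"      # acceptable for complex charts, worth a second look
    return "too long"      # would be truncated on Medium

print(check_alt_length("A dog chasing a frisbee in a park."))  # "ok"
```

For a detailed chart, "long" may be exactly right — the point is to make the trade-off visible, not to enforce a cut-off.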

Alt text writing could be improved with templates

Based on a 2019 study by Gleason and colleagues, the most common reasons for not including alt text boiled down to three issues. One is that writing it takes too much time; second, we often simply forget to add it. The third challenge was that users did not know what to write when composing a description.

To solve these problems, a few research projects recommended the creation of templates. These could speed up and simplify the process by providing a basic guideline for expected descriptions and structure.

A 2022 research article suggests approaching this, for example, via systems that automatically recognise figure types and generate a template based on existing recommendations. For detailed images, an automated tool could detect and indicate which figure elements haven’t been described yet.
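To make the template idea concrete, here is a hypothetical sketch: a lookup of fill-in-the-blank templates keyed by figure type. The figure-type detection the paper envisions is the hard part and is simply assumed here; the template wordings are my own illustrations, not the paper's.

```python
# Hypothetical fill-in-the-blank templates per figure type.
# The 2022 paper proposes detecting the figure type automatically;
# that detection step is assumed (not implemented) in this sketch.
TEMPLATES = {
    "bar chart": "Bar chart of {y} by {x}. Highest: {high}. Lowest: {low}.",
    "line chart": "Line chart showing {y} over {x}. Overall trend: {trend}.",
    "photo": "Photo of {subject}, {context}.",
}

def alt_template(figure_type):
    """Return a template for the detected figure type, or a generic prompt."""
    return TEMPLATES.get(
        figure_type,
        "Describe the key content and purpose of the image.",
    )

# An author would fill in the blanks rather than start from nothing:
print(alt_template("bar chart").format(
    y="revenue", x="quarter", high="Q4", low="Q1"))
```

Even this trivial version addresses two of the three barriers above: it saves time and tells the author what is expected.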

Another aid could be a system that detects when labels change, and automatically updates or flags the corresponding ones in the alt text and captions. This could reduce inaccuracies and errors, which are rather numerous in an alt text context.
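A minimal sketch of that idea, assuming we have the alt texts in hand and know which label was renamed (the function and its naive substring matching are my own illustration of the paper's suggestion, not its implementation):

```python
def flag_stale_alt(alt_texts, old_label, new_label):
    """Find alt text entries that still mention a renamed label.

    alt_texts: dict mapping figure id -> alt text string.
    Returns (stale, updated): the affected entries, and proposed
    auto-updated versions a human could review before accepting.
    Substring matching is deliberately naive; a real tool would
    need word boundaries and context.
    """
    stale = {fid: text for fid, text in alt_texts.items() if old_label in text}
    updated = {fid: text.replace(old_label, new_label) for fid, text in stale.items()}
    return stale, updated
```

Surfacing the proposed update for review, rather than rewriting silently, keeps the human author responsible for accuracy — which, as the next section shows, matters.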

People reading and studying a list of rules, with all requirements being ticked off.
Image by pch-vector on Freepik

Policies are empty structures

The same 2022 research article highlighted that the interviewed HCI researchers added alt text due to a strong external motivator: accessibility policies and their accompanying resources, although the latter were not properly fleshed out. Without clear guidance and training, it is difficult to get one’s head around hazy “best practices”.

“While we agree that policies and resources will not necessarily lead to universal high-quality alt text, our results showing variable alt text quality and author perspectives that they were under-prepared indicate that additional alt text education and information dissemination may be useful.”

According to the paper, to improve current shortcomings, existing guidelines should include:

  • Good and bad examples, so alt text authors can avoid common mistakes
  • Explanations of the content differences between captions and alt text
  • Instructions on writing and then editing descriptions for brevity
  • Advice on how to describe figures

The authors conclude that even once the confusion is cleared up, different creative domains such as writing and graphic design might find it challenging to adapt to one unified rule book.

AI still needs some work

Back in the olden days of 2021, a Microsoft Research team examined whether automatic (AI-generated) alt text is preferable to our own wordings. They held interview-based usability testing sessions with screen reader users and alt text authors to answer two questions.

On the one hand, they wanted to see how authors fared when working with AI-written text. When asked about their experience, the authors appreciated having the AI text as assistance they could lean on to refine their descriptions.

Nevertheless, the study concluded that giving the authors more freedom in crafting their own descriptions produced a higher standard of output. If they started from an AI-generated description, they provided considerably less detail.

  • Automatic alt text: “A person sitting on a table.”
  • Author alt text, starting from a blank interface: “A young lady with dark curly hair and glasses, sitting down at a coffee table. She is holding an espresso cup with her right arm and leaning her head on her left hand.”
  • Author alt text, starting from the automatic text: “A young female person sitting on a table, smiling at the camera.”
Table from Microsoft’s Designing tools for high-quality alt text authoring research.

Screen reader users, when asked, likewise noted that starting from an AI alt text baseline reduced writing quality. Since the automatic descriptions were vague and usually a single sentence long, they anchored similar expectations in authors’ minds. This resulted in authors leaving out further details that could have improved the overall reading experience.

“The feedback interfaces highlighted considerable differences in the perceptions of authors and SRUs regarding “high-quality” alt text. Finally, authors crafted significantly lower quality alt text when starting from the automatic alt text compared to starting from a blank box.”

In essence, screen reader users found authentic (i.e. human-composed) wordings more apt. They acknowledged that AI descriptions are improving, but noted that they still leave much to be desired.

Woman sitting at table and leaning on her elbow as she is looking at laptop in front of her.
Image by pch-vector on Freepik

Screen reader user expectations

From an alt text author perspective, the main problem seems to be the uncertainty about what to include in the description. Until we get a straightforward formula for ideal alt text generation, this may be a lingering concern.

Fortunately for us, the same Microsoft study asked screen reader user participants what would be ideal for them. (This brings us back to the golden rule of UXR, doesn’t it? Ask, don’t assume.)

In a nutshell, the feedback highlighted the following:

  • Accuracy: Alt text shouldn’t include wrong or misleading information.
  • Completeness: The description should be detailed enough to provide important and contextual aspects of the image.
  • Conciseness: Keep it short enough that it doesn’t disturb the general flow of the whole reading experience, and make the words count.

“Why [does] the author of that article or that piece of collateral choose this image? How is it relevant to what I’m reading?”

Some of the participants agreed that completeness and descriptiveness are more important than being concise; however, there wasn’t a general consensus amongst the whole group. For now, it would seem that there is no one-size-fits-all approach in terms of which aspect to prioritise.
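Accuracy ultimately needs a human, but the completeness and conciseness criteria can at least be nudged automatically. A toy pre-publish checklist, where every threshold and phrasing rule is an illustrative assumption rather than anything from the study:

```python
def alt_text_review(alt):
    """Heuristic reminders mapped to the criteria above.

    All thresholds are illustrative assumptions; this cannot judge
    accuracy, which needs a human who has seen the image.
    """
    notes = []
    if alt.lower().startswith(("image of", "picture of")):
        # Screen readers already announce that an image follows.
        notes.append("conciseness: drop 'image of' / 'picture of'")
    if len(alt) < 20:
        notes.append("completeness: may be too brief to convey context")
    if len(alt) > 100:
        notes.append("conciseness: over ~100 characters; trim if possible")
    return notes

print(alt_text_review("Image of a dog"))  # flags both phrasing and brevity
```

Because the users themselves disagreed on which criterion wins, a tool like this should surface reminders, not block publishing.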

If you find alt-text writing difficult, you are not alone. If you find current alt text practices subpar, you are also not alone.

We probably shouldn’t be all this new to the game, but at least we are trying to improve now. Before we know it, we will have totally aced this. The first step is always becoming aware, and then we get to build a solution. Onwards and upwards!

Sidenote:

The Web Content Accessibility Guidelines 1.0 did formalise alt text as an accessibility feature as early as 1999. We just weren’t very good at incorporating or adopting it as a practice.

Tool(s) of the trade:

As mentioned by Ashley in the comments below, one useful tool that guides you through writing better alt text is https://whatthealt.com

Thanks for reading! ❤️

If you liked this post, follow me on Medium for more!

References & Credits:

  • Functional Accessibility Evaluator — Rulesets
  • Gleason, C., Carrington, P., Cassidy, C., Morris, M. R., Kitani, K. M., & Bigham, J. P. (2019, May). “It’s almost like they’re trying to hide it”: How User-Provided Image Descriptions Have Failed to Make Twitter Accessible. In The World Wide Web Conference (pp. 549–559).
  • Mack, K., Cutrell, E., Lee, B., & Morris, M. R. (2021, October). Designing Tools for High-Quality Alt Text Authoring. In The 23rd International ACM SIGACCESS Conference on Computers and Accessibility (pp. 1–14).
  • McEwan, T., & Weerts, B. (2007). ALT text and basic accessibility.
  • Williams, C., de Greef, L., Harris III, E., Findlater, L., Pavel, A., & Bennett, C. (2022, April). Toward supporting quality alt text in computing publications. In Proceedings of the 19th International Web for All Conference (pp. 1–12).
  • Wu, S., Wieland, J., Farivar, O., & Schiller, J. (2017, February). Automatic alt-text: Computer-generated image descriptions for blind users on a social network service. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 1180–1192).
  • Images by pch-vector on Freepik

--

Design / Psych / UX / AI & more | Here to translate scientific research into practical tips & advice.