Accessibility’s Observer’s Paradox

By deliberately improving the accessibility of your website, you increase the likelihood of accessibility errors.

Michael Schofield
Metric
3 min read · Mar 4, 2019


Last week, WebAIM published a damning accessibility analysis of the top 1,000,000 home pages, and I read its findings as evidence of accessibility’s own “observer’s paradox.”

By deliberately improving the accessibility of your website, you increase the likelihood of accessibility errors.

Home pages with ARIA present averaged 11.2 more detectable errors than pages without ARIA. An increase in the number of ARIA attributes also had a moderate correlation with increased errors. In other words, the more ARIA in use, the higher the detectable errors. This does not necessarily mean that ARIA introduced these errors (it’s likely these pages are simply more complex), but pages typically have more errors when ARIA is present, and even more so with higher ARIA usage. …

Pages with a valid HTML5 doctype had significantly more page elements (average of 844 vs. 605) and errors (average of 61.9 vs. 53.3) than those with other doctypes.

The adoption of any of these [JavaScript] frameworks is aligned with additional accessibility errors. This does not necessarily mean that the frameworks caused these errors, but does indicate that home pages with these frameworks have more errors than pages without. …

Home pages in the sample that utilize the popular Bootstrap framework had 1.3 million more accessibility errors than pages that did not utilize Bootstrap. …

I include these last two quotes because I’ve seen them largely interpreted with one critical takeaway: something along the lines of “newfangled JavaScript frameworks (React, Vue, Angular, etc.) are bad,” and, especially when the criticism turns to Bootstrap in particular, “lazy.”

Sure, maybe, but I interpret these a little more softly: many people choose these frameworks because they include — either as part of the package or with an easy add-on — a robust layer of accessibility options. Bootstrap, by itself, is a pretty smart choice for the accessibility-conscious designer.

I prefer to think there is more to it than will or ignorance.

The availability of more tools to assess accessibility, and thus to tailor the experience, doesn’t really correlate with doing it well. It does correlate with a more complex experience with more room for error, and so with a need for better testing, especially, you know, the in-person human type of testing.
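To make “more room for error” concrete, here is a hypothetical sketch of my own (TypeScript, not anything from the WebAIM sample) showing how a well-intentioned ARIA retrofit can itself become one of the detectable errors the survey counts, while quietly breaking something no scanner measures.

```typescript
// Hypothetical example: a custom "button" built from a div and retrofitted
// with ARIA, which ends up worse than the plain <button> it replaced.
function makeFancyCloseButton(): HTMLElement {
  const el = document.createElement("div");
  el.className = "modal-close";
  el.setAttribute("role", "button");
  // Meant to reference a visually hidden label, but the element with
  // id="close-label" was dropped in a redesign. A broken aria-labelledby
  // reference is exactly the kind of ARIA error automated checkers detect.
  el.setAttribute("aria-labelledby", "close-label");
  el.addEventListener("click", () => console.log("closed"));
  // Also missing: tabindex="0" and an Enter/Space key handler, so keyboard
  // users hear a "button" announced but can never activate it. Rule-based
  // scanners are unlikely to catch that; testing with people does.
  return el;
}

// The ARIA-free equivalent is simpler and gives a scanner nothing to flag:
function makePlainCloseButton(): HTMLButtonElement {
  const el = document.createElement("button");
  el.className = "modal-close";
  el.textContent = "Close";
  el.addEventListener("click", () => console.log("closed"));
  return el;
}
```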

In my own usability tests with blind users, run after I had tried to improve the search results of a popular library discovery service with additional screen-reader readouts and context I thought, and hoped, would be useful, I was surprised to find that I had only added to the confusion.
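The additions looked, roughly, like this hypothetical reconstruction (the class names and wording are mine, not the actual discovery-layer code): visually hidden prefixes and a polite live region that, on paper, give screen-reader users more context.

```typescript
// Hypothetical reconstruction: extra screen-reader context added to a
// search-results list via hidden text and a polite live region.
function annotateResults(list: HTMLElement): void {
  const results = Array.from(list.querySelectorAll<HTMLElement>(".result"));

  // Announce overall context once the results render.
  const live = document.createElement("div");
  live.setAttribute("aria-live", "polite");
  live.className = "sr-only"; // assumes a visually-hidden utility class exists
  live.textContent = `${results.length} results loaded.`;
  list.prepend(live);

  // Prefix each result with a hidden "Result N of M" readout.
  results.forEach((result, i) => {
    const prefix = document.createElement("span");
    prefix.className = "sr-only";
    prefix.textContent = `Result ${i + 1} of ${results.length}: `;
    result.prepend(prefix);
  });
}
```

Nothing here trips a scanner. Whether it helps, or merely duplicates what the screen reader already announces from the list semantics, is exactly the kind of question only testing with users can answer.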

This is a phenomenon I’ve noticed that exposes the gulf between the technically accessible and the accessibly usable: in designing to avoid the red flags thrown up by accessibility scanners like WAVE or aXe, we sidestep the question of usability and, with it, the need to test with the actual users of the interface.
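For contrast, here is a minimal sketch of the kind of check those scanners run, assuming axe-core is installed and the script executes in the page under test. What matters is what it can’t tell you: an empty violations array means no detectable errors, not a usable experience.

```typescript
// Minimal sketch of an automated accessibility scan with axe-core.
// Assumes a browser (or jsdom) environment and a bundler with esModuleInterop.
import axe from "axe-core";

async function scanPage(): Promise<void> {
  const results = await axe.run(document, {
    // Roughly the rule set scanners report against: WCAG 2.0 A and AA.
    runOnly: { type: "tag", values: ["wcag2a", "wcag2aa"] },
  });

  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact}): ${violation.help}`);
    for (const node of violation.nodes) {
      console.log(`  ${node.html}`);
    }
  }

  // An empty list means "no detectable errors"; it says nothing about
  // whether a screen-reader user can actually make sense of the page.
  console.log(`${results.violations.length} detectable violations`);
}

scanPage();
```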

The assumptions predominantly able designers make about what is accessible are probably wrong. Not out of malice or gross intent, but out of an ignorance absolved only by a sufficiently diverse team, sufficient user testing, or both.

Clapping (👏) is a super way to brighten my day. Check out my podcast Metric: The User Experience Design Podcast (right here on Medium), and consider subscribing to my newsletter Doing User Experience Work. ❤ It goes a long way if you’re able to support this kind of thinking on Patreon.
