Harnessing the power of Automated Accessibility Testing

John Gimber
Deloitte UK Engineering Blog
Feb 23, 2024
A person using an accessibility device
Photo by Sigmund on Unsplash

Automated testing is good — checking the basic functionality of a product at high speed and repeatedly.

Accessibility testing is good — making sure that the product is usable productively and comfortably by the maximum number of people.

Automated accessibility testing (AAT) is great, and many see it as the “gold standard”. Others, though, curse accessibility testing, seeing it as a drag or an expense they would rather not deal with. Yet it is becoming more important than ever: many public sector agencies now mandate it, and the recent WCAG (Web Content Accessibility Guidelines) 2.2 rule updates may be catching people out. So, the big question is: “How do we do this without breaking the team, the product, and the bank?”

There are several scenarios we need to look at to understand the “why” of this, so let us examine them in turn.

Scenario one: No accessibility testing

Imagine a 3-year project. It is finally done! We have a working system; we have working automated functional tests (so we know that the product does everything it is supposed to do). Everything is slick, everything is stable. But we have no accessibility tests.

And why is this omission so important?

Anyone with a disability (a visual impairment such as the common colour-blindness or cataracts, a physical challenge, or even a mental health issue such as anxiety or difficulty with memory retention) could be using the new product. But if it is not built correctly, both visually and non-visually “under the hood”, then the barriers to engaging with the product are too high. Imagine if people could not submit their taxes or access health services online because the government sites were not built correctly… text that could not be read because of contrast issues, screen readers that could not read screen content aloud… it would become a national scandal.
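
Those contrast issues, incidentally, are not a matter of taste: WCAG defines an exact formula for them, which makes them one of the easiest things to check automatically. The sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas (the function names are mine; the maths is from the standard).

```typescript
// WCAG relative luminance: each sRGB channel is linearised, then weighted.
type RGB = [number, number, number];

function relativeLuminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio is (L1 + 0.05) / (L2 + 0.05), with the lighter colour on top.
function contrastRatio(fg: RGB, bg: RGB): number {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// WCAG 2.2 level AA requires at least 4.5:1 for normal-sized text.
function meetsAA(fg: RGB, bg: RGB): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21 — black on white
console.log(meetsAA([119, 119, 119], [255, 255, 255])); // false — #777 on white is ≈ 4.48:1
```

Note how close that grey is to passing: this is exactly the kind of failure a human eyeballing the page will miss and an automated check will catch every time.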

With this in mind, it is time to manually test the accessibility of the product just before go-live.

Boom, lots of issues are found. There is a frantic panic to upgrade the user interface (UI) to make the product accessible. This has two effects:

  • The once-slick and once-stable product is now being destabilised by hurried, late changes.
  • The automated tests that ran against the UI now need to be updated to match the new UI. Again, no longer slick, or stable.

Scenario two: Neglected AAT

Another 3-year project. Automated functional and accessibility testing is implemented during the development phase, but nobody is quite sure what to do with it. New features are being churned out, we know there are UI problems, but nobody’s dealing with them. There’s time later, right?

Quite often, a new feature will arrive in one code merge, and the automated tests for it will arrive in a separate work ticket. This disconnect between the work and the accessibility testing is where a large crack appears, and things just fall into it. Accessibility bugs surface, but they are not being dealt with.

Although we know the accessibility status of the product at any given time, we are still in the situation that we must make late changes to the product, and therefore late changes to the automated tests.

Scenario three: In-line AAT

What is this? OK, imagine a typical pull request for a code merge. A developer thinks, “yeah, this new feature works well, let’s merge it into the main code repository.”

All well and good.

Now (usually) a person or two will have to review the code prior to it being merged. If it is good, then it can be merged.

But what if… the review process were enhanced slightly?

Instead of this:

  1. Check the new feature works well.
  2. Check that the automated functional test coverage is in place and runs correctly.
  3. Approve and merge the new code and matching tests.

We go to this:

  1. Check the new feature works well.
  2. Check that the automated functional test coverage is in place and runs correctly.
  3. Verify that automated accessibility test(s) have been added for the new feature, in the same code branch as the feature itself, and that they run without error; then regression-test all previous accessibility scenarios in the AAT pack.
  4. If every check passes, approve and merge the new code and matching tests.

The accessibility check stage here aims to do the following:

  • If this is a new feature, ensure there is accessibility test coverage for it.
  • If this is a modified feature, ensure that the existing accessibility tests cover it suitably (i.e. overall test coverage is equal or better than before).
  • Either way, ensure that the existing AAT regression tests run correctly as well.
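
The enhanced review steps above amount to a simple gate: nothing merges unless the functional checks and the accessibility checks are all green. A toy sketch of that gate logic (the `MergeRequest` shape and check names are illustrative, not a real CI API):

```typescript
// Illustrative shape for the state of a pull request at review time.
interface MergeRequest {
  featureWorks: boolean;          // step 1: the new feature works well
  functionalTestsPass: boolean;   // step 2: automated functional coverage runs correctly
  a11yTestsAdded: boolean;        // step 3a: AAT added in the same branch as the feature
  a11yRegressionPasses: boolean;  // step 3b: the full AAT regression pack is still green
}

function reviewGate(mr: MergeRequest): "approve" | "block" {
  const checks = [
    mr.featureWorks,
    mr.functionalTestsPass,
    mr.a11yTestsAdded,
    mr.a11yRegressionPasses,
  ];
  // Step 4: only merge when every check, functional and accessibility, passes.
  return checks.every(Boolean) ? "approve" : "block";
}

console.log(reviewGate({
  featureWorks: true,
  functionalTestsPass: true,
  a11yTestsAdded: false,   // accessibility tests missing, so the merge is blocked
  a11yRegressionPasses: true,
})); // "block"
```

In practice this would be a required status check in your CI pipeline rather than a function, but the principle is the same: accessibility is a first-class merge condition, not an optional extra.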

Done this way, the merge will not commit any accessibility issues into the code repository.

Suddenly, there is no disconnect between the development of a new feature and the test coverage for it. No code is merged into the product that is not accessible by default.

We know (at any given time) that the product is accessible.

Think of this as an “accessibility tax” on each piece of work. Yes, it can be seen as a pain, and when you add up all the “taxes” on all the features, it is not an insignificant piece of work. However, if you work out how long it would take to make those changes at the end of the project instead, you would be surprised just how much more tax you would otherwise end up paying.

As you can imagine, there are definite benefits here:

  • We have gone from “code, review, accessibility test, refactor, re-test, re-review” to simply “code, accessibility test, review”. We spend less time refactoring, and more time focusing on delivering value.
  • The product can be released at any time, with the assurance it is accessible.
  • The overall cost of product development and accessibility testing is reduced. No rework, and no disruption at the end.

Could We Make It Better? Yes!

So far, we have successfully shifted the AAT activity left and merged it into the development of each feature. We can enhance this further by considering the automated functional tests (not the automated accessibility tests) and shifting left even further.

Assume we are using a browser automation tool like Selenium or Puppeteer, which lets us interact with the product’s UI in numerous ways — giving us the ability to click, type, drag, drop — you name it. If the user needs to do it, we can automate testing of it.

So, imagine this — we want to automate the clicking of a button. We could identify our button using an underlying ID, a position on the screen, a position within the page code, all sorts. Or we could identify our button via its ARIA label or caption (ARIA labels help assistive technologies, such as the JAWS screen reader, present page content audibly). Suddenly, if we interact with our on-screen controls only via accessible methods, then our automated functional tests will prove that a screen reader can interact with the page accessibly before we even start putting effort into proper automated accessibility testing.
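
To make that idea concrete, here is a minimal sketch of looking up a control by its role and accessible name instead of by a brittle ID or position. The element shape and the precedence rule (aria-label wins over visible text) loosely follow the accessible-name computation; real frameworks such as Testing Library’s `getByRole` or Playwright’s `getByLabel` do this far more thoroughly against a live browser.

```typescript
// Simplified model of an on-screen control, for illustration only.
interface UiElement {
  role: string;
  ariaLabel?: string;
  text?: string;
}

// The aria-label takes precedence over visible text, mirroring (very roughly)
// what a screen reader would announce for the element.
function accessibleName(el: UiElement): string {
  return el.ariaLabel ?? el.text ?? "";
}

function findByRoleAndName(page: UiElement[], role: string, name: string): UiElement | undefined {
  return page.find((el) => el.role === role && accessibleName(el) === name);
}

const page: UiElement[] = [
  { role: "button", ariaLabel: "Submit tax return", text: "Submit" },
  { role: "button", text: "Cancel" },
];
console.log(findByRoleAndName(page, "button", "Submit tax return")?.text); // "Submit"
```

The key property: if this lookup fails, the control has no usable accessible name, and a screen reader user would be just as stuck as the test. The functional test and the accessibility check become the same thing.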

There are additional tools as well, such as “accessibility linters” (one example is the Deque Axe DevTools Linter; a further discussion of available linters is linked under “Useful Tools” below). These assistive tools look at the product code as it is being developed and helpfully point out accessibility-related code errors or omissions to the developer on the fly. This is tooling working intelligently with your team, not slowing it down.
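
To give a flavour of what such a linter rule does, here is a deliberately naive sketch that flags images without an `alt` attribute in a fragment of markup. A real linter (Axe DevTools Linter, or `eslint-plugin-jsx-a11y` for React code) parses the source properly and covers many more rules; this regex version exists only to show the shape of the idea.

```typescript
// Toy linter rule: report <img> tags that carry no alt attribute.
// Regex-based scanning is NOT how real linters work; they parse the code.
function lintMissingAlt(html: string): string[] {
  const findings: string[] = [];
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  for (const tag of imgTags) {
    if (!/\balt\s*=/i.test(tag)) {
      findings.push(`Missing alt attribute: ${tag}`);
    }
  }
  return findings;
}

console.log(lintMissingAlt('<img src="chart.png"><img src="logo.png" alt="Company logo">'));
// one finding, for chart.png only
```

Because the feedback arrives while the developer is still typing, the fix costs seconds instead of a bug ticket.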

The upshot of these changes is that we cut code once, and only once. We do not have to cut some code, test it separately, go back, fix it, and then fix the other features that use it; we just do it once and move on. Shorter, faster, cheaper, less risk.

Accessibility is a contributing factor in development, functional testing, automated accessibility testing, and feature review. It is not something that happens when a feature gets chucked over the fence for testing. It is owned by the whole team.

So, in summary… automated accessibility testing is good, but it is only the start of the journey. Build it into everything you do and use the available tools. Integrate it into the development and review processes — connect the dots and close the feedback loop between the testing and the fixing, and you’ll find that when the time comes to take stock of your finished product, instead of needing to start a new “upgrade to accessibility” journey, you will realise that you have already arrived.

Useful Tools

Puppeteer: https://pptr.dev/

Selenium: https://www.selenium.dev/

JAWS: https://en.wikipedia.org/wiki/JAWS_(screen_reader)

Accessibility Linter: https://www.deque.com/axe/devtools/linter/

Free Accessibility Linters to Automate Accessibility Workflow:
https://www.digitala11y.com/free-accessibility-linters-to-automate-accessibility-workflow/

References

Accessibility testing (WCAG, 2019) (Accessed: January 2024)
https://www.w3.org/wiki/Accessibility_testing

Accessibility Testing: An Essential Guide (Sourojit Das, 2022)
https://www.browserstack.com/guide/accessibility-testing

How to do accessibility testing (DWP) (Accessed: January 2024)
https://accessibility-manual.dwp.gov.uk/best-practice/how-to-do-accessibility-testing

Testing for accessibility (UK Gov, October 2023) (Accessed: January 2024)
https://www.gov.uk/service-manual/helping-people-to-use-your-service/testing-for-accessibility

Understanding accessibility requirements for public sector bodies (UK Gov, October 2023) (Accessed: February 2024)
https://www.gov.uk/guidance/accessibility-requirements-for-public-sector-websites-and-apps

WCAG 2.2 and what it means for you (Abbott, 2024)
https://www.craigabbott.co.uk/blog/wcag-22-and-what-it-means-for-you/

Disclaimer

Note: This article speaks only to my personal views/experiences, is not published on behalf of Deloitte LLP and associated firms, and does not constitute professional or legal advice.

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only. Use of these names, logos, and brands does not imply endorsement.



I am an ISTQB & ISEB-accredited test manager at Deloitte, drawing on over 20 years of experience as an IT test specialist, QA consultant and test team leader.