How We Automate Accessibility Testing at Moonpig
At Moonpig we take accessibility very seriously: we want our website to be accessible and intuitive for all of our users, regardless of any physical or visual impairments they might have.
The Web Content Accessibility Guidelines (WCAG) are a set of accessibility standards developed by the World Wide Web Consortium (W3C). These standards exist as a tool to help web developers ensure their websites or web applications can be used by people with various disabilities, such as colour blindness or loss of motor control. There are three levels of compliance, ranging from A (lowest) to AAA (highest). At Moonpig we are aiming for AA compliance, which allows us to keep our brand colours and typographical styling while still providing a very good experience for disabled users.
Manually testing accessibility compliance is labour intensive and doesn’t scale well for an organisation that averages 16 releases per day to the website, so it’s prudent for us to investigate how much of that process we can automate.
In this blog post I am going to talk about a number of tools that help us achieve that goal: Lighthouse; ESLint with the JSX accessibility plugin; Jest Axe; and React Testing Library. I will also talk about how we automate tests around keyboard navigation.
Lighthouse

Lighthouse is an open-source tool that can be used to run audits against any web page. It is built into the Chrome DevTools and can be found under the “Audits” tab. Lighthouse can audit against a number of measures such as performance, accessibility, and SEO. Here we’re going to focus on the accessibility audit.
The Lighthouse accessibility audit looks through the markup to ensure that we’re using the correct semantic tags with the appropriate attributes, including ARIA roles. It also audits the colour contrast ratios between text and backgrounds.
We have configured Lighthouse to run via Puppeteer as part of our continuous integration (CI) pipeline on GitLab: every pull request opened against the website frontend repository runs this test, and the pull request is blocked should it fail. We currently run the test against the homepage and a gallery page, with accessibility score thresholds of 100% and 98% respectively.
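To illustrate the gating step, here is a minimal sketch of the kind of threshold check such a pipeline can run over Lighthouse's JSON output. The report shape (`categories.accessibility.score`, on a 0–1 scale) matches what Lighthouse emits; the function name and the surrounding wiring are assumptions for illustration, not our actual CI code.

```javascript
// Sketch of a CI gate over a Lighthouse JSON report (hypothetical helper).
// Lighthouse reports category scores on a 0..1 scale under
// report.categories.<category>.score.
function checkAccessibilityScore(report, threshold) {
  const score = report.categories.accessibility.score;
  if (score < threshold) {
    // Throwing gives the CI job a non-zero exit code, which blocks the PR.
    throw new Error(
      `Accessibility score ${score * 100}% is below the required ${threshold * 100}%`
    );
  }
  return score;
}

// Example: a stubbed report, as if produced for the gallery page.
const report = { categories: { accessibility: { score: 0.98 } } };
checkAccessibilityScore(report, 0.98); // passes
// checkAccessibilityScore(report, 1.0); // would throw and fail the pipeline
```

In a real pipeline the `report` object would come from running Lighthouse programmatically against a Puppeteer-controlled Chrome instance rather than from a stub.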
ESLint JSX A11Y Plugin
ESLint is a static analysis tool that reads our source code and alerts us to errors without executing it. We can also configure our IDEs (most of us here use VSCode) with an ESLint plugin to give us instant feedback as we code. ESLint can be extended with a number of plugins; one of these, eslint-plugin-jsx-a11y, parses our React code and alerts us to any accessibility issues it finds. This instant feedback helps us move faster and fix accessibility issues early.
ESLint is also configured to run as part of our CI process, so we can catch errors that might have been missed by the engineer.
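For reference, enabling the plugin is a small configuration change. The sketch below shows a minimal `.eslintrc.js` using the plugin's recommended rule set; the exact rules and any overrides would vary per project, and the `jsx-a11y/alt-text` override is just an example.

```javascript
// .eslintrc.js — minimal sketch of enabling eslint-plugin-jsx-a11y.
module.exports = {
  plugins: ['jsx-a11y'],
  // "recommended" switches on the plugin's recommended accessibility rules.
  extends: ['plugin:jsx-a11y/recommended'],
  rules: {
    // Example override: treat missing alt text as a hard error.
    'jsx-a11y/alt-text': 'error',
  },
};
```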
Jest Axe

When combined with Jest and React Testing Library, we can run axe assertions against our React components. While most errors will be caught by the ESLint plugin described earlier, this gives us an extra layer of resilience at a low cost.
Typically we run Jest in watch mode while developing, so if we update a component in a way that breaks accessibility, the unit test running the Jest Axe assertion should fail immediately.
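A typical jest-axe assertion looks like the sketch below. The `<Button>` component is hypothetical; `render` comes from `@testing-library/react`, and `axe` and `toHaveNoViolations` from `jest-axe`. This is a Jest test file, so it runs inside Jest's jsdom environment rather than as a standalone script.

```javascript
// Sketch of a jest-axe unit test (hypothetical <Button> component).
import React from 'react';
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { Button } from './Button'; // hypothetical component under test

// Register the custom matcher so expect(...).toHaveNoViolations() works.
expect.extend(toHaveNoViolations);

it('has no axe-detectable accessibility violations', async () => {
  const { container } = render(<Button>Add to basket</Button>);
  // axe scans the rendered DOM and returns any rule violations it finds.
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```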
React Testing Library
React Testing Library is a tool that allows us to render and interact with our React components and test against the rendered DOM. It has a number of “getter” functions that implicitly assert that we are following correct accessibility standards.
An example is the `getByAltText()` function. Using it increases confidence that we’re adding meaningful alt text to an image and that this doesn’t regress due to a code change. This is preferable to using another selector such as an ID, and it avoids us having to make an explicit assertion against the alt text. Some others are `getByRole()`, `getByTitle()`, and `getByLabelText()`. Wherever possible we make use of these functions and only look to use alternatives where they are not appropriate, e.g. `getByText()` to find an element that in most cases won’t have a role.
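The implicit-assertion style can be sketched as follows, assuming a hypothetical `<ProductCard>` component that renders a product image. Because `getByAltText` throws when no element matches, the query itself is the assertion.

```javascript
// Sketch of an implicit accessibility assertion via getByAltText
// (hypothetical <ProductCard> component and props).
import React from 'react';
import { render, screen } from '@testing-library/react';
import { ProductCard } from './ProductCard'; // hypothetical component

it('renders the product image with meaningful alt text', () => {
  render(<ProductCard name="Birthday card" imageSrc="/card.jpg" />);
  // Throws (failing the test) if no image has this alt text, so no
  // explicit expect() against the alt attribute is needed.
  screen.getByAltText('Birthday card');
});
```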
Testing Keyboard Navigation
While the above techniques take us closer to automating our accessibility testing with very little effort, they only go so far due to limitations of static analysis.
In some cases we need to interact with our React components and assert against the DOM explicitly to get the level of confidence we need.
The main navigation in the header at Moonpig was built with keyboard accessibility in mind, so we have automated tests that fire keyboard events for arrow key presses and assert that the correct button or link has received focus. We also test opening the dropdown with the keyboard, asserting that it renders and focuses the landing page link (this link is not visible when using mouse hover). There is also a test to prove that when the user hits the Escape key, the dropdown closes and the parent button that initially opened it receives focus.
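A test of this shape can be sketched as below. The `<MainNav>` component and its item labels are hypothetical; `fireEvent` is from `@testing-library/react`, the `key` values follow the standard `KeyboardEvent.key` names, and the `toHaveFocus()` matcher comes from `@testing-library/jest-dom`.

```javascript
// Sketch of a keyboard-navigation test (hypothetical <MainNav> component).
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { MainNav } from './MainNav'; // hypothetical component

it('moves focus to the next nav item on ArrowRight', () => {
  render(<MainNav />);
  const first = screen.getByRole('button', { name: 'Birthday' }); // hypothetical label
  first.focus();
  // Fire the same keydown a keyboard user would produce.
  fireEvent.keyDown(first, { key: 'ArrowRight' });
  // toHaveFocus() is provided by @testing-library/jest-dom.
  expect(screen.getByRole('button', { name: 'Anniversary' })).toHaveFocus();
});
```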
We have also recently started to integrate a React Testing Library companion package called user-event, which under the hood may fire a number of events depending on the interaction we wish to dispatch. For example, the `click` method also triggers other events such as `mouseEnter` and `mouseMove`, and focuses the element if it doesn’t already have focus. It will also focus the linked input if the click was triggered on its label.
The goal of this package is to simulate events as though they were real events taking place in a browser.
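The label/checkbox behaviour mentioned above can be sketched like this. The markup here is inline for illustration; `userEvent.click` is from `@testing-library/user-event` (the synchronous call shape of the v13 API), and `toBeChecked()` comes from `@testing-library/jest-dom`.

```javascript
// Sketch of user-event dispatching a realistic click sequence.
import React from 'react';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

it('ticks the checkbox when its label is clicked', () => {
  render(
    <label>
      Add gift wrap
      <input type="checkbox" />
    </label>
  );
  // Clicking the label fires the full mouse/focus event sequence and
  // activates the associated input, just as a real browser would.
  userEvent.click(screen.getByLabelText('Add gift wrap'));
  expect(screen.getByLabelText('Add gift wrap')).toBeChecked();
});
```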
One could argue that we might be better off testing these sorts of things in a real browser. That certainly has some advantages from a confidence perspective, but there are some drawbacks:
- Longer feedback loop. Not only are these tests slower to run, you also lose the power of watch mode: Jest’s watch mode looks at your dependency graph and runs all the appropriate tests based on what’s changed since your last commit, whereas Cypress, for example (a browser-based test runner), only watches your spec files and runs against compiled output.
- Because browser-based test runners run against compiled output, you lose the ability to measure test coverage against the source.
To conclude, I think we get a huge benefit from automating large parts of our accessibility testing at Moonpig: it saves us time and reduces the likelihood of inaccessible features being released, or of refactors causing accessibility regressions. However, this doesn’t mean that we take our eye off the ball when it comes to manually testing accessibility. We still use screen reader technologies such as VoiceOver, but there is much more we can do.
I’d like us to bring in a disabled user who could help provide insight into how we could further improve the user experience for our disabled users.
Robert Smith is a Senior Software Engineer focusing on Frontend Engineering at Moonpig.