We Used UX Research After an Accessibility Audit. Here’s What Happened.

Branon Eusebio
Published in Human-centered
8 min read · Dec 20, 2022

Hey there — my name is Branon Eusebio, and I’m a Design Technologist at Color Health! I’ve had the pleasure of contributing to the accessibility auditing process at Color, and today I want to tell you how it went when I recently put it into practice on one of our products. I’m also joined by my colleague Eric Richards, a User Research Operations Specialist at Color, who will tell you how he used user research to put some of my accessibility work to the test. It’s going to be a great time — but first, let me get you up to speed on the context!

Accessibility at Color Health

Accessibility is an imperative in almost all of the work we do at Color building public health technology platforms. A core tenet of our healthcare delivery platform is empowering our partners and customers to reach all patients for care, regardless of barriers to access. To that end, we need to ensure we’re building products with web accessibility at the forefront of design and development to account for all user needs — addressing localization, assistive technologies, and more.

Luckily, tons of different measures and practices are in place at Color to make sure technology is built to be accessible. One of the best tools in our toolbox is our accessibility audit process. This is the process by which we evaluate the accessibility state of an app at a given time, create trackable issues out of the results, and remediate them with fixes. The scope of these audits is strictly digital accessibility for our users with disabilities, with the goal being to ensure our product complies with the Web Content Accessibility Guidelines (WCAG) and web best practices.

Earlier this year, I performed one of these full end-to-end audits on one of our products just before an important launch of some new features. Now that you’re all caught up, I’m going to break down how it went, followed by Eric, who will tell you about the subsequent user studies done to validate the work. Here we go!

The Accessibility Audit with Branon Eusebio

This accessibility audit involved four phases: auditing, logging results, remediating, and cleaning up. Let’s tackle each phase in that order!

Phase 1 — Auditing

In the audit phase, I began by documenting the core pages and flows of the product in question. I got a nice Google Doc together to hash out how a user may traverse the app step-by-step, and which routes/URLs they’re visiting along the way. With a list of pages to audit, I was ready to rock and roll.

To perform the actual audits, I leveraged Deque’s axe DevTools Chrome plugin, which runs a suite of accessibility tests on a given page in the browser. Axe offers two different test suites:

  1. Automated: a quick automated sweep for the most common accessibility issues on a webpage.
  2. Intelligent Guided: more targeted tests that require a bit of manual work (like finding all images and double-checking their alt text).

For both test suites, axe outputs amazing data in both an interactive UI and .csv spreadsheets. I ran both of these test suites on each page and massaged their output data into some nice sheets of our own, adding notes on more complex issues after poking around in the interactive UI. I repeated this process until every page in the core flows document was covered. Phew!

Axe DevTools run on Google.com as an example, showing 13 issues with a detailed breakdown by issue
A Google Sheet to house the Automated Tests output results from axe DevTools
A Google Sheet to house the Guided Tests output results from axe DevTools
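To give a sense of the “massaging” step, here’s a minimal sketch of turning axe-style results into spreadsheet-friendly rows. The input shape mirrors axe-core’s documented results format (a `violations` array of rules, each with affected `nodes`); the sample data itself is hypothetical, not from our actual audit.

```javascript
// Flatten axe-core-style results into one row per affected element,
// ready to paste into a sheet. Shape follows axe-core's documented
// output; the sample below is made up for illustration.
function violationsToRows(results) {
  const rows = [];
  for (const violation of results.violations) {
    for (const node of violation.nodes) {
      rows.push({
        rule: violation.id,
        impact: violation.impact,
        description: violation.description,
        selector: node.target.join(" "),
      });
    }
  }
  return rows;
}

// Hypothetical sample shaped like axe-core output:
const sample = {
  violations: [
    {
      id: "image-alt",
      impact: "critical",
      description: "Images must have alternate text",
      nodes: [{ target: ["img.logo"] }, { target: ["img.banner"] }],
    },
  ],
};

console.log(violationsToRows(sample));
```

One row per affected element (rather than per rule) makes it easy to sort, filter, and annotate individual findings in a sheet.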

Phase 2 — Logging Results

Next up: logging issues. Now that I had a couple of nice spreadsheets mapping all of the issues found in the axe DevTools audits, it was time to create actionable Jira tickets. I dug through all of the output issues, found common themes, identified fixes, and filled out 20–30 tickets in an “Accessibility” epic. I prioritized the tickets in Jira (P0, P1, P2, and so on, in descending priority) based on axe’s severity rating for each issue, and I made sure to add as much information to the tickets as possible, including descriptions, recommended fixes, links to WCAG criteria docs, and more.
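A mapping like the one described above can be kept as a simple lookup. Axe reports impact levels of critical, serious, moderate, and minor; the specific Jira priority assignments below are illustrative, not Color’s actual policy.

```javascript
// Illustrative severity-to-priority mapping: axe's four documented
// impact levels mapped onto descending Jira priorities. The exact
// assignments are an assumption for this sketch.
const IMPACT_TO_PRIORITY = {
  critical: "P0",
  serious: "P1",
  moderate: "P2",
  minor: "P3",
};

function jiraPriority(impact) {
  // Default unknown or missing impacts to the lowest priority.
  return IMPACT_TO_PRIORITY[impact] ?? "P3";
}

console.log(jiraPriority("critical")); // "P0"
```

Keeping the mapping in one place means the whole epic can be re-prioritized consistently if the policy changes.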

Phase 3 — Remediating

With a tidy epic all ready to go, it was time to get these issues fixed! Our team usually divvies up remediation work with the product managers and engineers who own the product being audited, but with the upcoming launch I decided to lend a hand and fix the issues myself. I grabbed all of the tickets in the epic and made the engineering improvements to resolve 100% of the outstanding issues over a few sprints! As I fixed each issue, I re-ran the axe tests on the same page to confirm it was resolved, which gave me full confidence I was making the necessary improvements.

Phase 4 — Cleaning Up

Many tickets later, it was time for final steps. Deque estimates that axe’s combined test suites catch around 80% of accessibility issues in web applications, so I did some final manual auditing to cover the rest of our bases. This resulted in a handful of extra tickets that I logged and addressed in an identical manner.

And just like that, our audit was complete! But — before we considered the product ready to go, we needed to validate my work and see just how usable the app was after our accessibility fixes. Luckily, I knew a guy! Let’s hear from Eric about how he did just that.

The Accessibility User Studies with Eric Richards

Hi! I’m Eric and I’m a User Research Operations Specialist at Color. I partner with our user research team to recruit and plan for a variety of user research studies, including accessibility studies. In order to validate the accessibility changes made to the product through Branon’s audit, I worked with a user researcher, Gabriel, to conduct usability testing with people who are blind or have low vision and use screen readers to navigate websites. We decided to conduct usability testing because we wanted to observe how the audit translated into the end user experience and identify any remaining opportunities for improvement.

Running the Studies

To assist with participant recruitment, we reached out to the Vista Center for the Blind and Visually Impaired, a 501(c)(3) nonprofit headquartered in Palo Alto, CA, that serves people who are blind or have low vision. We connected with Alice Turner, Director of Community and Corporate Relations, because the Vista Center could identify a diverse set of participants from the community and had past experience working with companies on accessibility-focused user research. Through this organization, we were able to recruit a group of people with varying degrees of experience using different screen readers, including JAWS, NVDA, and VoiceOver on Mac. This group also represented a large age range, with the youngest participant in their 20s and the oldest in their 70s. Participants were compensated for their time.

Before sessions with participants began, we created a research plan that included a series of tasks we wanted to ensure participants could complete using this product. These tasks closely followed the core flow that the audit covered. We also created accounts in our staging environment specifically for this test. And since we were conducting sessions remotely over Zoom, we ran a pilot session with Alice Turner from the Vista Center, who uses a screen reader, to ensure that our session materials and setup (e.g. consent forms, testing account information, accessing the site) would be accessible for participants.

The Results

Overall, we found that the accessibility audit efforts paid off — people found it easy and straightforward to navigate the product using screen readers, and there were no major usability issues. We received positive feedback in the process, including one participant who said,

“Somebody knew about accessibility when they designed this.”

This was amazing for our team to hear and gave us additional confidence that people who use screen readers will be able to navigate through the product successfully.

We also received helpful suggestions for improvement from participants. For example, participants found it difficult to locate a feedback survey on the page that contained their genetic ancestry results, since it didn’t have a dedicated header. In addition, we learned that we could make buttons more informative by customizing their labels for screen readers using an aria-label. For instance, instead of the screen reader reading “View results,” the button’s visible text, it could read “View Results for <this genetic test, etc.>,” which saves the user from navigating around the page to gather the necessary context for the button. Lastly, one participant who had low vision but was not blind noted that the colors in some key informational images did not have enough contrast and were difficult to see.
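The aria-label technique can be sketched in a few lines. The helper and the test name below are made-up examples for illustration, not actual Color product strings; the aria-label attribute itself is the standard WAI-ARIA mechanism for overriding an element’s accessible name.

```javascript
// Sketch: give a generic "View results" button a context-rich
// accessible name via aria-label. Screen readers announce the
// aria-label; sighted users still see the short visible text.
// Helper name and test name are hypothetical.
function accessibleButtonHtml(visibleText, context) {
  const label = `${visibleText} for ${context}`;
  return `<button aria-label="${label}">${visibleText}</button>`;
}

console.log(accessibleButtonHtml("View results", "Genetic Ancestry Test"));
// → <button aria-label="View results for Genetic Ancestry Test">View results</button>
```

One caveat worth knowing: aria-label replaces the visible text for assistive technology, so it should always start with the visible text (as above) to keep voice-control users able to activate the button by its on-screen name.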

As a result of the sessions, our team got a strong signal that users could successfully navigate core aspects of this product. The team is now exploring improvements to the design and is planning to implement suggestions in the near future. And since this product uses design components that are shared across other Color products via a nifty design system, any changes will benefit not only this product but other Color products as well. Lastly, through this research, we learned more first-hand about best practices for conducting remote user research with people with disabilities that we can apply to future accessibility user research studies.

Summing it up

It’s important to recognize that accessibility is an ever-changing, constantly evolving commitment. Many believe that you simply build an app, make some accessibility fixes, and boom — your app is accessible; in practice, however, it takes continuous iteration to make sure the UX takes every user into account. In other words, accessibility is less a “state of being” for an application and more an ongoing endeavor, with cycles of auditing, fixing, testing, and releasing. That’s exactly why we believe it’s important to share some of our processes for building accessible health technology — we hope that, in shining a light on our own accessibility adventures, the spirit of equitable UX may be illuminated in others.

Sincerely,

Branon Eusebio and Eric Richards

Check out the Vista Center here. To contact Alice Turner, Director of Community and Corporate Relations, email aturner@vistacenter.org.


Branon Eusebio

I’m a UX-focused Software Engineer from San Diego! I’m passionate about full stack engineering, design systems, a11y, & neuroscience.