I’ve seen a few accessibility statements generated by people in the field, but I do not consider myself an expert report writer. There are lots of ways to go about writing them, but I find that when they are delivered they tend to overwhelm the non-technical people and frustrate the technical people. If you’re interested in this topic, it is worth checking out the Chang School’s new book Professional Web Accessibility Auditing Made Easy, which is released under a Creative Commons license. Google’s How To Do an Accessibility Review may also be useful to look at. I am taking a different approach to this topic.
Accessibility reports are only going to be useful if they create change. On the most basic level, to fix any technical problem, you need to be able to replicate it. The easier and faster you make it for everyone to replicate a problem, the easier it will be to fix. At a minimum, each finding should record:
- Location: URL of the page being reviewed (if more than one).
- Date: If working against dynamic content, it may be important to state the day the page was reviewed.
- Reviewer: Hopefully the report was done by one person, but either way there must be a clear means of communication should more clarification be required.
- Guideline Reference: If optional elements are included in the review (AAA requirements, for instance) that should be clear. The report should reference the WCAG success criteria required by the project and describe the barriers for the people affected.
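Those fields can be captured in a simple structured record so every finding is replicable. A minimal sketch in Python — the field names here are illustrative, not from any standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReportEntry:
    """One replicable accessibility finding (field names are illustrative)."""
    location: str      # URL of the page being reviewed
    reviewed_on: date  # matters when content is dynamic
    reviewer: str      # a clear means of contact for follow-up questions
    guideline: str     # e.g. "WCAG 2.0 SC 1.1.1 (Level A)"
    barrier: str       # who is affected, and how

entry = ReportEntry(
    location="https://example.com/contact",
    reviewed_on=date(2019, 5, 1),
    reviewer="reviewer@example.com",
    guideline="WCAG 2.0 SC 1.1.1 (Level A)",
    barrier="Submit button is an image with no alt text, so screen "
            "reader users cannot tell what it does.",
)
```

Whatever the exact shape, the point is that a developer reading one record has everything needed to reproduce the problem.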
Focusing on Barriers Not Rules
The Web Content Accessibility Guidelines are guidelines. There are success criteria like WCAG 2.0’s 4.1.1 Parsing, which is important because standards-compliant code is absolutely a best practice. That said, the web has been built to be much more robust than this. As an example, the 4.1.1 Parsing page linked above does not validate with the W3C’s own tool. Furthermore, it may have been more relevant in 2008, when WCAG 2.0 was released, than it is today.
I love the work that 18F did to build an approach to triaging accessibility issues: which ones are the most serious and should be addressed first? They describe it as a checklist, but it is really more of an approach. I would like this to take a systems approach rather than a page-level approach, because ultimately if you can fix the problems upstream you will have a solution with a bigger impact. This could mean altering a template that is used in many places on the site, but it could also mean fixing a library on GitHub that affects millions of sites.
It should be clear though that even if you address all of the A, B & C level issues in their checklist, your site won’t necessarily meet WCAG 2.0 AA (or Section 508). Also, as Derek Featherstone said, “Compliance is a starting point, not an ending point.”
Reporting Tools from the W3C
The World Wide Web Consortium (W3C) has also produced a Template for Accessibility Evaluation Reports which has some interesting ideas. In their Review Process they suggest listing the automated tools and manual processes used for the review. This helps a technical user identify how to better replicate the problem and to use the same testing framework used by the people evaluating it.
Another interesting tool is the W3C’s WCAG-EM Report Tool. This is an interactive form that produces a standard report structure that could fairly easily be compared with earlier evaluations. Like most everything the W3C does, you can find the open source code for the project on GitHub. This project can be extended to meet the needs of government departments.
I am a big fan of Microsoft’s open source Accessibility Insights tool, not because it makes your site WCAG 2.0 AA compliant, but because it so often catches the low-hanging fruit of accessibility errors that we really should never see on a public site. It can be used to easily find accessibility errors even in public government sites. Accessibility Insights is one of many tools built on Deque’s open source axe-core library. Axe-core is one of the automated testing tools working to comply with the W3C’s Accessibility Conformance Testing (ACT) rules.
I started asking questions in some issue queues about using these automated tools to help build accessibility reports and learned about Axe Reporter EARL. This code can be executed after running axe-core to produce a file that conforms to the W3C’s Evaluation And Reporting Language (EARL) 1.0. So what is EARL? It is a machine-readable RDF schema that allows accessibility errors to be recorded.
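To make that concrete, an EARL assertion essentially says “this tool ran this test against this page and got this result.” A sketch of that shape as JSON-LD — this shows the general structure, not the authoritative vocabulary, so check the EARL 1.0 schema for the real terms:

```python
import json

# A minimal EARL-style assertion expressed as JSON-LD. This is a sketch
# of the shape only; the EARL 1.0 schema defines the actual vocabulary.
assertion = {
    "@context": "http://www.w3.org/ns/earl#",
    "@type": "Assertion",
    "assertedBy": {"@type": "Software", "title": "axe-core"},
    "subject": {"@type": "WebPage", "source": "https://example.com/"},
    "test": "image-alt",  # the rule that was run (illustrative ID)
    "result": {"@type": "TestResult", "outcome": "earl:failed"},
}

print(json.dumps(assertion, indent=2))
```

Because the format is machine readable, results from different tools can be compared, stored, and re-checked over time.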
I can see an open source reporting tool that allows various ACT-compliant automated accessibility tools to submit their ACT rule results. These issues could then be compiled to help organize an effective course of action.
- A developer could be using axe-core in their continuous integration testing and report errors to a common database with every commit.
- The site could be crawled using something like axe-crawler and automatically submit errors to that same database.
- A project manager could export reports using Accessibility Insights that could be imported easily into that same database.
- Finally an accessibility expert could review these issues manually, and add to them based on whatever testing they choose.
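A sketch of how results from those different sources might collapse into one shared database — the key scheme and function names here are hypothetical, purely to show the idea:

```python
from hashlib import sha1

def issue_key(url: str, rule_id: str, target: str) -> str:
    """Stable identifier so the same barrier reported by different
    sources lands on one record instead of several. Hypothetical scheme."""
    return sha1(f"{url}|{rule_id}|{target}".encode()).hexdigest()[:12]

def merge(db: dict, source: str, url: str, rule_id: str, target: str) -> None:
    """Record a finding, tracking every source that observed it."""
    record = db.setdefault(issue_key(url, rule_id, target),
                           {"url": url, "rule": rule_id,
                            "target": target, "sources": set()})
    record["sources"].add(source)

# The same error seen by CI and by the crawler becomes one issue,
# not two; a manual finding on another element is kept separate.
db = {}
merge(db, "ci", "https://example.com/", "image-alt", "#logo")
merge(db, "crawler", "https://example.com/", "image-alt", "#logo")
merge(db, "manual-review", "https://example.com/", "label", "#search")
```

The deduplication matters: without a stable key, four reporting channels would produce four copies of every issue.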
There would be a few advantages to this. An automated system should be able to:
- Be very standardized, so that tests which are not impactful can be eliminated
- Be smart enough to pick up common header/footer issues that appear throughout the site
- Crawl the site nightly, with new accessibility errors automatically added and resolved ones removed
- Timestamp each test to record when a problem existed, should there be trouble replicating it
- Act like an issue queue when addressing issues, allowing barriers to be marked resolved once the developer feels they have been addressed
- Allow space to record best practices and example patterns
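The nightly-crawl and timestamp ideas above amount to reconciling each crawl against the issue database. A minimal sketch, with hypothetical names, of what that reconciliation could look like:

```python
from datetime import datetime, timezone

def reconcile(db: dict, tonight: set, now: str = "") -> None:
    """Fold one night's crawl results into the issue database (a sketch).
    New findings are opened with a timestamp; findings that no longer
    appear are marked resolved rather than silently deleted, so the
    report keeps a record of when each barrier existed."""
    now = now or datetime.now(timezone.utc).isoformat()
    for key in tonight:
        db.setdefault(key, {"first_seen": now, "status": "open"})
    for key, rec in db.items():
        if key not in tonight and rec["status"] == "open":
            rec.update(status="resolved", last_seen=now)

# An issue that stops appearing is closed; a new one is opened.
db = {"old-issue": {"first_seen": "2019-05-01", "status": "open"}}
reconcile(db, {"new-issue"}, now="2019-05-02")
```

Keeping resolved issues in the database, rather than deleting them, is what gives you the history needed to show how the site’s accessibility changes over time.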
It would also allow you to go beyond the W3C’s WCAG-EM Report Tool by helping evaluators know which pages should be selected as part of the Structured Sample of Web Pages. You want to make sure that you do good manual reviews of the home page, search pages, and any interactive forms. Evaluators should seek out pages that are unique and represent different functionality, and it is sometimes hard to find the pages that are buried in a large site.
Web accessibility needs to be thought of as a journey. How does your site’s accessibility change over time? What can you do to make the accessibility process as simple as possible while still giving you metrics that allow you to make evidence-based decisions? Accessibility reports should be fairly easy to replicate, and most of the content should not be based on the opinions of the evaluator.
Websites need to be understood as living systems. We need to build approaches to tracking accessibility that allow organizations to quickly identify new accessibility concerns on the site and work to address them. We need to help our users resolve barriers throughout the life of the site.
It is great to see the movement to adopt accessibility statements, but imagine if there were a structured way to collect that feedback so that it fit within an accessibility report and could be easily replicated and addressed.
Action Issue Queue
Ultimately, a developer, a designer, or an editor will be required to:
- Replicate the problem as described
- Verify the best practice to be emulated
- Make the appropriate modifications
- Test if they can still replicate the problem
Having accessibility issues (from whatever source) organized in a central issue queue that allows people to be assigned responsibility for addressing them would be key. To prioritize these issues, it is important that an organization understand the barriers presented to users and the WCAG violations that may correspond to them. Ultimately, though, the goal is to make the site meet the needs of a wide range of users and to eliminate barriers when they are discovered. The perceivable, operable, understandable, and robust framework of WCAG 2.0 is a great guide to this, but it is generally not a good way to structure an impactful accessibility report.
I don’t know that anyone has built or is building an accessibility reporting tool like the one I have described, but I think these pieces can fit together into a better reporting framework.
An Approach for Today
Until we can do this, let’s make sure we:
- Create accessibility reports that are focused on things our teams can do to remove barriers for real people.
- Take the time to reference evidence as well as best practices, instead of just WCAG Success Criteria.
- Identify duplication rather than duplicating the reporting: if the same problem appears on multiple pages, don’t detail it multiple times; highlight it in the first instance, or in a special section, to make clear that it is likely a problem with a common template.
- Give barriers that occur multiple times a higher priority, even if they are minor issues.
- Make sure there is a universal way for every unique issue to be identified. This is simple in an issue tracker; until one is available, a simple numbering scheme can work (e.g., issue 2.4.3 is the third accessibility problem mentioned in the fourth section of the second page of the site).
- Focus reports on action items, but always include a folder with all of the automated reports used to generate the high-level report.
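The duplication and prioritization points above can be sketched in a few lines. Assuming each finding records a rule and a target selector (hypothetical field names), repeated barriers bubble to the top:

```python
from collections import Counter

def prioritize(findings: list) -> list:
    """Rank barriers by how often they occur across the site (a sketch).
    A minor issue baked into a shared template outranks a one-off,
    because a single fix removes it everywhere."""
    counts = Counter((f["rule"], f["target"]) for f in findings)
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

findings = [
    {"rule": "color-contrast", "target": "footer a", "page": "/"},
    {"rule": "color-contrast", "target": "footer a", "page": "/about"},
    {"rule": "image-alt", "target": "#hero img", "page": "/"},
]
for (rule, target), n in prioritize(findings):
    print(f"{rule} on {target}: seen {n} time(s)")
```

Here the footer contrast problem on two pages is listed first, signalling a template-level fix, while the one-off image issue follows.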