AI Won’t Solve Our Accessibility Problems
There is a lot of excitement about AI (Artificial Intelligence). It shows promise in solving many problems, including some that stand in the way of greater digital inclusion. I see great potential, but also some significant limitations. We need a measured approach to machine intelligence so we can learn where algorithms can best help.
The Google Translate Problem
Let’s admit it. Google Translate is getting pretty good. However, it still isn’t as good as a professional translator. At some point, for some languages, we’ll probably get there. But I still can’t see a point where, for legal documents, we would blindly trust any machine to translate a document. I would be happy for translators to start with a machine translation as their initial draft; it should be good enough to get them at least 80% of the way there. With language, it is important that the meaning is translated, not just the words. Ideally, you have someone well versed in the subject area in both languages to ensure that the reader gets the same meaning.
Many are looking to something like Google Translate to address a sea of accessibility problems. Government agencies are looking to AI to fix their accessibility problems. If Google Translate isn’t good enough for your content, AI-inserted accessibility isn’t going to be good enough either.
Teaching the Bot
I was talking with Melissa Kargiannakis on Twitter in April. Her company, skritswap, has just secured millions in venture capital for a machine learning tool. skritswap, a plain-language generation tool, is being built in Canada. This is quite the amazing initiative, as writing in plain language is not simple. She pointed out that:
“Google Translate had the benefit of training on hundreds of years of datasets between different languages. To change content complexity is currently a very manual, artistic, and yes, author included pursuit. We want to use AI to help scale it.”
For most accessibility challenges, there isn’t this base set of data with which a machine can begin to understand how to map the information. This is a challenge for Plain Language too, but the team at skritswap are looking at this in the right way. As Melissa said:
“It isn’t about replacing [people]. We actually want to work with content creators, especially Plain Language Experts — who tend to have their own indep[endent] consulting agencies — to do more faster and with more consistency. Eventually, I see this as exactly what Luisa said — a primer”
The approach taken by Melissa is brilliant. Let’s improve the authoring experience to produce better content. We can leverage machine learning to make writing in plain language easier and more intuitive. Authors can get real-time information on how to write more clearly and consistently.
Open Source & AI
AI requires three things to work:
- huge volumes of data,
- a machine learning framework, and
- massive computational power.
Machines need to learn from examples. You need to be able to map the translation from inaccessible to accessible, and do that at scale. That mapping doesn’t exist, and the terrain keeps changing. With enough data, we could have a better understanding of both the problem and the role that AI could play.
There are good open source machine learning algorithms already. What is missing are the datasets and the computational power to clean them up. I don’t know that there is a business case yet for open AI, as the algorithms are only a small part of what makes it work. You need the data and the computational power, both of which are unlikely to be free. There will most likely always be some proprietary angle to AI.
Using Machine Intelligence Properly
In AI, “centaurs” combine a person’s intelligence with a machine’s. There are examples where this results in better outcomes than either could achieve alone. The government needs to start thinking about Intelligence Augmented (IA) rather than AI. In most organizations, the end goal of AI is to replace human labour. Canadians would be better served by investing in systems that help authors produce better content. This would more easily and reliably serve the needs of citizens.
There is lots of space to start using machine learning to make our organizations better. IA fits nicely within ATAG 2.0 AA, a framework to help authors create more accessible content. On simple tasks like alt text, IA could be used to propose a description of an image. Right now, good AI can provide a taxonomy of the items in a photo; on its own, this isn’t useful as alt text. It is more useful than nothing, but it doesn’t convey the same meaning to the user as the photo does. If that draft were presented to the author of the article, they might be more likely to ensure that better text is provided. Most authors will find it easier to edit some default alt text and explain the meaning behind the selected image.
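As a minimal sketch of this IA workflow, assume an image classifier has produced a taxonomy of labels. A small helper (hypothetical; `draft_alt_text` and `review_alt_text` are names invented for illustration, not a real API) could join those labels into a default description that pre-fills the alt text field, which the author then refines:

```python
# Hypothetical sketch: turning a classifier's taxonomy of labels into a
# draft alt text that the author is prompted to edit. The label list and
# both helpers are illustrative assumptions, not any real tool's API.

def draft_alt_text(labels):
    """Join classifier labels into a rough draft description."""
    if not labels:
        return ""
    if len(labels) == 1:
        return f"Image containing {labels[0]}"
    return "Image containing " + ", ".join(labels[:-1]) + f" and {labels[-1]}"

def review_alt_text(labels, author_edit=None):
    """Present the machine draft; keep the author's edit when provided."""
    draft = draft_alt_text(labels)
    # In a real authoring tool, the draft would pre-fill the alt text
    # field so the author only has to refine it, not start from scratch.
    return author_edit if author_edit else draft

labels = ["a dog", "a frisbee", "a park"]
print(review_alt_text(labels))
# → Image containing a dog, a frisbee and a park

# The author replaces the bare taxonomy with the image's actual meaning:
print(review_alt_text(labels, "A border collie leaps to catch a frisbee in the park"))
# → A border collie leaps to catch a frisbee in the park
```

The design point is that the machine output is a starting draft, never the final text; the author's edit always wins.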
Authors to Users
The whole point of good communications is being able to convey an idea from one person to another. Accessibility isn’t a checkbox, but a journey. Done right, it helps authors think about how they can improve their content to improve their readers’ understanding. The workflow that governments adopt to produce their content will always have a huge impact on its accessibility.
There are certain things where automation can dramatically improve accessibility with little oversight. Yet, unless you are 100% confident that the meaning of the author’s work will not be modified, the author needs to review it.
To fix accessibility problems you either need to improve the presentation layer (WCAG) or the authoring layer (ATAG). We have to stop thinking we can insert an inaccessible PDF into a magical process that will produce something that meets WCAG 2.0 AA.
I know that governments send off a lot of PDFs to external vendors to be made accessible. Different vendors will do this job better or worse. Unless evaluation is built into the workflow, you’ll never know whether what they give you actually meets the WCAG 2.0 AA requirements.
Originally published at openconcept.ca.