How AI Can Influence Accessibility

Thoughts on establishing best practices in AI ethics and creating an inclusive digital future

Min Xiong
LexisNexis Design
32 min read · Jun 24, 2020


Recently, I was involved in several “Human Rights by Design” discussions with fellow UX designers and product managers. One of the most fascinating topics was Artificial Intelligence (AI) ethics, with a focus on accessibility, universal design and digital inclusion. The conversations were informative, engaging and quite thought-provoking, so I decided to probe deeper and share my thoughts and findings on how to establish best practices in AI ethics with regard to digital accessibility and usability.

As society enters a new chapter, also known as the Fourth Industrial Revolution, AI is seen as rapidly changing the accessibility world. Some believe it makes the web more accessible; some remain sceptical of these AI algorithms and technological evolutions. Whatever your view, AI products are already changing the way we interact, learn and communicate.

The impact of AI on Accessibility: Illustration by LexisNexis Content UX Team

In order to explore the ideas of how to create an inclusive digital future, it is vital to understand the technologies under AI’s umbrella and how they are related to accessibility. This becomes the first part of my research.

Part I: Terminologies and How AI Technologies Relate to Accessibility

The terminologies in AI are interconnected and interdependent, and people often confuse them. Let me give two examples:

a. People ask whether data visualisation should be taken into consideration when discussing accessibility best practices for AI products.

b. People ask whether a product that is advertised as AI but is technically a machine learning enhancement still needs to be studied as part of an AI ethical framework.

The answer is yes to both questions.

1.1 Definitions

Here is a brief explanation of the key terms. I have picked only a few that are essential to this article. For in-depth information or case studies, please check out the Elsevier AI Resource Center or other relevant scientific research.

  1. Machine Learning (ML) is a subset of AI that trains a machine on how to learn and what to learn. It can provide decisions and make predictions through trained neural networks.
  2. A Neural Network is a series of algorithms, inspired by the human brain, that sits behind machine learning outcomes.
  3. Deep Learning is a subset of machine learning, where neural networks are trained with large amounts of data so computers can perform tasks like speech recognition and image auto-tagging.
  4. Natural Language Processing (NLP) is a subset of AI that uses machine learning to derive meanings from human languages so computers can handle tasks like auto-captioning and auto-translating.
  5. Data Visualisation is used to display machine learning results. It is the quickest way to visually summarise information so that trends, patterns and relationships can be easily digested and understood.
  6. Artificial Intelligence is a broad science of mimicking human abilities, and a collection of technologies that extract insights and patterns from astronomical amounts of data.

1.2 How AI Technologies can benefit the Accessibility Field

Many of today’s most talked-about AI-powered technologies have been driving digital inclusion. Some are developed specifically with disabled persons in mind; some are created to improve digital inclusion in general. These include, but are not limited to:

Features

  • Voice control of computers and mobile phones for persons with mobility and physical impairments
  • Automatically translating and captioning content for persons who are deaf or hard of hearing
  • Recognising images and adding alt text for persons who need screen reader support
  • Shortening and summarising articles for persons with reading difficulties
  • Delivering facial recognition to tackle the authentication challenges for persons who find it hard to manage various passwords

Tools

  • Machine learning can improve automated accessibility testing, help product owners manage their accessibility status better and eventually make the content more accessible. For instance, Deque Systems, Inc. has refined automated accessibility testing by leveraging Machine Learning technology in its axe beta (formerly axe Pro beta).
  • What’s more, AI can establish a solid foundation for future accessibility improvement. As an example, Microsoft® launched an AI for Accessibility grant program with the aim of encouraging developers to use Microsoft’s AI tools to create products which can help persons with disabilities.

1.3 The Fourth Industrial Revolution and an Inclusive Attitude

Apart from these well-known positive uses of AI developments, there are boundless possibilities the technologies could bring in the future. The Fourth Industrial Revolution encompasses a number of technologies that apply to the physical, digital and biological worlds, and AI is its key driver.

As more and more AI technologies get integrated into digital product offerings, organisations will inevitably encounter debates on AI ethics and accessibility. In particular, this will be the focal point for companies using machine learning in customer-facing programs or with business interests in the public sector. For instance, parents might ask schools how they can ensure that students with disabilities can access AI-powered virtual learning environments.

Looking back at history, the Industrial Revolutions have brought many benefits to humanity over time, but of course, it wasn’t all smooth sailing. The transitions also brought about a great deal of disruption. The key to getting the best out of technology is to handle it with an inclusive attitude, caution and consideration for those affected. This leads to the second part of my research: what is the current status of AI ethics discussions, and which design standards are best suited to ensure AI ethics in the accessibility area?

Part II: AI Ethics and Human Rights by Design

The way I understand AI ethics is that they are part of the moral behaviour around the creation and use of emerging technologies. They mean people first, society first, and product owners being accountable for the AI products they release. By all means, AI ethics are not designed to slow down the rate of innovation; rather, they exist to make humans more responsible for the development of AI products and to transform our lives for the better.

I have been working in the digital content field for over 10 years and have always been passionate about content accessibility and usability. My view on AI-powered digital content is very simple: if the content can be accessed by persons without disabilities, then it should be accessible to persons with disabilities.

2.1 Discussion of Ethical Considerations on AI and Accessibility

There is a range of views about AI ethics and accessibility, along with a large number of principles, high-level statements and position papers. Almost all technology giants openly share their thoughts and how they plan to make smart and ethical decisions. To cite an example, here is Microsoft’s general outlook on how AI benefits persons with disabilities. They have identified the ethical considerations and placed emphasis on the importance of educating future generations in an inclusive design approach.

“AI technologies offer great promise for people with disabilities by removing access barriers and enhancing users’ capabilities. However, ethical considerations must carefully guide the development, deployment, and discussion around such technologies. These considerations regarding inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability apply to all users, but are particularly nuanced and salient when considering the large potential benefits and large potential risks of AI systems for people with disabilities. While legal regulation may address some of these considerations in the future, it is unlikely to keep pace with the changing technological landscape. Educating our next generation of innovators is of paramount importance; emerging ethics curricula for computer science students should include content such as sociological concepts from the field of disability studies and inclusive design approaches from HCI.” (AI and Accessibility: A Discussion of Ethical Considerations, Microsoft)

I support Microsoft’s perspective on design thinking: inclusive design really is the key to ensuring that AI technologies make the digital world more accessible. Human rights can be advanced through the design process, and the concept of “Design for All” (also known as “Inclusive Design”, “Universal Design” or “Human Rights by Design”) encourages accessibility for all people, to the greatest extent possible, tailored to human diversity and inclusion.

Having a good understanding of these design principles can help everyone listen, learn and adjust their individual lenses. This in turn will bring a holistic approach to diversity into the product design and development process. It also helps all stakeholders follow the same principles while working with each other.

It is no secret that many organisations are currently developing machine learning algorithms to improve personalised user experiences. This aligns with the ultimate goal of UX designers, whose role is to be the voice of the users and whose mission is to test and incorporate product feedback, including feedback from persons with disabilities, to ensure an optimal user experience for all.

2.2 AI Ethics Guidelines on Accessibility and Universal Design

I believe that accessibility in AI is part of user experience, and all UX designers should be trained and instructed to ensure that the products and services are designed and developed to be accessible to all users.

However, encouraging design for human diversity also requires frequent reviews and input from many sources. It would be an unattainable task for a UX designer to tackle all accessibility-related issues in the AI field without collaboration and cooperation with all internal and external stakeholders.

For this reason, in my view, it is critical to understand AI ethics guidelines on accessibility, because accessibility in AI is fundamentally an ethical responsibility shared by government bodies, business owners and citizens. There is a need to have a shared framework for everyone to follow.

It is worth pointing out that governments can serve as a channel for “Human Rights by Design”, and they can help define and measure what is acceptable.

Many governments are putting forward guidelines on trustworthy AI, emphasising that AI products must be lawful (respecting all applicable laws and regulations), ethical (respecting ethical principles) and robust (both from a technical perspective and taking into account the social environment). To cite an example, here is the guideline from the EU.

“Accessibility and universal design. Particularly in business-to-consumer domains, systems should be user-centric and designed in a way that allows all people to use AI products or services, regardless of their age, gender, abilities or characteristics. Accessibility to this technology for persons with disabilities, which are present in all societal groups, is of particular importance. AI systems should not have a one-size-fits-all approach and should consider Universal Design principles addressing the widest possible range of users, following relevant accessibility standards. This will enable equitable access and active participation of all people in existing and emerging computer-mediated human activities and with regard to assistive technologies.” (EU Ethics Guidelines for Trustworthy AI)

It is important to note that people should be encouraged to share their concerns and provide feedback about AI ethics to government bodies. Many companies that release AI products have internal high-level guidelines and design principles to follow, but implementing them in concrete, tangible ways will take time. Collaboration between governments, companies and citizens is needed to shape our world for the public good.

2.3 Universal Design, Inclusive Design, Accessible Design — Design for ALL

To pursue the opportunities while mitigating the risks, I believe that, as with other digital products, accessibility best practices and the principles of human-centred design must be followed and integrated as part of the AI development process. For us designers, designing a product is not really about expressing ourselves or following our product owners’ direction; design means leveraging best practices and envisioning our products in the context of their users and their environments.

Under the concept of “Human Rights by Design”, the Universal Design principles are the most important ones, as highlighted in the EU guidelines. Accessibility and usability in the sense of Universal Design refer to the design of inclusive environments, covering everything people need to access: products, devices, services, digital content and physical spaces. In other words, the principles apply in both the physical and the virtual world.

To give an example of a real-world application, AI-equipped technology can be used in customer-facing retail environments to provide different in-store experiences. For instance:

  • High-end fashion retailers could install high-tech smart mirror technology, which combines voice and facial recognition when communicating with customers. As a designer, universal design means that the smart mirror needs to be installed properly for all users (for example, wheelchair access) and the technologies used also need to take all user types into consideration (for example, a customer who is deaf would not be able to use the voice control feature).
  • Supermarkets could have autonomous robots guide customers around the aisles, providing essential inventory information and answering questions. As a designer, universal design means that the robot should be equipped with the ability to communicate with people in various ways. It needs to be built with a reasonable size so it can navigate around the store.
Artificial intelligence could let robots do more complicated jobs in the future, such as holding a full conversation with this little girl (Credit: Unsplash Free Images)

In relation to the digital world, universal design, inclusive design and accessible design are used interchangeably. They all focus on increasing the accessibility of interactive systems (websites, browsers, tools, and many other digital products) and share a similar design thinking process. But among them, universal design is the term preferred by lawmakers.

The following 7 Principles of Universal Design were conceived by the Center for Universal Design at North Carolina State University, USA, led by the late Ronald Mace.

7 Universal Design Principles: Illustration by LexisNexis Content UX team
  1. Equitable Use — The design is useful and marketable to people with diverse abilities.
  2. Flexibility in Use — The design accommodates a wide range of individual preferences and abilities.
  3. Simple and Intuitive Use — Use of the design is easy to understand, regardless of the user’s experience, knowledge, language skills, or current concentration level.
  4. Perceptible Information — The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities.
  5. Tolerance for Error — The design minimizes hazards and the adverse consequences of accidental or unintended actions.
  6. Low Physical Effort — The design can be used efficiently and comfortably and with a minimum of fatigue.
  7. Size and Space for Approach and Use — Appropriate size and space is provided for approach, reach, manipulation, and use regardless of user’s body size, posture, or mobility.

Keep in mind that, if we review Universal Design at the micro level, designers are not urged to find one design solution for all. Rather, designers are expected to explore solutions which are more inclusive: for example, providing alternative access to an online drag-and-drop exercise that a blind user cannot operate. Once the most severe accessibility issues like this are resolved, designers can step back from individual features, move to a macro-level perspective, and look at the product as a whole.

2.4 AI Ethics and Accessible Future

There are undoubtedly ethics-related questions about AI, such as job changes, bias and privacy. While evaluating these potential adverse impacts, we should always be open-minded about the fact that AI technologies have unlimited possible future uses to advance accessibility. Their responsible implementation is in our hands.

My view on the subject is that innovative technologies can provide new ways to deliver products and services in the future. They can also improve and increase accessibility for people who want equal access. With future 5G connectivity, more and more AI-powered services will become faster, cheaper and more reliable.

This leads to the third part of my research: what sort of AI features or tools are worth developing further, how can we leverage these technologies to benefit persons with disabilities, and how can we ensure AI products are perceivable, operable, understandable and robust for all users?

Part III: How to ensure AI-powered Digital Products are accessible to all users

To ensure AI-powered products and their content are accessible to everyone, be it data analytics, a chatbot or an AI-based marketing tool, the first place I would reference is the Web Content Accessibility Guidelines (WCAG) 2.1 international standard. WCAG may not be perfect, and it does not include every single disability or cover every single new technology, but it is the most comprehensive and most broadly adopted accessibility standard internationally.

WCAG Guidelines (Credit: Canva Free Images)

The best practices on the W3C website are a great resource for everyone. They overlap with many web development best practices, such as mobile web design, device independence, multi-modal interaction, usability and design for older users. They also help companies polish their brand image and expand their customer base. I expect that WCAG will gradually cover additional AI products in future updates; in fact, WCAG 2.2 is coming soon.

Secondly, I would like to share my views on some AI products with regard to best practices and guidelines on universal accessibility. This is not an exhaustive list, but it contains some useful and practical information on the subject. I hope that it can give you some food for thought and support you in your decision-making process.

3.1 The Role of AI in Automated Accessibility Testing and Remediation

There is an unquestionable need to integrate accessibility into the development process. While working on a product, one of the most frequently asked questions from stakeholders is about automation and process improvement. It has been nearly everyone’s dream to be able to auto-test and auto-remediate all accessibility defects.

Can we use AI to unlock full accessibility compliance? Let’s discuss.

3.1.1 Machine Learning enhanced Accessibility Scanners

Leveraging Accessibility Insights to automate checks

“Easy, fast, cheap, and it can detect problems early on”: this is the general comment people give about automated accessibility testing. Accessibility scanners have been around for a while, and recently machine learning has drastically improved their features, making them much more user-friendly, efficient and accurate. The technology can scan a large number of web pages and documents and identify many defects without manual testing.

It is worth mentioning that automated testing means using an automated tool to execute test cases; it is a technique used to compare actual outcomes with expected results. Manual testing is performed by a human who sits in front of a machine and carefully executes the test steps. To conduct a comprehensive and rigorous accessibility audit, the most common approach is to combine both methods.

The browser extension is the simplest form of automated testing. All we need to do is install the add-on and click a button. Once the scan is completed, each tool generates a report that flags accessibility issues against the WCAG 2.1 standard. The report explains why the issues have to be resolved and provides guidance on how to fix them. The most widely recognised tools in the accessibility testing area are Microsoft Accessibility Insights, Google Lighthouse, Deque axe and WebAIM WAVE.
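
To illustrate what these tools do under the hood, here is a minimal sketch of a programmatic scan using the open-source axe-core engine, which powers several of the products above. The rule tags and result shape follow axe-core’s documented API; treat this as a sketch rather than a production setup.

```typescript
// Minimal sketch: running an axe-core scan against the current page.
import axe from "axe-core"; // assumes a bundler with esModuleInterop

async function scanPage(): Promise<void> {
  // Run only the WCAG 2.0/2.1 A and AA rule sets.
  const results = await axe.run(document, {
    runOnly: { type: "tag", values: ["wcag2a", "wcag2aa", "wcag21aa"] },
  });

  // Each violation names the failed rule, its impact and the offending nodes.
  for (const violation of results.violations) {
    console.log(`${violation.impact}: ${violation.id} - ${violation.help}`);
    violation.nodes.forEach((node) => console.log("  at", node.target.join(" ")));
  }
}

scanPage();
```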

Leveraging WAVE to auto-check

The second type of tool uses APIs from the command line, which requires customisation for an organisation’s development environment. The benefit of implementing an API is that it can cover as many projects as the company needs and can be integrated with other QA tools such as Selenium, Cypress, WebdriverIO, Protractor, Tenon and Appium. Organisations can also create their own bespoke automated tools by leveraging APIs or software components; Deque publishes good documentation for the axe API.
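
As a sketch of what such an integration can look like, the snippet below wires an axe scan into a Jest unit test via the community jest-axe package. The markup is a stand-in for whatever your components render; the exact setup will depend on your framework and test runner.

```typescript
// Sketch: failing a Jest test when axe detects accessibility violations.
import { axe, toHaveNoViolations } from "jest-axe";

expect.extend(toHaveNoViolations);

test("signup form has no detectable accessibility violations", async () => {
  // In a real project this markup would come from rendering your component.
  document.body.innerHTML = `
    <form>
      <label for="email">Email</label>
      <input id="email" type="email" />
      <button type="submit">Sign up</button>
    </form>`;

  expect(await axe(document.body)).toHaveNoViolations();
});
```

Run in a CI pipeline, a test like this stops an inaccessible change from being merged in the first place, which is cheaper than remediating after release.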

The third type of tool is designed for producing accessible documents. It can be part of an authoring tool, like the Microsoft Accessibility Checker or the Adobe Accessibility Checker, and it automatically checks and remediates issues before users publish or send out their files. The tool is very simple to use: just click the checker in the tools panel and follow the instructions.

Leveraging Adobe Acrobat to ensure the file is accessible

Another useful tool for creating accessible files is Adobe InDesign, which supports accessible cross-media publications. For instance, to create an end-of-year report with numerous layouts and images, you can use InDesign’s accessibility features to ensure all elements are tagged before exporting the file to PDF, HTML or ePub3 (eBooks). The tool is not hard to use, but it requires some learning and practice.

Here is a quick demo:

Step 1: Open a blank page with an image in InDesign and add alt text using the Object Export Options panel.

Step 2: Export it in ePub format, then inspect the element to see whether it is successfully tagged.

In addition to manual remediation, you can ask a third party to help if there is a significant number of documents. For instance, SensusAccess offers a machine-learning-enhanced auto-conversion tool for documents: upload files of your choice, and the tool will convert them into formats like accessible MP3 audio, DAISY full text and audio, Braille, and e-book. The company works closely with various universities.

3.1.2 AI-powered Automated Accessibility Remediation Tools

This type of technology creates custom JavaScript overlays or tool-based overlays without touching the underlying source code of the website. The most discussed products on the market are Amaze, AccessiBe, Mk-sense and Equalweb. The technology can add missing alternative text for an image, add missing HTML attributes, and fix issues with icons and buttons, roles and landmarks. It automatically remediates issues on a website to comply with WCAG success criteria.
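
To make the idea concrete, here is a heavily simplified sketch of the kind of runtime patching an overlay script performs. The selectors and fixes are illustrative only; real products are far more sophisticated and data-driven.

```typescript
// Simplified illustration of overlay-style runtime remediation.
function applyOverlayFixes(root: Document = document): void {
  // Give undescribed images an empty alt so screen readers skip them
  // (a human-written description would, of course, be preferable).
  root.querySelectorAll<HTMLImageElement>("img:not([alt])").forEach((img) => {
    img.setAttribute("alt", "");
  });

  // Expose clickable icons as buttons with an accessible name.
  root.querySelectorAll<HTMLElement>("span.icon[onclick]").forEach((icon) => {
    icon.setAttribute("role", "button");
    icon.setAttribute("tabindex", "0");
    if (!icon.getAttribute("aria-label")) {
      icon.setAttribute("aria-label", icon.dataset.label ?? "button");
    }
  });

  // Mark the main content region if the page never declared one.
  if (!root.querySelector("main, [role='main']")) {
    root.querySelector("#content")?.setAttribute("role", "main");
  }
}

applyOverlayFixes();
```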

These are great tools for product owners and developers to understand the impact of code changes on accessibility and how products can be improved by enhancing accessibility and empowering all users. In addition, the technology provides a short-term solution for organisations which are under pressure to fix accessibility issues overnight but have no development teams immediately available.

The reason why I believe the solution is short-term, at this time, is that it won’t remediate anything complex. For example, meeting the WCAG requirement of Guideline 2.2 Enough Time (provide users enough time to read and use content) would still depend on applying code changes to the app itself. A tool-based overlay also requires screen reader users to learn a new approach: they have to navigate a toolbar or get a plugin while JAWS or NVDA assists them with the instructions.

3.1.3 Image Auto-tagging using Machine Learning

This technology is for persons who need screen reader support. It leverages machine learning methods to recognise visual elements within an image, then identifies patterns and looks them up in a large database in order to make sense of the image, pinpoint its category and decide what the image is about. Once the machine has auto-tagged the image, a screen reader can read the “alt” text to users who are visually impaired.
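
As a sketch of how this fits into a web page, the snippet below sends an image to a captioning service and writes the prediction into the alt attribute. The endpoint URL, response fields and confidence threshold are all hypothetical stand-ins for whichever vision service you actually use.

```typescript
// Sketch only: the endpoint and response shape below are hypothetical
// stand-ins for a real image-captioning service.
async function autoTagImage(img: HTMLImageElement): Promise<void> {
  const response = await fetch("https://example.com/describe-image", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: img.src }),
  });
  const { caption, confidence } = await response.json();

  // Only trust high-confidence predictions; flag the rest for human review.
  if (confidence > 0.9) {
    img.alt = caption;
  } else {
    img.dataset.needsReview = "true";
  }
}
```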

It is worth noting that, for businesses which possess a huge visual database, this innovative technology has improved their overall ability to execute, pivot and scale. For instance, it has helped content owners who had never tagged their images in the past to remediate the issue fast. It has also given users enhanced search performance, as search is normally based on keywords tagged in the content.

One caveat for this method is that for images like medical graphics, scientific illustrations, legal terminologies and artworks, the content owners still have to do due diligence and take responsibility for the image descriptions; they should not rely solely on machines to fix all their “alt” text problems yet. Tagging medical images with incorrect descriptions, or without sufficient data integrity controls, is recklessly risky. For this reason, using this technology wisely, safely and responsibly seems to be the preferred approach among content editors and accessibility experts.

With data accuracy in mind, and acknowledging that the gap between a human’s and a machine’s ability to tag appropriately can lead to unintended consequences, the argument about the cost of human review will never go away. For content owners who hold millions of images, retrospective remediation can be prohibitively expensive, even impossible. Fortunately, both lawmakers and customers understand this: where remediation is deemed a disproportionate burden, owners are not forced to retrofit content that is practically impossible to fix, on the condition that alternative access is provided to users.

3.1.4 Key Takeaways on Automation and Process Improvement

The idea of utilising software to scan and check a considerable number of web pages or documents is well received by accessibility analysts, testers and developers, and I can see such automated testing tools and platforms being advanced further. One important caveat to remember: if our intent is to provide a completely accessible product, then not only do we need automated tools to discover critical issues, we also need a deeper understanding of the WCAG criteria and of real use cases.

In my opinion, automated remediation tools are fantastic and the idea of auto-fixing is fascinating. I will definitely keep an eye on the technology and see where it leads. But at this very moment, I generally recommend using it selectively and developing an additional full remediation program alongside it. I believe that the only way to build a quality website is through good web development practices. People might be able to use AI to obtain a compliance certificate, but the products still need to be built for real users.

After discussing the pros and cons of image auto-tagging, I think this area shows great potential and I am positive about the future. The current limitations do not mean auto-tagging will stop moving forward and further boosting machine learning productivity. Once the technology advances and becomes more reliable, the accuracy of image descriptions will improve as a result. Imagine uploading an image and the system automatically adding a nice description for you: “A cat is playing with two kids in the garden — the sky is blue, and the grass is green”.

A cat is playing with two kids in the garden: Illustration by LexisNexis Content UX Team

3.2 Automated Decision-Making AI Systems and Accessibility

There is no doubt that data analytics is becoming more powerful thanks to AI. Nearly all industries can benefit from AI-powered predictive systems. For example, car manufacturers can leverage AI to predict maintenance needs, doctors can leverage AI to better coordinate care plans, and students can leverage AI to improve their learning.

The products these people are using are based on AI algorithms. Some of the systems can come to a decision automatically without any human involvement. These decisions can be based on customer information or data collected previously from other sources.

While the process of gathering data may differ from product to product, as UX designers, how can we make sure that the end products are accessible to all users?

3.2.1 AI-driven Website Builders

This type of product is especially popular with small businesses or anyone validating a market opportunity. Business owners can skip the process of engaging designers and simply let the system handle the design and development. With respect to product quality, most builders include some accessibility features, such as enhanced colour palettes, keyboard navigation, screen reader support and mobile responsiveness. At this point, the most discussed AI-driven website builders are Wix ADI, Bookmark, Jimdo Dolphin and Firedrop.

How does an AI web builder work? It initially asks the user some questions about what they want to achieve. Based on the input, the system uses algorithms to search its database of layouts, styles, content, navigation options, colour palettes and so on. Then, based on the user’s preferences, it creates a unique website with options to customise.
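
The matching step can be pictured as a simple scoring exercise. The toy sketch below is my own illustration of the idea, not any vendor’s actual algorithm, which will be far more elaborate.

```typescript
// Toy illustration of preference-driven template selection.
interface Template {
  name: string;
  tags: string[]; // e.g. "portfolio", "minimal", "bold-colours"
}

function pickTemplate(answers: string[], catalogue: Template[]): Template {
  // Score each template by how many of the user's stated preferences it matches.
  const scored = catalogue.map((t) => ({
    template: t,
    score: t.tags.filter((tag) => answers.includes(tag)).length,
  }));
  scored.sort((a, b) => b.score - a.score);
  return scored[0].template;
}

const choice = pickTemplate(
  ["portfolio", "minimal"],
  [
    { name: "Studio", tags: ["portfolio", "minimal"] },
    { name: "Shopfront", tags: ["e-commerce", "bold-colours"] },
  ]
);
console.log(choice.name); // "Studio"
```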

To make the website as accessible as possible, the user or owner still needs to follow best practices while adding content and other digital assets: for example, adding a meaningful image description for a photo, entering a description in the alternative text field for any equation elements, editing table properties and entering caption text, and making sure the photos being uploaded have enough contrast.

On top of that, prior to setting up the site, the product owner ought to do some research and select the best tool for accessibility purposes. There isn’t a specific standard for AI-powered website builders, but the owner can simply reap the benefits of the Authoring Tool Accessibility Guidelines (ATAG) to help with the evaluation process. ATAG 2.0 provides guidelines for designing web content authoring tools that are accessible to authors with disabilities and that promote the production of more accessible web content by all authors.

3.2.2 AI-powered Data Analytics and Data Visualisation

As I mentioned earlier, data visualisation is the quickest way to visually summarise information so that trends, patterns and relationships can be easily digested and understood. However, data visualisation (bars, charts, diagrams, maps and interactive infographics) is also one of the main accessibility challenges in presenting AI-powered data analysis outcomes, or in explaining how a decision-making system works.

Universal Design means that everyone can access the same product without needing a specially adapted design. To put it simply, ensuring data visualisation is accessible to everyone goes well beyond colour contrast and colour blindness considerations; it means ensuring that even screen reader users can read the graphics.

For a simple image, providing a succinct and informative text description is usually sufficient. But for complex graphics this is not enough: products must provide the information the visualisation conveys, such as what values are presented, the categories of data being shown, and the relationships between the categories.
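
One common pattern is to pair the graphic with a structured text equivalent, for instance a visually hidden data table that screen readers can traverse. Here is a minimal sketch; the “sr-only” class is the conventional visually-hidden utility, which you would define in your own stylesheet.

```typescript
// Minimal sketch: pair a chart with a visually hidden data table so screen
// reader users get the values, categories and relationships.
function addAccessibleTable(
  chartEl: HTMLElement,
  caption: string,
  rows: Array<{ category: string; value: number }>
): void {
  const table = document.createElement("table");
  table.className = "sr-only"; // visually hidden, still read by screen readers
  table.innerHTML =
    `<caption>${caption}</caption>` +
    `<tr><th scope="col">Category</th><th scope="col">Value</th></tr>` +
    rows
      .map((r) => `<tr><th scope="row">${r.category}</th><td>${r.value}</td></tr>`)
      .join("");

  // Hide the purely visual chart from assistive technology and attach the table.
  chartEl.setAttribute("aria-hidden", "true");
  chartEl.insertAdjacentElement("afterend", table);
}
```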

A blind man using braille and a screen reader (Credit: Unsplash Free Images)

Making data visualisation readable by screen reader technology is time-consuming even for the most experienced developers, but no one should skip the process and exclude users who cannot see the graphics. If there aren’t any development resources available, or if there is a strict deadline for releasing the product, a practical alternative is to invite a third-party supplier to assist.

An exemplary company making accessible graphics is Highcharts. The product is free to use for personal projects, school websites and non-profit organisations. The company has published some best practices on making accessible charts on its blog, including how to use SVG pictures, add text descriptions, duplicate data from charts into tables, apply shades of the same colour, and implement the Highcharts Accessibility API.

Every Highcharts license includes the Accessibility module, which contains many exciting features. After including the exporting and export-data modules, users can tab through a chart with the keyboard, have it read by screen readers, view the chart as a data table, interact with the chart controls using voice commands, or download an SVG version of the chart and turn it into a tactile graphic using embossing printers.
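
As a sketch, enabling these features is largely a matter of loading the modules and describing the chart. The exact module import style varies by Highcharts version, so check their documentation; the chart data here is invented for illustration.

```typescript
// Sketch: enabling Highcharts' accessibility and export features.
// The module import style varies by Highcharts version; check the docs.
import Highcharts from "highcharts";
import "highcharts/modules/accessibility";
import "highcharts/modules/exporting";
import "highcharts/modules/export-data";

Highcharts.chart("container", {
  title: { text: "Monthly visitors" },
  accessibility: {
    // A plain-language summary exposed to screen readers.
    description: "Visitors grew steadily from January to June.",
  },
  xAxis: { categories: ["Jan", "Feb", "Mar", "Apr", "May", "Jun"] },
  series: [
    { type: "line", name: "Visitors", data: [120, 150, 180, 210, 260, 300] },
  ],
});
```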

3.2.3 Challenges and Benefits of Decision-Making AI systems

Ensuring decision-intelligence products are accessible to persons with disabilities requires thorough planning and dedicated resources. There must be a meaningful conversation between all stakeholders. An experienced UX designer should be able to take all factors into consideration, bridge the gap and merge the knowledge from all sides to define the best solution.

Apart from implementing the products carefully and adhering to best practices, it is worth being aware that automated decision-making based on AI could potentially discriminate against persons with disabilities. Here is some useful information from the European Disability Forum.

“If an algorithm making a decision on the price of insurance policy discriminates against persons with disabilities, they may end up paying more for insurance or be denied cover. There are similar potential risks of discrimination in a wide range of areas: automated screening for recruitment, financial services and so on. While this may be unintentional, AI and other emerging technologies systems are likely to reinforce already pervasive exclusion of persons with disabilities, encouraging misrepresentation of persons with disabilities or other characteristics such as race, age, gender, sexual orientation, religion and so on.”

On the other hand, every debate has two sides, and it is important to know that these systems also provide great benefits to all users regardless of their personal circumstances. Here is an interesting perspective on automated decision-making from the Information Commissioner’s Office.

“Profiling and automated decision making can be very useful for organisations and also benefit individuals in many sectors, including healthcare, education, financial services and marketing. They can lead to quicker and more consistent decisions, particularly in cases where a very large volume of data needs to be analysed and decisions made very quickly.”

3.3 Natural Language Processing AI and its Impact on Communication

Historically, computers have been able to process and manipulate all sorts of data; nonetheless, when it comes to language comprehension and word recognition, it is a very different world. To get a machine to learn, understand and process real-world languages, it has to be trained, and the training process is complicated. It is not simply a matter of having machines record a huge vocabulary. Instead, machines are required to understand the syntax, semantics, pragmatics, discourse and meaning behind those words (the cognitive aspect of language), and they are instructed to collect unstructured data and leverage algorithms to find patterns.

To understand how this might impact accessibility, two important terms need to be distinguished: Natural Language Processing (NLP) and Natural Language Understanding (NLU). The latter is vital to the success of NLP: it gets the machine to comprehend what a piece of text really means and is viewed as the first step towards NLP. To classify unstructured data, NLU is developed to identify the intended semantics among the multiple possible semantics and label them accordingly. It is also widely used to perform tasks like syntax analysis of grammatically correct typed sentences.

As I mentioned earlier, NLP is a subset of AI that uses machine learning to derive meaning from human languages so computers can automatically handle tasks which involve natural human language, like speech and text. NLP works closely with speech recognition and text recognition engines: while speech or text recognition is applied to enter information, NLP is used to understand the data and leverage the information to perform tasks. Chatbots, auto-captioning (voice recognition), Optical Character Recognition (text recognition), auto-translating (machine translation) and writing enhancements are the most discussed applications in relation to accessibility.
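
As a toy illustration of that pattern-finding step, the sketch below classifies a user utterance against labelled example phrases using nothing more than keyword overlap. Real NLP systems use trained models rather than hand-written rules; only the input/output shape is comparable.

```typescript
// Toy illustration of intent classification over unstructured text.
// Real NLP systems use trained models; this only shows the input/output shape.
const examples: Record<string, string[]> = {
  reset_password: ["forgot my password", "reset password", "cannot log in"],
  opening_hours: ["when are you open", "opening hours", "what time do you close"],
};

function classify(utterance: string): string {
  const words = new Set(utterance.toLowerCase().split(/\W+/));
  let best = "unknown";
  let bestScore = 0;
  for (const [intent, phrases] of Object.entries(examples)) {
    // Count how many words from the labelled phrases appear in the utterance.
    const score = phrases
      .flatMap((p) => p.split(/\W+/))
      .filter((w) => words.has(w)).length;
    if (score > bestScore) {
      best = intent;
      bestScore = score;
    }
  }
  return best;
}

console.log(classify("I forgot my password again")); // "reset_password"
```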

3.3.1 Chatbots

A chatbot is an interactive piece of software, a conversational agent, that communicates with a user through a chat window. It imitates human conversation: voice chat, text chat or both. Currently, AI chatbots are revolutionising the customer service industry; they are available 24/7 and have the ability to intelligently perform tasks and solve problems without human intervention. Since more and more customers are using chatbots to receive the support they need, making them accessible is imperative, not optional.

To ensure that chatbots are accessible to all users and comply with accessibility guidelines, seven key criteria should be taken into consideration: keyboard navigation, skip to main content, chatbot landmarks, browsing with a screen magnifier, orientation, reflow, and optimal visual design standards (font size, line spacing, word spacing, colour contrast and colour blindness). Chatbots can be accessible if we humans wish them to be; for example, Astute announced that its self-service chatbot is digitally accessible to consumers with disabilities. A sketch of how a couple of these criteria translate into markup follows below.

A LexisNexis Chatbot to support employees
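
Here is that sketch: a small example of how a chat window can expose a landmark and announce new messages to assistive technology. The element names and structure are illustrative, not taken from any particular product.

```typescript
// Sketch: exposing a chat window to assistive technology.
// Element IDs and structure here are illustrative.
const chat = document.createElement("section");
chat.setAttribute("aria-label", "Support chat"); // a named landmark
chat.innerHTML = `
  <div role="log" aria-live="polite" id="messages"></div>
  <form>
    <label for="chat-input">Type your message</label>
    <input id="chat-input" type="text" autocomplete="off" />
    <button type="submit">Send</button>
  </form>`;
document.body.append(chat);

// New bot replies land in the live region, so screen readers announce
// them without stealing keyboard focus from the user.
function addBotReply(text: string): void {
  const p = document.createElement("p");
  p.textContent = text;
  document.getElementById("messages")!.append(p);
}

addBotReply("Hello! How can I help you today?");
```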

3.3.2 Auto-captioning

Automatic captioning technology can analyse video and audio content and then automatically transcribe the audio through natural language processing AI. Features like this help users who are not native speakers to understand the content better and process the information faster. They help users on a commuter train watch content privately without headphones, and they assist users who are hard of hearing or deaf to communicate better. In short, they provide convenience to persons both with and without disabilities.

For digital products with video and audio content, this is a feature which everyone wants and product owners need to implement. In fact, it is part of the WCAG requirements (Guideline 1.2: provide alternatives for time-based media). The good news is that most online tools have the feature built into the system. For example, Microsoft Translator, an AI-powered communication technology, uses an advanced form of automatic speech recognition to convert spoken language; YouTube’s voice recognition technology automates sound-effect captions with AI; the conference solution company Zoom has rolled out AI-powered transcripts and note-taking features; the latest news from Google AI is on-device captioning with Live Caption; and Microsoft Teams offers live captions during any meeting.
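
For self-hosted video, a machine-generated transcript still has to be attached to the player. With HTML5 video that is a WebVTT captions track, as in the sketch below; the file name is illustrative.

```typescript
// Sketch: attaching machine-generated captions (a WebVTT file) to a video.
// "/media/captions.en.vtt" is an illustrative file path.
const video = document.querySelector<HTMLVideoElement>("video")!;

const track = document.createElement("track");
track.kind = "captions";
track.label = "English";
track.srclang = "en";
track.src = "/media/captions.en.vtt";
track.default = true; // show the captions by default

video.append(track);
```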

To ensure this feature is being utilised, it is important to remember to turn it on.

A screenshot of a live conference with subtitles on (desktop view in Microsoft Teams)

3.3.3 Screen Readers and Optical Character Recognition (OCR)

The WCAG 2.1 guidelines do not apply only to web content; they also cover PDF documents. PDF content can contain essential information, so it needs to be accessible to persons who rely on assistive technology. Apart from using Adobe Acrobat and other remediation tools to tag each element and make the files accessible, another interesting technology for reading content is OCR. The technology is related to text recognition, AI and computer vision: it converts a scanned document, or a photo of a document, into machine-encoded text.
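
The same capability is available to developers through open-source engines. Below is a minimal sketch using Tesseract.js; the API shown follows its recent documentation, so double-check it against the release you install.

```typescript
// Sketch: recognising text in a scanned image with the open-source
// Tesseract.js engine. Verify the API against the version you install.
import { createWorker } from "tesseract.js";

async function extractText(imageUrl: string): Promise<string> {
  const worker = await createWorker("eng"); // load the English model
  const { data } = await worker.recognize(imageUrl);
  await worker.terminate();
  return data.text; // machine-encoded text recovered from the image
}

extractText("scanned-page.png").then((text) => console.log(text));
```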

The Optical Character Recognition feature was first introduced in the screen reader Job Access With Speech (JAWS) 13 in 2011. It allows the screen reader to access any images on the screen that include text, and to recognise all of the text in a PDF document. The technology initially struggled to recognise text when the quality of the document was not optimal; however, JAWS 2018 drastically improved this functionality, and the recognition now handles a variety of text styles and sizes. As a best practice, technicians should master this technology and make good use of it when serving customers with an improved digital experience.

Here is the process for using OCR with an image.

Step 1: Locate the image

Step 2: Press the JAWS Key + Space, then press O (for OCR) and F (for File)

Step 3: JAWS will display the OCR result and read the text back to users

3.3.4 Auto-translating

Auto-translation has brought a number of great benefits to linguistic accessibility. It encourages the removal of barriers to intercultural communication, facilitates international growth, and increases the potential of the media to promote information in various languages. Human translation is reliable, accurate and high quality, but it is often out of reach for most people because of the high cost and the difficulty of obtaining a human translator. Over the last 20 years, machine translation has gradually been accepted as part of the online experience.

As our society becomes more and more reliant on AI to advance user experience, machine translation is no exception. Many tech companies are leveraging AI to improve their translation accuracy. The most famous AI-powered neural machine translation system is Google’s: it has collected a large data set and built a system which is sufficiently fast and relatively accurate to provide good-quality translations for users. Compared to what we had 20 years ago, the current system represents a significant milestone.

Six months ago, Microsoft announced a live presentation feature in PowerPoint with live subtitles in more than 60 languages. I was fortunate enough to preview the feature at a conference: the presenter asked the audience to scan a QR code, which led us to a web link; the subtitles then displayed on our mobile screens while he was speaking, with a language menu available for instant translation.

3.3.5 Writing Enhancements

With respect to NLP, another personal favourite of mine is Grammarly. The feature is not subject to WCAG conformance requirements, but it is very useful and helpful to people with disabilities. The system is trained on naturally written text and guides people with and without learning difficulties through their writing process. It can help people improve spelling, punctuation, grammatical structure and logical order. It is similar to the spelling and grammar checker in Microsoft Word and Microsoft Editor, but with more control and suggestions.

A screenshot of how Grammarly for Chrome works on Medium

Grammarly’s AI learns from several databases which contain large collections of sentences that have been sorted, grouped and labelled. The machine then decides what to suggest and what to correct, drawing conclusions from common mistakes, language patterns and certain language constructions. In other words, the tool provides writing enhancements by leveraging both machine learning and natural language understanding technologies. The software works as an extension for Microsoft Edge, Chrome and Firefox, as well as an add-in for Microsoft Word and Outlook.

3.3.6 NLP AI, Today and in the Near Future

Today, NLP is everywhere. Its algorithms teach a machine to use language much as a human does. Almost any feature that involves language is based on NLP, namely machine translation, chatbots, conversational search, predictive typing, answering questions from the web and spell-checking. As these applications continue to grow and expand, humans are increasingly comfortable interacting with machines.

On the other hand, despite the success achieved in NLP applications, one pivotal caveat to note is that NLP AI is not 100% accurate yet; there are factors which still need to be taken into account, for instance different accents, senses of humour, regional dialects and language contractions. One crucial limitation that makes AI systems struggle with a deep and genuine conversation is their ability to take context into account.

As for the future, I strongly believe the technologies won’t stop here, since we’ve only scratched the surface at this point. The future of NLP is open-ended. As technology progresses, NLP will continue to revolutionise communication between humans and machines, and this area (speech recognition, natural language understanding, natural language processing and deep learning) will be improved further through iterative, incremental research and experimentation.

Part IV: Conclusion

Broadly speaking, the current consensus is that we do not yet fully understand how emerging technologies are impacting disability rights, because the technologies are still under development and could take years to become fully mature.

But here is what I think should be taken into consideration:

4.1 Positive Impacts of AI on Persons with Disabilities

AI technologies and their advancements have huge potential to make our products more accessible. For persons with disabilities, AI products can increase inclusion, independence and equal access. The features and tools I mentioned in this post are valuable to users with disabilities and make it easier for companies to achieve compliance. The technologies might not be perfect, but they are worth further investment and development. And I would like to reiterate that AI ethics are not designed to restrict competitiveness in business or slow down the rate of innovation; they are there to encourage us to make the technologies work better for us.

4.2 Constraints and Challenges with Disability-focused AI Solutions

For persons with disabilities, AI-related products have yet to be perfected, for four key reasons. The first and foremost is affordability: even though many tools are widely available, cost can be a major barrier to obtaining such technologies. Secondly, most AI solutions require internet access, and some areas simply do not have adequate internet infrastructure. Thirdly, persons with disabilities often lack the opportunity to learn what works best for them and to update their digital skills. Lastly, access to tech support can be a challenge in itself for persons with visual, hearing or motor impairments, as many companies provide only one communication method for contacting them.

4.3 Individual, Corporate, and Regulatory Responsibility for AI

To reduce the potential negative impacts of AI, such as persons with disabilities being unable to access AI products or services, working with government bodies is vital. They create ethics frameworks, regulatory oversight and legal safeguards. Those frameworks can help companies better understand the social and ethical implications, as well as protect the public from unlawful and unethical products. Some might question whether over-regulation would stymie growth. The fact is that new AI regulations are coming, and companies that do not understand AI ethics will need to come up with creative solutions to reassure the public about how they will run a profitable business without succumbing to the temptation of using AI in an unethical manner. The same applies to individual citizens.

4.4 An Accessible Future is a Human Determination

Keeping up with AI advancements and AI ethics can be overwhelming, and integrating ethics into the product development cycle can feel abstract for some. Having said that, things won’t go terribly wrong if we always adhere to best practices and use a human-centred design approach. After all, making any given piece of content, technology or user interface accessible is ultimately a human determination. AI can help improve things, but it is humans who decide whether the interaction is accessible, i.e. compatible with assistive tech, usable and user-friendly. If we all follow the 7 Universal Design principles, then we know that we are at less risk of making anyone feel excluded from the new emerging technologies.

4.5 Principles of Inclusion, Diversity and Equality

AI products should meet the needs and preferences of people with disabilities. From the outset, the technologies should include everyone regardless of race, age, gender, religion and social background. People with and without disabilities should be able to enjoy the products “together”. If various forms of diversity are part of the AI development process, all stakeholders will be empowered to release responsible products. The future of AI will be decided by human actions. Together, we can build trust through our engagements and advance technologies for the benefit of humankind.

Disclaimer

This article is based on my own research, interest and passion for the topic, and does not necessarily represent LexisNexis’s positions, strategies, or opinions.

Acknowledgements

Many thanks to David Lovell, Ted Gies and Emili Budell-Rhodes for providing excellent feedback, Harris Osiana and David Goco for sharing their experience on automated testing, and Aaron Capua for creating the lovely illustrations.


Min Xiong
LexisNexis Design

Global Head of Content UX at LexisNexis, who enjoys travelling and reading, and is passionate about inclusive design, innovation and accessibility