5 Tests To Audit And Improve The Accessibility Of Your Chatbot

Guy TONYE
Published in Voice Tech Global
Sep 7, 2021 · 7 min read


With a growing number of businesses and organizations using chatbots on their websites, we wanted to provide a framework to make the conversational AI interface accessible and inclusive.

In our courses and workshops, we focus on outlining practical steps and techniques to recognize where the experience is broken, then design solutions to make it more inclusive.

The five-test guide will help conversation designers:

  • Audit a chatbot
  • Suggest actionable solutions with the engineering team or webmaster
  • Enhance their design to make it more accessible
[Illustration: a checklist icon and the post title in a cursive font]

Test 1: You can reach the chatbot with your keyboard only

How to test

Go to the page without your mouse and try to reach the chatbot using the keyboard only. The standard approach is to use the TAB key, which moves you between the elements of the page; SHIFT+TAB moves you back to the previous element.

The first thing to evaluate as you move with the TAB key: do the elements show a focus indicator when “tabbed” to?

The focus indicator looks like an overlay rectangle around the element: a border-only rectangle that surrounds the component.

[Image: a focused button with a blue rectangle outline surrounding the element]

If and when you reach the chatbot icon, the second thing you want to assess is whether you can open the chatbot by pressing the RETURN (or ENTER) key.

Why is it a problem?

A user relying on assistive technology will never be able to reach the chatbot.

Recommendations

The developer of the website can solve this by making sure that:

  • Option 1: the chatbot icon is a <button> element in the HTML
  • Option 2: the chatbot element (if it’s not a <button>) has a tabindex attribute that will make it “focusable” and allow users to TAB to it (see the sketch below).
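
Here is a minimal sketch of both options; the class names and labels are hypothetical:

<!-- Option 1: a native button is focusable and keyboard-activatable by default -->
<button class="chat-launcher" aria-label="Open chat">Chat with us</button>

<!-- Option 2: a non-button element made focusable with tabindex="0".
     Note: the developer still needs JavaScript to handle RETURN/SPACE activation. -->
<div class="chat-launcher" role="button" tabindex="0" aria-label="Open chat">Chat with us</div>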

Test 2: After opening the chatbot, you can immediately type

How to test

When you click (or TAB then RETURN) on the chat icon, verify that the focus immediately moves to the place where you can type your text.

The focus on the text input shows as a blinking cursor in the section where the user is expected to type. It can also show the rectangle overlay surrounding the input section.

[Image: focus on the text input, with the cursor over the placeholder “Enter the bot response”]

Why is it a problem?

As you experienced in test 1, TAB-ing to an element can be tedious, especially when the page has a lot of elements. If, after the chatbot icon is activated, the focus goes to another place on the page, the user will have to TAB their way back to the chatbot input before they can talk to the bot.

Recommendations

The developer of the website can enforce the focus on the input using JavaScript.

See guidelines and examples on the Mozilla documentation.
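
A minimal sketch, assuming a hypothetical chat widget with element IDs chat-launcher and chat-input:

// Move keyboard focus to the text input as soon as the chat opens.
// The element IDs and openChatWindow() below are hypothetical placeholders.
document.getElementById('chat-launcher').addEventListener('click', () => {
  openChatWindow();
  document.getElementById('chat-input').focus();
});

Note that pressing RETURN on a focused <button> also fires its click event, so this covers both mouse and keyboard users.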

Test 3: Any prompt can be answered without looking at the screen

How to test

When interacting with the bot, you want to ensure that you can converse without relying on the graphical elements. If the bot responds with images or buttons (suggested actions), ask yourself: “If I did not have access to the images or the buttons, would I still be able to engage with the chatbot?”

For example, suppose the bot says, “Hi, Guest! I love interacting with real people. Choose an option below so we can start.” If the user isn’t able to see or reach the option buttons, the prompt doesn’t help them progress in the conversation.

[Image: the prompt from the bot relying on the screen and buttons]
[Image: the same prompt with the buttons hidden]

Why is it a problem?

As described in the first test, a user would need to TAB through all the options to carry on that conversation.

Recommendations

Making the prompt actionable, with a more concise version of the options, can help the user quickly understand what to do without needing to navigate through the buttons.

The technique of making sure that the prompt is self-sufficient is increasingly adopted in multimodal design for conversational products.
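
For example, a more self-sufficient version of the earlier greeting (a hypothetical rewrite) could be: “Hi, Guest! You can ask me to order a pizza or track an existing order. Which would you like to do?” The user can now answer by typing, without needing to see the buttons.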

Test 4: All images have an alternate text

How to test

When using images in the chat, it’s important to add alt text. Otherwise, the screen reader will say “missing description” or “unlabeled image.”

To verify if an image is screen-reader friendly you can:

  • Use a screen reader (or the built-in accessibility equivalents on Mac, Windows, and Linux), navigate to the image, and listen to what the screen-reader software announces
  • Right-click on the image and open the web developer tools by clicking “Inspect” in the contextual menu, then search the code shown in the developer tools for the img tag and an alt attribute.
[Image: the contextual menu, with “Inspect” highlighted by a pink rectangle]
[Image: the developer tools after clicking on Inspect, with the img tag highlighted]

In the snippet below, you are looking for any indication of an alt attribute in the tag:

<img src="https://s3.amazonaws.com/com.getstoryflow.api.images/1621532012819-2.png">

There is none, but if there were, it would look something like this:

<img src="https://s3.amazonaws.com/com.getstoryflow.api.images/1621532012819-2.png" alt="blurry image showing a phone">

Why is it a problem?

If you have done the test with a screen-reader solution, you must have experienced how degraded the experience can be when the image has no description.

If you have not used a screen reader, know that it will say “missing description” or “unlabeled image,” and the user misses out on any information the image was communicating.

Recommendations

The solution to this problem involves the conversation designer AND the developer. On the developer end, it’s important to ensure that all images in the conversation have an alt attribute (see the sketch after the list below).

For the conversation designer:

  • It’s important to ensure that the images augment the experience but are not essential for it. Similar to test 3, the user should be able to carry on the conversation without the image as much as possible.
  • It’s crucial to supply an alternate description for any image they use.
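
As a sketch of the developer side (the payload shape and function name here are hypothetical), the widget’s rendering code can read the designer-supplied description and fall back gracefully:

// Render a bot image message, using the description written by the
// conversation designer as the alt attribute. The payload shape is hypothetical.
function renderImageMessage(message) {
  const img = document.createElement('img');
  img.src = message.imageUrl;
  img.alt = message.altText || 'Image from the bot';
  return img;
}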

Test 5: Screen-reader friendly by design

How to test

If you are using screen-reader software (VoiceOver on Mac, NVDA on Windows, or Orca on Linux), browse the history of the conversation and verify that it is understandable. If, as you TAB through the conversation, the screen reader reads each utterance one after the other without identifying who is speaking, the experience is not ideal for the user.

If you are not using a screen reader, you can again use the developer tools: right-click on any message text, the bot’s or yours, and click Inspect.

In the code, you are looking for either:

  • an aria-label attribute on the message element, specifying “the bot said” or “the user said”
  • text saying “the bot said” or “the user said” inside an element with the class sr-only
[Image: an element with the sr-only class, highlighted by a purple rectangle in the developer tools]

For the aria-label variant, the snippet of code looks like this:

<section tabindex="0" class="bot message-bubble" aria-label="at 21:4 the bot said">Ok, Pick up, got it. Do you want that pizza right now, or would you like it at a later time?</section>

Why is it a problem?

If I showed you this transcript:

  • Hi, how are you
  • Good thank you
  • What is your ID?
  • How can I check my balance?
  • One moment please

Would you be able to tell who is the user and who is the bot?

That is the challenge that a user with assistive technology listening to the conversation history faces.

Recommendations

The developer of the website should add an sr-only class. This resource will provide guidance.

Using that sr-only class allows inserting screen-reader-only text that makes the conversation more meaningful for users who don’t have access to all the visual cues of the chatbot (see the sketch below).
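
A common implementation of the pattern looks like this; the message markup is a hypothetical example:

/* A widely used sr-only utility: visually hidden, but still announced by screen readers. */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}

<!-- Each message bubble carries hidden text announcing the speaker -->
<section class="bot message-bubble">
  <span class="sr-only">The bot said:</span>
  Do you want that pizza right now, or would you like it at a later time?
</section>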

How to use the 5 Tests guide?

The 5 tests guide can be used on any existing or soon-to-be-published chatbot. We recommend sharing the guide internally to kickstart the conversation about accessibility for your chatbot.

The 5 tests and their recommendations are a small part of our techniques for Inclusive Design in Conversational AI. If you want to learn more about our Inclusive Conversational Design Toolkit, get in touch with us at hello@voicetechglobal.com.

If you liked this story, please give us a clap 👏👏🏻👏🏽.

If you have suggestions or additional tests that we could include, feel free to share them in the comments to continue helping make the chatbot experience more accessible.
