My Information Experience Journey with Accessibility: Part 1

Bonnie Kern @ VMware
VMware Accessibility
6 min read · Feb 3, 2022

About me

Hi, I’m Bonnie Kern. I’m a Staff Technical Writer on the VMware Information Experience (IX) team. I have light brown olive skin, brown eyes, and dark brown hair with natural silver highlights.

I create documentation for VMware vRealize Automation Code Stream in the Cloud Management Business Unit. Before I started to work on Code Stream, I created documentation and videos for VMware vRealize Operations Manager and VMware vCenter Configuration Manager.

I’ve been passionate about usability for years and have been involved with it on many products. I’ve designed numerous usability tests and led many usability test sessions, both at VMware and in prior roles. I’ve run some of these tests in person, some remotely, and I used cameras to record some of them before Zoom and other online recording tools became ubiquitous!

On the IX team, we run campaigns and missions, which help us improve our content for internal VMware users and public customers. Because of my interest in Accessibility (A11Y), and my prior work on usability and the VMware IX Usability Education Mission, I led the VMware Accessibility and Inclusion mission in 2021. Since Q2 2021, we’ve made great progress, and our mission work is now part of the VMware Office of the CTO!

How I got started in Accessibility

My deep dive into accessibility started when I asked a question in our VMware Accessibility Content internal channel. In early 2021, I attended VMware Disability Week and realized that there was a lot I wanted to learn to help users. I wasn’t sure why, but I felt in my heart that I wanted to improve how I support our users through VMware documentation, particularly for vRealize Automation Code Stream, and it seemed to align with my passion for and experience in usability.

So, I started to look at where the gaps were in the accessibility of VMware Docs. I reviewed the information in our IX documentation standards, and I began to realize that I needed help learning what users who experience disabilities require to use the documentation easily.

How must our documentation comply with internal and external guidelines, and what do users who have disabilities really need? Those were two big questions that I wanted to answer!

The challenge that led me to work with the Accessibility team

When I started to examine our IX documentation standards, I learned about the Web Accessibility Evaluation Tool called WAVE. I installed the WAVE plug-in and ran it on my documentation. Right away I realized that I had some work to do!

The VMware Docs framework had multiple errors, and I didn’t know how to resolve them, or even whether I could. My documentation also had some errors, which seemed resolvable. To correct the errors in my documentation topics, I learned that I needed to add some headings and improve the alt text on some of my screenshots and diagrams. Then I wanted to learn what all the other WAVE errors meant.
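As a rough illustration of those two fixes (the file name, heading, and alt text here are hypothetical, not from my actual topics), a missing heading and a missing alt attribute might be corrected like this:

```html
<!-- Hypothetical example: a heading that gives the section structure,
     and an image with a descriptive alt attribute for screen readers. -->
<h2>Create a pipeline</h2>
<img src="create-pipeline.png"
     alt="New Pipeline dialog with fields for the pipeline name, project, and description.">
```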

Running the WAVE tool on VMware Docs displays errors in red with tooltips that describe the problem.

To get input on which errors mattered the most, I asked our VMware Accessibility team. One of our Accessibility SMEs answered my question and directed me to the key areas to look for the most important errors and details.

Since then, I’ve asked the Accessibility team other questions: whether screen readers can read the on-screen text in the UI, how my IX team and I can run a screen reader on our content (I’ve since installed a free screen reader and am learning how to use it), how much tone matters to accessibility, whether users of assistive technology expect an index in a product document, and more. The Accessibility team has been extremely helpful in their responses.

Addressing the challenge in my VMware documentation

Before I started to examine my VMware docs more closely, I’d been working with another Accessibility SME on the VMware Accessibility team on Crest, VMware’s automated accessibility testing tool, helping her test the redesign of the visual report that displays issues on web pages.

From meeting with her early on, I learned about blindness, barriers, magnification, text-to-speech, VoiceOver, low vision, hearing loss, screen readers, and the importance of having clear alt text in our docs for people who use screen readers. Everything she said resonated with me. Improving the accessibility of my documentation seemed even more important because I now had first-hand input on a big why that mattered to users with disabilities.

During the time that I was meeting with her, I was also working on the alt text for my screenshots and diagrams. I learned that our authoring tool had a 150-character limit on alt text. If those 150 characters weren’t enough to describe a screenshot or diagram, I needed to add surrounding text that conveyed the rest, to ensure that people who use screen readers had enough context to understand the content in my docs.
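That pattern might look something like the following sketch (the alt text, file name, and surrounding sentence are hypothetical): the alt attribute stays concise, under the 150-character limit, while the body text carries the fuller explanation.

```html
<!-- Hypothetical example: the alt text stays concise (under 150 characters),
     and the paragraph before the image supplies the detailed context. -->
<p>After all three stages finish, the pipeline dashboard marks each one
   Completed and shows the total run time.</p>
<img src="pipeline-dashboard.png"
     alt="Code Stream pipeline dashboard with Build, Test, and Deploy stages marked Completed.">
```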

I started attending cross-team meetings about accessibility and VMware Docs, and I was beginning to learn more about WCAG compliance. I was intrigued. How well did my content comply with the WCAG standards?

As I looked at my VMware Docs content through the eyes of a user or customer who has a disability, I was faced with an important question. Could screen readers read out loud the text in my flowcharts? The flowcharts that I included were a required component of our use case topics and described the workflows that users would follow. Flowcharts were one thing in my content that I couldn’t test yet because I hadn’t learned how to use a screen reader. So, in the true spirit of my passion for usability testing, I had to prove it.

When I posted my question about whether screen readers could read the text in a graphical flowchart if the graphic is in SVG or another format, a technical member of the Accessibility team replied that yes, they can, when specific elements are coded in.

Had I coded my content correctly? And what about the output? Could screen readers read the text on our output? I had some more testing to do, and I was beginning to feel the challenge on multiple levels!

According to the Accessibility SME, screen readers can read the text in SVGs when the text is coded in the <text> element and the SVGs are supported on output. I learned that you can provide alternative text by using the <title> element, which is equivalent to the alt attribute on the <img> element, and that if a longer description is needed, you can use the <desc> element in the SVG.
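Put together, an accessible SVG flowchart fragment might look something like this sketch (the IDs, coordinates, and labels are illustrative, not taken from my actual flowcharts):

```html
<svg role="img" aria-labelledby="flow-title flow-desc"
     viewBox="0 0 260 100" xmlns="http://www.w3.org/2000/svg">
  <!-- <title> acts like the alt attribute on <img>. -->
  <title id="flow-title">Pipeline workflow</title>
  <!-- <desc> carries the longer description when one is needed. -->
  <desc id="flow-desc">Flowchart showing a Build stage connected by an arrow
    to a Test stage.</desc>
  <rect x="10" y="30" width="90" height="40" fill="none" stroke="black"/>
  <!-- Labels in <text> elements stay readable to screen readers. -->
  <text x="55" y="55" text-anchor="middle">Build</text>
  <line x1="100" y1="50" x2="150" y2="50" stroke="black"/>
  <rect x="150" y="30" width="90" height="40" fill="none" stroke="black"/>
  <text x="195" y="55" text-anchor="middle">Test</text>
</svg>
```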

To truly understand the user experience, I needed more confirmation, and the SME offered to test the topic for me. She reviewed the images in my topic by using a screen reader and DOM (Document Object Model) inspection and confirmed that the images were rendered as inline <img> elements and that they all had alt attributes.

Yay! My content was good to go, right? No, not quite. I wondered whether our authoring tool worked exactly this way, or whether differences existed between how we code our content and what the SME described.

Stay tuned!

Written by:
Bonnie Kern, VMware Staff Technical Writer, Information Experience

Thank you to our Accessibility SMEs for their contributions in this experience:
Technical contributions by Nick Caskey, VMware Staff Accessibility Engineer
Content analysis and testing contributions by Joyce Oshita, VMware Accessibility Engineer
Mentoring and educational contributions by Santina Croniser, VMware Accessibility Engineer
December 2021


Bonnie Kern is a VMware Staff Technical Writer, usability enthusiast, and advocate for accessibility and family who loves to read, learn, paint, draw, and garden.