The State of Image Accessibility on the Web
People are always surprised to see my totally blind dad use a computer. In fact, every day he sits at the kitchen table, earphones plugged in, clicking away at his Lenovo ThinkPad T460. Accessible technology like the JAWS screen reader on computers and VoiceOver on the iPhone has allowed my father, Peichun Yang, to connect with the world in ways he never thought possible when he lost his vision in 1998. With his screen readers, he can read and send emails, browse the internet, and even find and lose hours to his favorite podcasts. However, while current assistive technology has opened the web for the blind, a lot of content on the web, like images, is still unusable because websites skimp on accessibility.
How can websites make their images accessible?
When blind or visually impaired users run across an image on a website, their screen readers read the image's alt text (also called the alt tag, alternative text, or simply alt). Alt text provides a textual alternative to non-text content on web pages, giving visually impaired users a better sense of what is on a webpage.
Here is an example of alt text in an image tag in HTML. (Note: Medium actually doesn’t allow us to write custom alt text for this image)
<img src="img/dog.jpg" alt="small white dog walking on red brick">
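You can approximate what a screen reader has to work with by extracting exactly the attribute it falls back on. Here is a minimal sketch using Python's built-in `html.parser`; the `AltAuditor` class is our own illustrative helper, not part of any screen reader:

```python
from html.parser import HTMLParser

class AltAuditor(HTMLParser):
    """Collects the alt attribute (or lack thereof) of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # dict(attrs).get("alt") is None when the attribute is missing
            self.alts.append(dict(attrs).get("alt"))

auditor = AltAuditor()
auditor.feed('<img src="img/dog.jpg" alt="small white dog walking on red brick">'
             '<img src="img/banner.jpg">')
print(auditor.alts)  # → ['small white dog walking on red brick', None]
```

The second image has no alt attribute at all, which is exactly the situation that produces the "graphic, graphic, graphic" experience described below.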
Issues arise when alt text is missing or incorrectly populated. For example, just yesterday my dad called me over for help because he had navigated to a corner of a website and could not figure out what was on the page. The page had only five large images, each with fancy 3D text that described a different Cyber Monday sale: "Cyber Monday 30% Everything…." Because none of the images had alt text, his screen reader only read "graphic, graphic, graphic…." This is an extreme example where missing alt text rendered a website useless for my dad, but in reality a non-trivial number of websites fail to write effective alt text for their images. Without alt text, the browsing experience is degraded, if not impossible, for the blind and visually impaired community.
We scraped 15 popular websites for images and alt tags to bring to light the current state of web image accessibility.
The state of affairs in alt text
What you are looking at is a bar chart that shows how well each website is doing at adding alt tags to images. Specifically, this chart tells us the percentage of images that possess an alt tag.
What this chart doesn't capture is the number of alt tags that are empty. An image's alt tag can legitimately be empty (alt="") if the image is a decorative element like a patterned background. We wanted to see how many alt tags for images on websites weren't just present, but actually provided content that is useful to the blind.
This chart shows the percentage of populated alt tags for each website, which is a better metric to use when looking at how accessible the images on a website are.
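The two metrics can be sketched as a small helper. The `alt_stats` function below is a hypothetical illustration of the arithmetic, assuming one entry per image where `None` means the alt attribute is missing and `""` means it is present but empty:

```python
def alt_stats(alts):
    """Compute alt-tag coverage. alts: one entry per image;
    None = missing alt attribute, "" = empty alt attribute."""
    total = len(alts)
    with_alt = [a for a in alts if a is not None]
    populated = [a for a in with_alt if a.strip()]
    return {
        "pct_with_alt": 100 * len(with_alt) / total,
        "pct_populated": 100 * len(populated) / total,
    }

stats = alt_stats(["a red scarf", "", None, "site logo"])
print(stats)  # → {'pct_with_alt': 75.0, 'pct_populated': 50.0}
```

The gap between the two percentages is exactly the share of images whose alt tag exists but says nothing.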
We found surprising variability in website accessibility just from our small sample of 15 websites. E-commerce sites did very well in populating their alt tags, which is sensible as Google uses alt text for SEO. On the other hand, specialized news websites have a lot of room to improve in making their images more accessible. Thankfully, larger news publishers are generally more responsible about properly populating their alt tags.
Websites like Reddit or wikiHow that are non-commerce and contain copious amounts of user-generated content show a striking lack of populated, or even present, alt tags. On such websites, image-based content is almost entirely inaccessible to the blind community.
For this pilot study, we decided to keep our sample size tiny and our analyses simple.
What can I do to make the web more accessible?
Consider how many images you see every day on the internet, and what the web would be like without any graphics. Although the blind can’t see images, at least technology has allowed a textual alternative: alt-text.
We understand that good alt-text is not a top priority for the majority of websites. But, we hope that we can inspire developers to consider spending time improving their alt-text. Here are a few quick tips to write more effective alt-text.
- Don’t start alt-text with “photo of…” because a screen reader will say something along those lines before it reads the alt-text.
- Alt-text is usually only a few words. Be succinct!
- If an image is solely for decoration, it is fine for the alt-text to be empty. But make sure you at least include alt="", because if the alt attribute is missing entirely, a screen reader will read the filename of the image.
- If an image is used for navigation, make sure the alt-text provides the content and function of the image.
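The tips above can be turned into a simple lint check. The sketch below is illustrative only; the rule wording and the length threshold are our own assumptions, not an accessibility standard:

```python
def lint_alt(alt):
    """Flag common alt-text mistakes (illustrative rules, not a standard)."""
    problems = []
    if alt is None:
        problems.append("missing alt attribute: screen readers fall back to the filename")
        return problems
    text = alt.strip().lower()
    if text.startswith(("photo of", "image of", "picture of")):
        problems.append('redundant prefix: the screen reader already announces "graphic"')
    if len(text.split()) > 15:
        problems.append("too long: aim for a few succinct words")
    return problems

print(lint_alt("Photo of a small white dog"))
# → ['redundant prefix: the screen reader already announces "graphic"']
```

A check like this could run in a CI pipeline or a template linter, catching the most common mistakes before a page ships.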
We’d also like to point out the irony that, although this entire article is dedicated to web accessibility, Medium doesn’t make it easy for publishers to edit the alt text in posted images. Therefore, the images in this article don’t have alt text.
Hopefully, we got you interested in making websites more inclusive. Below is a short list of great content that we frequent the most.
How we did it
We wanted a quantitative estimate of how well Peichun’s favorite websites were doing at accessibility, measured by how much effort they put into annotating their images with alt tags.
After noting 15 of Peichun’s favorite websites, we built a crawler in Scrapy and politely crawled each website to a depth of 2 (we also made sure to read the TOS for each website beforehand). For each website, we counted how many images we saw and how many of those images had alt tags. Among the images that had alt tags, we then checked how many of those tags were actually blank.
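Our real crawler was built with Scrapy, but the core logic can be sketched with the standard library alone. In this sketch, `fetch(url) -> html` is injected so the example runs without network access, and the pages at the bottom are fake stand-ins for a real site:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class PageParser(HTMLParser):
    """Extracts img alt attributes and outgoing links from one page."""
    def __init__(self):
        super().__init__()
        self.alts, self.links = [], []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "img":
            self.alts.append(d.get("alt"))  # None = alt attribute missing
        elif tag == "a" and "href" in d:
            self.links.append(d["href"])

def crawl(start_url, fetch, max_depth=2):
    """Breadth-first crawl up to max_depth, collecting every image's alt value."""
    seen, alts = {start_url}, []
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        parser = PageParser()
        parser.feed(fetch(url))
        alts.extend(parser.alts)
        if depth < max_depth:
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append((absolute, depth + 1))
    return alts

# Tiny in-memory demo: two fake pages instead of live HTTP.
pages = {
    "https://example.com/": '<img src="a.jpg" alt="a dog"><a href="/page2">next</a>',
    "https://example.com/page2": '<img src="b.jpg">',
}
alts = crawl("https://example.com/", lambda url: pages[url])
print(alts)  # → ['a dog', None]
```

The collected list feeds directly into the percentage statistics above; a production crawler would also need rate limiting, robots.txt handling, and error recovery, which Scrapy gave us for free.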
Ultimately, we were able to retrieve numerical alt tag statistics for every website. The raw numbers are right here:
Limitations of this study
We fully acknowledge that this study has limitations. For example, our crawler cannot distinguish between important image content and decorative images, so our numbers treat important and unimportant images as the same statistic. These figures are intended as general estimates of image accessibility.
We are Kevin Yang and Will Hang, two Stanford undergrads who believe that technology can be more accessible to everyone with the help of new technologies such as artificial intelligence. Our mission is to enhance the accessibility and enjoyment of the web for everyone. Learn more about our work at include.ai