In the context of digital libraries and digital repositories, the word “accessibility” sometimes refers to access as in open access literature, which is freely available via the internet to anyone, anywhere (Crawford, 2011). While digital library systems fall within the scope of this paper, accessibility here does not refer to open access. Rather, this paper uses the Web Accessibility Initiative (WAI) definition of accessibility: enabling people with disabilities to use the web (W3C, 2005a). Kleynhans & Fourie (2014) note that accessibility is often left undefined and stress the importance of defining it. Overall, accessibility means that users with all disability types — visual, hearing, motor, and cognitive — are able to use a website (WebAIM, 2014).

According to 2013 American Community Survey data, an estimated 12.6% of the population of the United States has a disability (U.S. Census Bureau, 2013). Web accessibility is important because it gives people with disabilities equal access to the information and resources available on the web. Just as in the physical world, accessibility also benefits users without disabilities: accessible websites work better for users on slow internet connections or older mobile devices (W3C, 2005a). Further, attention to website accessibility improves usability, or ease of use, and improves the search engine optimization (SEO) of websites (Kleynhans & Fourie, 2014; Moreno & Martinez, 2013; Nielsen, 2012; Rømen & Svanæs, 2012). Inaccessible websites widen the digital divide because they restrict access to information on the basis of ability (Adam & Kreps, 2009).

Literature Review

Bertot, Snead, Jaeger, and McClure (2006) conducted a study to develop evaluations for assessing digital libraries. The assessment framework developed by Bertot et al. (2006) includes functionality, usability, and accessibility as determining factors for the success of digital libraries. The accessibility criteria include the provision of alternate forms of content, not using color to convey information, using clear navigation structures, and structuring tables to transform gracefully when enlarged.

Southwell and Slater (2012) evaluated academic library digital collection websites using a variety of screen reader software. Rather than evaluate the accessibility of each website overall, they focused on whether the selected digital item was screen-readable, and in particular on whether digitized texts, as opposed to born-digital documents, were accessible. Thirty-eight of the libraries evaluated used homegrown systems, and 31 used content management systems. An overwhelming majority of the libraries using a content management system for digital library content, 25 (81%), used CONTENTdm. Results of the study indicated that 42% of the items evaluated were accessible to screen readers. Typically, the absence of a transcript for image-based information was the cause of accessibility failure.

Cervone (2013) provides an overview of accessibility considerations and evaluation tools. Cervone notes that many visually impaired people do not use screen readers, instead opting to use browser and computer settings to compensate for their impairments, and suggests using responsive design to gracefully accommodate increases in text size. However, many organizations, educational institutions, and libraries are still working to integrate responsive design into their websites (Rumsey, 2014). Organizations without responsive design should be mindful of how tables reflow (Bertot et al., 2006).

Fox (2008) suggests five principles to be mindful of when developing or redesigning a digital library website: simplicity, usability, navigability and findability, standards compliance, and accessibility. Adhering to any one of these principles serves to support adherence to the others. For example, standards compliance sets the stage for accessibility, and accessible websites support the findability of information (Moreno & Martinez, 2013).

Evaluating Accessibility

The task of evaluating accessibility is complex. To begin with, there are a variety of standards against which to measure accessibility compliance. The most recent is the Web Content Accessibility Guidelines (WCAG) 2.0, finalized by the W3C in 2008; WCAG 2.0 is preceded by WCAG 1.0, which became a recommendation in May 1999 (Mireia et al., 2009; W3C, 2008). Section 508, Subpart 1194.22 can also be used to evaluate the accessibility of websites; eleven of the 16 Section 508 checkpoints are based on the WCAG 1.0 specification. Recent studies of accessibility typically use the WCAG 2.0 guidelines (Billingham, 2014; Ringlaben, Bray & Packard, 2014; Rømen & Svanæs, 2012). A variety of tools for automatically assessing the accessibility compliance of websites are available. Using an automated validation tool is an excellent place to start when evaluating website accessibility, but it is essential to follow automated checks with other evaluation processes (W3C, 2005b).

In addition to the complexity introduced by the variety of standards, the number and variety of accessibility assessment tools complicates the assessment process. The W3C provides a list of web accessibility evaluation tools. At the time of this writing, the list, which can be filtered by guideline, language, type of tool, type of output, degree of automation, and license, contained 48 accessibility evaluation tools (W3C, 2014).

Method

Digital library websites using the CONTENTdm platform were identified using the “CONTENTdm in action” website (CONTENTdm in action, n.d.). In some cases, links to collections pointed directly to content residing on the CONTENTdm platform, while in other cases, the landing page for the collection was stand-alone, with links to the content in the CONTENTdm system for further exploration.

The differences in how digital library content is displayed provided an additional opportunity for analysis: evaluating both the collection landing page and the CONTENTdm-driven browse page. Analyzing the two page types makes it possible to identify and differentiate between accessibility issues on the collection landing pages and on the collection browse pages. Two academic library digital collections with a landing page separate from the “Browse Collection” interface were identified for analysis: the Carver-VCU Partnership Oral History Collection at the Virginia Commonwealth University (VCU) Libraries, and the Civil War in the American South Collection at the University of Central Florida (UCF). While some digital library collection landing pages were standalone, outside of the CONTENTdm system, both of the collection landing pages evaluated in this project were generated within the CONTENTdm system.

[Screenshot: Carver-VCU Partnership Oral History Collection]
[Screenshot: Landing page for the UCF Civil War Collection]

Preliminary accessibility evaluations were conducted with several of the automated tools listed by the W3C in order to select the most appropriate tool for formal analysis. The results and output formats of the following tools were compared: Functional Accessibility Evaluator (FAE) 2.0, HTML_CodeSniffer, WAVE Web Accessibility Evaluation Tool, Accessibility Management Platform (AMP), and Cynthia Says from HiSoftware. Each evaluation tool has strengths and weaknesses, which are outside the scope of this paper. AChecker Web Accessibility Checker was selected based on its core functionality, the readability of its reports and data, and its data export options.

Automated evaluation was conducted using AChecker Web Accessibility Checker, with pages evaluated at the WCAG 2.0 AA level. WCAG 2.0 level A comprises the specifications websites must conform to, and level AA the specifications websites should conform to, for accessibility. The URLs listed in Appendix A were entered into the “address” field, with the option to check against “WCAG 2.0 (Level AA)” and the view-by-guideline report format. The full accessibility review was then exported to PDF using the export option.
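The formal evaluation was run through AChecker's web interface. For a quick preliminary pass before or between formal runs, a few of the machine-detectable issues reported later in this paper can also be checked with a short script. The sketch below is a minimal illustration only, not a substitute for AChecker or for manual review; it assumes the third-party requests and beautifulsoup4 packages are installed, and the URLs are placeholders rather than the pages listed in Appendix A.

```python
# A quick, partial pre-check of a few WCAG 2.0 issues; not a substitute for AChecker
# or for manual review. Assumes: pip install requests beautifulsoup4.
import requests
from bs4 import BeautifulSoup

# Placeholder URLs; substitute the landing and browse pages under review.
PAGES = [
    "https://example.edu/cdm/landingpage/collection/example",
    "https://example.edu/cdm/search/collection/example",
]

def quick_checks(url):
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    issues = []

    # WCAG 1.1 Text Alternatives: flag images with no alt attribute at all
    # (an empty alt is reserved for purely decorative images).
    for img in soup.find_all("img"):
        if img.get("alt") is None:
            issues.append(f"image missing alt attribute: {img.get('src', '(no src)')}")

    # WCAG 1.3 Adaptable / 3.3 Input Assistance: flag form controls with no programmatic label.
    labelled_ids = {lab.get("for") for lab in soup.find_all("label") if lab.get("for")}
    for field in soup.find_all(["input", "select", "textarea"]):
        if field.get("type") in ("hidden", "submit", "reset", "button", "image"):
            continue
        has_name = (
            field.get("id") in labelled_ids
            or field.get("aria-label")
            or field.get("aria-labelledby")
            or field.get("title")
        )
        if not has_name:
            issues.append(f"unlabeled form control: {field.get('name') or field.get('id') or field.name}")

    # WCAG 3.1 Readable: the document language should be identified.
    html_tag = soup.find("html")
    if not (html_tag and html_tag.get("lang")):
        issues.append("document language not identified (no lang attribute on <html>)")

    return issues

if __name__ == "__main__":
    for url in PAGES:
        print(url)
        for issue in quick_checks(url):
            print("  -", issue)
```

A script like this flags only a narrow slice of WCAG 2.0; its value is in catching obvious regressions quickly, not in replacing a full evaluation.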

The WCAG 2.0 guidelines were selected for evaluation because WCAG 2.0 encompasses more disability types and usability principles than WCAG 1.0 and Section 508 (Mireia et al., 2009). To be clear, it is possible for a website to meet WCAG 2.0 standards while not being functionally accessible (Clark, 2006); however, a website that does not meet the WCAG 2.0 guidelines is certainly not accessible. Further, the automated accessibility check does not examine the accessibility of individual items in the collection, as in the Southwell and Slater (2012) research.

Results

The results of the accessibility evaluation are presented in the following three tables. Table 1 displays the highest-level overview of the results: the total number of problems identified for each page. AChecker sorts results into three categories: known problems, likely problems, and potential problems. Issues AChecker can identify with certainty are categorized as known problems; more ambiguous barriers that “could go either way” and need human intervention to determine whether an issue exists are listed as likely problems; and issues that require human review are listed as potential problems (Gay & Li, 2010).

Table 1: AChecker Accessibility Evaluation Results by Type of Problem

Page | Known Problems (n) | Likely Problems (n) | Potential Problems (n)
VCU: Oral History Landing | 4 | 0 | 160
VCU: Oral History Browse | 231 | 0 | 1500
UCF: Civil War Landing | 3 | 0 | 180
UCF: Civil War Browse | 58 | 1 | 945

Table 2 and Table 3 display the specific guidelines where accessibility issues were identified by AChecker, for VCU and UCF content, respectively.

Table 2: Accessibility Evaluation Known Problem Flags by Guideline — VCU

Criteria | Problem Detail | Landing (n) | Collection Browse (n)
1.1 Text Alternatives (A) | Image Missing Alt Text | 1 | 1
1.3 Adaptable (A) | Missing Form Labels | 0 | 148
1.4 Distinguishable (AA) | Bold Element Used | 1 | 3
2.4 Navigable (AA) | Improper Header Nesting | 0 | 1
3.1 Readable (A) | Document Language Not Identified | 2 | 2
3.3 Input Assistance (A) | Element with More Than One Label | 0 | 1
3.3 Input Assistance (A) | Empty Label Text | 0 | 74

Table 3: Accessibility Evaluation Known Problem Flags by Guideline — UCF

Criteria | Problem Detail | Landing (n) | Collection Browse (n)
1.1 Text Alternatives (A) | Image Missing Alt Text | 0 | 0
1.3 Adaptable (A) | Missing Form Labels | 0 | 34
1.4 Distinguishable (AA) | Bold Element Used | 1 | 3
2.4 Navigable (AA) | Improper Header Nesting | 0 | 1
3.1 Readable (A) | Document Language Not Identified | 2 | 2
3.3 Input Assistance (A) | Empty Label Text | 0 | 17
4.1 Compatible (A) | Non-unique ID Attribute | 0 | 1
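One flag in Tables 2 and 3 that benefits from a concrete illustration is “Improper Header Nesting,” which refers to heading levels that skip (for example, an h3 following an h1 with no intervening h2) and thereby break the document outline that screen reader users navigate by. A check of this kind is easy to script; the fragment below is a rough sketch using a placeholder URL, not a reproduction of AChecker's own test.

```python
# Minimal heading-nesting check (illustrative; AChecker's own rule may differ in detail).
import requests
from bs4 import BeautifulSoup

def check_heading_nesting(url):
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    problems = []
    previous_level = 0
    for heading in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
        level = int(heading.name[1])
        # A heading more than one level deeper than its predecessor skips a level,
        # e.g. an h4 directly after an h2.
        if previous_level and level > previous_level + 1:
            text = heading.get_text(strip=True) or "(empty heading)"
            problems.append(f'<{heading.name}> "{text}" follows an <h{previous_level}>')
        previous_level = level
    return problems

# Placeholder URL; substitute the page under review.
for problem in check_heading_nesting("https://example.edu/cdm/search/collection/example"):
    print(problem)
```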

Data Interpretation

At the outset, the primary weakness of the interpretation of the accessibility evaluation results lies in not having direct access to a CONTENTdm system as a contributor or administrator. Interpretation therefore relies on assumptions, which are supported by the similarity of the results for the two separate digital library collections on the CONTENTdm system.

The two digital library collections evaluated presented nearly identical known accessibility issues. Two errors were identified in the VCU collection that were not identified in the UCF collection: missing image alt text (on the landing and browse pages) and an element with more than one label (on the collection browse page). The image missing alt text is the header image for the template. Since no missing image alt text was identified in the UCF collection, presumably the alt attribute of the header image can be modified by local administrators of the CONTENTdm system. The number of errors related to missing labels appears to be related to the number of collections available in the system: 148 missing form label errors were identified on the VCU collection browse page, while only 34 were identified on the UCF collection browse page; the VCU system had 37 separate collections and the UCF system had 17. The missing form labels are related to the faceted navigation used to “add or remove other collections to the search.” Although the collections may be reached directly from a specified landing page, the absence of form labels could make it impossible for visitors using screen reader technology to navigate to other collections in the system, or to view other collections along with the current selection.

Recommendations

Based on the number of known problems identified on the collection browse pages in the accessibility evaluation, it is important to determine whether labels can be added by a local CONTENTdm system administrator. If so, meaningful labels should be added for each element that requires one. Because both collection browse pages presented the errors in the same structural location, it is likely that the missing labels are a function of how the system outputs the collection information onto the page. In the case of a system structure that generates inaccessible content, advocacy for the importance and necessity of accessibility is invaluable. OCLC's clients should strongly urge the vendor to make accessibility corrections a priority for future updates and releases of the system. When customers treat a feature as a priority, vendors tend to follow suit, especially in the competitive technology marketplace that currently exists. The value of accessibility advocacy in creating positive change cannot be overstated.
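To make the needed change concrete: wherever the template emits a facet checkbox, the fix is to give that control a programmatic label, either by wrapping it in a label element, pointing a label's for attribute at its id, or adding an aria-label. Because administrator access to the CONTENTdm templates was not available for this study, the sketch below is purely illustrative; the markup is hypothetical, and the transformation is shown with BeautifulSoup rather than in the system's actual template language.

```python
# Illustrative only: shows the shape of the fix (an explicit accessible name for each
# facet checkbox), not CONTENTdm's actual template code. The markup below is hypothetical.
from bs4 import BeautifulSoup

snippet = """
<div class="facet">
  <input type="checkbox" name="collection" value="oralhist"> Carver-VCU Partnership Oral History
  <input type="checkbox" name="collection" value="civilwar"> Civil War in the American South
</div>
"""

soup = BeautifulSoup(snippet, "html.parser")
for box in soup.find_all("input", type="checkbox"):
    # Use the visible text that follows the checkbox as its accessible name.
    sibling = box.next_sibling
    visible_text = sibling.strip() if isinstance(sibling, str) else ""
    if visible_text and not box.get("aria-label"):
        # A visible <label> element (or a label[for] pointing at an id) is the
        # preferred fix; aria-label is shown here only for brevity.
        box["aria-label"] = visible_text

print(soup.prettify())
```

In practice the equivalent change belongs in the page template itself, which is why administrator access, or action from the vendor, is what ultimately resolves the issue.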

[Screenshot: VCU Collection Browse]
[Screenshot: UCF Civil War Collection Browse Page]

There is plenty of work, beyond the initial automated check, that must be done to evaluate and improve the accessibility of digital library collections on the CONTENTdm platform. Each of the likely problems and potential problems identified in the AChecker report should be reviewed to determine whether additional action is needed to provide accessible content. Some of the potential problems identified by AChecker include items that may need a long description, issues with table structure or display, and areas where visual context may be required. Correcting the potential problems related to visual context, where consuming the information requires being able to view the image, will provide at least some of what is needed to ensure individual items in the collection are accessible. After corrections are made, re-evaluate the pages with the AChecker tool. Follow up automated accessibility evaluation with manual evaluation and, whenever possible, involve users with disabilities in the evaluation (Henry, 2006; W3C, 2005b). Although many people with visual impairments do not use screen readers, screen readers are invaluable evaluation tools, especially for projects where users with disabilities are not directly involved in the testing process (Southwell & Slater, 2012; W3C, 2005b).

Limitations and Recommendations for Further Research

The primary weakness of this research report is that it only scratches the surface of evaluating the accessibility of digital library content delivered through CONTENTdm. Accessibility evaluation was conducted using only one automated assessment tool, the AChecker Web Accessibility Evaluation Tool. As Gay and Li (2010) point out, different automated accessibility evaluation tools perform different checks and identify different problems. Comparing the results from a selection of automated accessibility evaluation tools would provide valuable information about the individual strengths and weaknesses of the tools, and about when one tool may prove more beneficial than another. Although a CONTENTdm-driven landing page and collection browse page were evaluated for accessibility, no individual item detail page was evaluated. While evaluating an individual item detail page would not necessarily inform the discussion of individual collection item accessibility, such an analysis could identify other potentially inaccessible system structures. Another limitation of the current study is that only the issues identified as known problems were analyzed; a great deal of data from the initial automated accessibility evaluation remains untapped. Providing additional detail regarding the issues identified as likely problems and potential problems would allow for a more comprehensive view of the accessibility of the CONTENTdm system, even though this study identified some specific structural changes that are needed for accessibility. Further, accessibility assessments using other tools, such as screen readers, and additional manual accessibility evaluation would help fill gaps in the information currently available about the system's accessibility. Finally, conducting accessibility studies of the CONTENTdm system with users with disabilities would help to identify any lingering accessibility issues not surfaced by the previously mentioned methods.

References

Accessibility management platform. (n.d.). Retrieved March 1, 2015, from https://amp.ssbbartgroup.com/

Adam, A., & Kreps, D. (2009). Disability and discourses of web accessibility. Information, Communication & Society, 12(7), 1041–1058. doi:10.1080/13691180802552940

Bertot, J. C., Snead, J. T., Jaeger, P. T., & McClure, C. R. (2006). Functionality, usability, and accessibility. Performance Measurement and Metrics, 7(1), 17–28. doi:10.1108/14678040610654828

Billingham, L. (2014). Improving academic library website accessibility for people with disabilities. Library Management, 35(8/9), 565–581. doi:10.1108/LM-11-2013-0107

Chowdhury, S., Landoni, M., & Gibb, F. (2006). Usability and impact of digital libraries: a review. Online Information Review, 30(6), 656–680. doi:10.1108/14684520610716153

Clark, J. (2006, May 23). To Hell with WCAG 2. Retrieved March 14, 2015, from http://alistapart.com/article/tohellwithwcag2

CONTENTdm in action. (n.d.). Retrieved January 31, 2015, from http://www.oclc.org/en-US/contentdm/collections.html

Crawford, W. (2011). Open Access: What you need to know now. Chicago, IL, USA: American Library Association.

Functional Accessibility Evaluator 2.0. (n.d.). Retrieved March 1, 2015, from http://fae20.cita.illinois.edu/

Gay, G., & Li, C. Q. (2010). AChecker: Open, interactive, customizable, web accessibility checking. In Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A). Raleigh, North Carolina.

Henry, S. L. (2006). Understanding web accessibility. In Web Accessibility (pp. 1–51). Apress.

HiSoftware Cynthia says portal. (n.d.). Retrieved March 1, 2015, from http://www.cynthiasays.com/

HTML_CodeSniffer. (n.d.). Retrieved March 1, 2015, from http://squizlabs.github.io/HTML_CodeSniffer/

Kleynhans, S. A., & Fourie, I. (2014). Ensuring accessibility of electronic information resources for visually impaired people. Library Hi Tech, 32(2), 368–379. doi:10.1108/LHT-11-2013-0148

Mireia, R., Merce, P., Marc, B., Miquel, T., Andreu, S., & Pilar, P. (2009). Web content accessibility guidelines 2.0. Program, 43(4), 392–406. doi:10.1108/00330330910998048

Moreno, L., & Martinez, P. (2013). Overlapping factors in search engine optimization and web accessibility. Online Information Review, 37(4), 564–580. doi:10.1108/OIR-04-2012-0063

Nielsen, J. (2012). Usability 101: Introduction to usability. Retrieved October 21, 2014, from http://www.nngroup.com/articles/usability-101-introduction-to-usability/

Ringlaben, R., Bray, M., & Packard, A. (2014). Accessibility of American university special education departments’ web sites. Universal Access in the Information Society, 13(2), 249–254. doi:10.1007/s10209-013-0302-7

Rømen, D., & Svanæs, D. (2012). Validating WCAG versions 1.0 and 2.0 through usability testing with disabled users. Universal Access in the Information Society, 11(4), 375–385. doi:10.1007/s10209-011-0259-3

Rumsey, E. (2014, July). Responsive design sites: Higher ed, libraries, notables. Retrieved March 14, 2015, from http://blog.lib.uiowa.edu/hardinmd/2012/05/03/responsive-design-sites-higher-ed-libraries-notables/

Southwell, K. L., & Slater, J. (2012). Accessibility of digital special collections using screen readers. Library Hi Tech, 30(3), 457–471. doi:10.1108/07378831211266609

Total validator. (n.d.). Retrieved March 1, 2015, from https://www.totalvalidator.com/

U.S. Census Bureau. (2013). DP02 Selected Social Characteristics in the United States [Data]. 2013 American Community Survey 1-Year Estimates. Retrieved from http://factfinder2.census.gov

W3C. (2005a). Introduction to web accessibility. Retrieved October 3, 2014, from http://www.w3.org/WAI/intro/accessibility.php

W3C. (2005b). Selecting web accessibility evaluation tools. Retrieved March 5, 2015, from http://www.w3.org/WAI/eval/selectingtools.html

W3C. (2008, December 11). Web content accessibility guidelines (WCAG) 2.0. Retrieved March 5, 2015, from http://www.w3.org/TR/WCAG20/

W3C. (2014, March). Easy checks — A first review of web accessibility. Retrieved March 5, 2015, from http://www.w3.org/WAI/eval/preliminary

W3C. (2014, December 18). Web accessibility evaluation tools list. Retrieved March 1, 2015, from http://www.w3.org/WAI/ER/tools/

WAVE web accessibility tool. (n.d.). Retrieved March 1, 2015, from http://wave.webaim.org/

WebAIM. (2014, April 22). Introduction to web accessibility. Retrieved March 5, 2015, from http://webaim.org/intro/


Originally published at LibUX.