How card sorting can inform UX design and content strategy
Categorizing devices for a library tech lending program
Having access to technology can be a defining feature of a student’s experience in higher education, where students rely on mobile devices and other technologies to access course materials, conduct research, and complete coursework and projects.
Since 2007, the University of Arizona Libraries (UAL) has operated a tech lending program that includes one of the largest laptop lending programs in the country and also provides access to a variety of other items like chargers, VR kits, and 3D scanners.
Early in the COVID-19 pandemic, the library added WiFi hotspots to the inventory to accommodate students living in remote areas without a reliable broadband connection. The program took on new importance during the spring 2020 campus lockdown, when some students were unable to access campus facilities.
This summer, the tech lending program received a large number of new items, and the program’s webpages needed a major overhaul to accommodate these additions and incorporate new features, like an online reservation system. As part of the redesign, the library’s UX team conducted a card sort study to find an effective way of organizing items from the tech lending program on the UAL website.
This article explains how we set up our card sort and how we used participant feedback to inform our information architecture and content strategy to improve the tech lending program’s presence on our website.
Project context
In summer 2022, the library’s existing tech lending program merged with the Gear-to-Go program, another campus program that used to lend out cameras and recording devices to faculty and staff. The merger coincided with the library’s website refresh project, for which the library is updating the website’s content, design, and technical infrastructure.
With the merger of the two tech lending units, all technology is now available to borrow from a central location on campus. Students, faculty, and staff can browse the “Borrow technology” pages on the library website to find information about what items are available, who is eligible to borrow them, and how to check them out. Some items are also reservable online, which is a new feature of the refreshed website.
Using a card sort to explore ways of categorizing devices
The library owns more than 100 types of technology items, and there were many possible ways to organize them on the website. On our website, devices are organized into categories with other items that have similar functionalities. Under the legacy system, there were 13 categories, including “laptops,” “tablets,” “cords and chargers,” “drives and readers,” and “audio/visual equipment.”
Early in the summer, the department that oversees the tech lending program, Access and Information Services (AIS), came to the UX team with concerns that technology items were not organized as effectively as they could be. The merger of the two tech lending programs provided a perfect opportunity to re-evaluate how devices would be organized on the refreshed website.
In order to find the best way to categorize devices, the UX team decided to do a card sorting study. Card sorting is a UX research method for designing information architecture. In a card sort, the researcher provides participants with a list of items and asks them to group the items into categories.
Our card sort included 25 items that represented a variety of devices that are available for users to borrow. We included these items in the study because we were unsure whether they were in the correct category under the legacy system. For example, we were unsure whether Bluetooth keyboards should be in the “Tablets” category or whether a Microsoft Surface Pro is a “Laptop.” In some cases, we were also unsure whether the name of the category was effectively worded.
A card sort can be open or closed — “open” means that participants create and name their own categories, and “closed” means that the researchers create the categories in advance. For this study, we chose an open card sort and asked participants to create as many categories as they liked and group items in a way that made sense to them. The only condition was that if they were unfamiliar with an item, they should put it in a category called “I don’t know.” (You can try it yourself in our card sort demo.)
The card sort went live on Optimal Workshop in June 2022. We invited members of our participant pool to take part in the remote study, and 23 people completed it. About 48% of them were undergraduate students, 26% were graduate students, and the rest were staff, faculty, and community members.
What we found
The ways participants sorted the items and named the categories taught us several things:
- Many participants put laptops and tablets together in the same category.
- Many participants created a large category called “accessories.” This category tended to combine items that had previously belonged to several different categories.
- Participants tended to group the items “monitor,” “Bluetooth keyboard,” and “mouse” in the same category. This finding was surprising because under the legacy system, each of these items was organized in a separate category.
- Participants often used words in their category names that were not used in the original category names. Some common words were “adaptors” and “power.”
I created a spreadsheet to analyze the data, listing the original category and the suggested categories for each item. Color-coding the suggested categories made it easy to see at a glance which items should change categories. At this point, I also normed the suggested category names, meaning I combined similar suggestions under the same generic category name until our team could decide on the exact wording.
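The norming step can also be done programmatically. Below is a minimal sketch, assuming the raw results are (item, participant-supplied label) pairs; the sample data and the normalization mapping are hypothetical, not our actual study data.

```python
from collections import Counter, defaultdict

# Hypothetical raw card sort results: (item, participant's category label).
results = [
    ("Bluetooth keyboard", "Accessories"),
    ("Bluetooth keyboard", "laptop accessories"),
    ("Bluetooth keyboard", "Keyboards & mice"),
    ("USB-C charger", "Power"),
    ("USB-C charger", "adaptors and power"),
    ("USB-C charger", "Chargers"),
]

# Illustrative norming rules: fold similar labels into one generic name
# until the team settles on final wording.
NORMED = {
    "accessories": "Accessories",
    "laptop accessories": "Accessories",
    "keyboards & mice": "Accessories",
    "power": "Adaptors & power",
    "adaptors and power": "Adaptors & power",
    "chargers": "Adaptors & power",
}

def norm(label: str) -> str:
    """Map a participant's label to its normed category name."""
    return NORMED.get(label.strip().lower(), label)

# Tally normed category suggestions per item.
suggestions = defaultdict(Counter)
for item, label in results:
    suggestions[item][norm(label)] += 1

for item, counts in suggestions.items():
    print(item, counts.most_common())
```

Counting normed suggestions per item makes the majority grouping for each device easy to read off, much like scanning the color-coded spreadsheet.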
Balancing user feedback with design constraints
Once we had distilled the card sort data down to these findings, it was time to develop recommendations for AIS, the department that manages content for the “Borrow technology” pages. But as our team discussed the findings, we realized that they did not always lead to straightforward recommendations.
In particular, the finding that users tended to group laptops and tablets gave us pause. One of the items in question was the Microsoft Surface Pro, which has elements of both laptops and tablets; Microsoft advertises it as combining “the power of a laptop with the flexibility of a tablet.” If we went with the users’ suggestion of putting laptops and tablets together, we wouldn’t have to categorize the Surface Pro as one or the other.
On the other hand, there was a compelling reason to list laptops and tablets separately. We had learned from usability testing that users often use device images to navigate the site and locate items within categories. Knowing the importance of images, we anticipated that if a student wanted to borrow a laptop, they would look for an image of a laptop to find more information.
Laptops and tablets are both popular items for students to borrow, so it was important for the website to show images of both types of devices to make them easy to find. But there was an issue with showing both devices together. Each category card on the website was designed to show an image of one device that was representative of that category, and it would be inconsistent if one card showed two devices. This could make the page look busier and confuse users.
After weighing these considerations, we decided to keep laptops and tablets in separate categories, each with its own representative image. This decision ran counter to the way most users had grouped items in the card sort, but it aligned with our findings from usability testing as well as the UX design principles employed throughout the library’s website.
And what did we do about the Surface Pro? We recommended including it in both categories, so that users would find it whether they were searching for laptops or tablets.
Our decision-making process shows how sometimes a UX team has to balance UX research results with input from designers and stakeholders, and not every design decision is based on participant responses alone. We can’t assume that users are operating with all of the same information that the UX team has access to. Sometimes designers work with constraints that users don’t know about, and ideally, those constraints will be invisible to users. In this case our decision placed design needs ahead of user input from the card sort, while still taking the user needs that we uncovered in usability testing into account.
Recommendations and next steps
Based on the findings from the card sorting study and our ensuing discussions, we recommended sorting the items into seven categories:
- Laptops
- Tablets
- Laptop & tablet accessories
- Adaptors & power cords
- Audio & recording equipment
- Scanners
- Maker tools
As much as possible, we adopted the language that participants had used to label categories in the card sort. For example, the participants’ word choices influenced our decision to use the words “adaptors,” “power,” “accessories,” and “equipment” in category names.
We also recommended cross-listing items that could fit in multiple categories. This applied to the Surface Pro as well as to items like noise-canceling headphones and Bluetooth headphones, which could be considered both “Audio & recording equipment” and “Laptop & tablet accessories.”
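One simple way to model cross-listing is to map each item to the set of categories it belongs to, then derive each category page's listing from that map. This is an illustrative sketch, not the library's actual system; the catalog entries are hypothetical.

```python
# Hypothetical catalog: each item maps to every category it is listed under.
CATALOG = {
    "Microsoft Surface Pro": {"Laptops", "Tablets"},
    "Noise-canceling headphones": {"Audio & recording equipment",
                                   "Laptop & tablet accessories"},
    "3D scanner": {"Scanners"},
}

def items_in(category: str) -> list[str]:
    """All items that should appear on a given category page."""
    return sorted(item for item, cats in CATALOG.items() if category in cats)
```

With this structure, the Surface Pro appears under both "Laptops" and "Tablets" without duplicating its record, so users find it whichever category they browse.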
After we discussed the recommendations with the AIS team, they decided to implement all of our suggestions for categorizing items and our recommended category names on the refreshed site.
Follow-up: documentation and guidelines for categorizing new technologies
Once the categories had been approved, we incorporated the findings from this study into our documentation in a couple of ways.
First, we defined “categories” in our content guidelines as “a group of items with shared functionalities” and explained how categories should be named.
By following these guidelines, anyone adding content to the website can keep their contributions consistent with the existing content. According to Kristina Halvorson, author of Content Strategy for the Web, consistency is a critical aspect of content strategy — both in presenting consistent content across all pages of an organization's website and in creating content that is consistent with the organization's brand. Consistency helps inspire users' trust in an organization and its website. For this reason, we knew it was important for categories to be used consistently across the "Borrow technology" pages.
Findings from our card sort research also informed a framework to help library employees decide how to categorize future technology acquisitions. Once we nailed down the categories that would appear on the “Borrow technology” pages, we wrote descriptions of each category so that as the library acquires more items, staff will know which category to put them in on our website.
As a final step, we ran an informal usability test within our team to evaluate the category descriptions. This test led us to revise definitions that weren’t clear and note where new categories may be needed in the future as the library acquires more devices.
What we learned
This case study shows how findings from a card sorting study can inform UX design and content strategy. Findings from the card sort helped the UX team and our stakeholders make decisions about the information architecture for user-facing pages as well as internal documentation and classification systems.
We intend to follow up by conducting usability testing with library users to validate the categories, and by using site analytics to determine which items and categories are most popular so we can give them more prominence on the site.
As the 2022–2023 school year begins, the demand for technologies is growing as students return to campus and begin work on new courses and projects. We hope our work on the design and content of the “Borrow technology” pages will help hundreds of students, staff, and faculty find and reserve the technologies they need to succeed in their work.
Acknowledgements: Thanks to UX team members Aly Higgins, Bob Liu, and Leonardo Echeverria, and AIS team members Travis Teetor, Morgan Hyde, and Peter Schwarz for their contributions to this study.