Better Navigation Design Through User Research

Bill English
RetailMeNot Product
8 min read · Jan 8, 2019

If you are working on a consumer-facing website or app, hopefully you have found a target audience and valuable content for them to access. Having great content is only a partial victory, though, since it is in danger of going untouched if your users struggle to find it.

Providing a clear and direct path for users to find the content they want begins with an understanding of their perspective. At RetailMeNot, we recently made an effort to improve our navigation design, using a variety of tools and methods to bring our users’ voice into the process.

IA vs UI

When designers on our team talk about navigation design, we break the design effort into two halves, each with a matching deliverable, key metric and research strategy.

  • Information architecture (IA). IA is the structuring and labelling of site content and functionality. The most common visualization is a site map, a series of boxes and connecting lines that show how content is linked and clustered together. An effective IA should increase the findability of content, which is the measure of how easy it is for users to find content they assume should be there.
  • User interface (UI). The IA should inform the navigation’s UI design, which involves the placement and the visualization of menus, links and buttons that lead users away from their current page. Navigation UI can drive discoverability, which is the measure of how likely a user is to notice new content they might not have been aware of.

Developing a Research Strategy: Information Architecture

Before designing an interface (links, colors, buttons), it was important to validate the underlying “tree” of categories and subcategories that shows how our content sections cluster together and branch out to one another. One major benefit of this phase of testing is that there are no biases or distractions based on visual design elements. IA testing allowed us to test ideas quickly without putting in hours building screens and prototypes.

We relied on two research tactics, card sorting and tree testing, to make sure our architecture aligned with our users’ mental models.

Card Sorting

Card sorting is a method that allows users to group an array of content in the way that makes the most sense to them. It also identifies and weeds out internal and industry terms that aren’t relevant to users. There are different types of card sorting commonly used: open sorting (where you allow users to make their own categories), closed sorting (where you have users sort into pre-defined categories), and hybrid sorting (a combination of the two).

To start, our team made a comprehensive list of all content available on our site, member account pages and links that we expected to appear further down our roadmap. Each individual piece of content was placed on a digital card, and we invited participants to cluster this information in a way that made sense to them. Some example cards were: “My Favorite Categories”; “Change Password”; and “Clothing Deals.”

We utilized UserTesting.com to gather participants based on our own screener and OptimalWorkshop to run our card sorting tests. UserTesting.com supplied us with video of users running the test, so our team could gather qualitative insights into the decisions participants were making.

Card Sorting to Solve Member Information

We ended up running two card sorts, one for our logged-in member information and one for our general content offerings. Both tests provided great insight into how our content sections relate to each other.

In the case of member information, we had three distinct buckets of content for logged-in users (Account, Profile, and Wallet) but weren’t quite sure which pages went into each one. Take for example the Stored Payment page, where users can add and edit credit cards. Would that be part of a Wallet (a natural place for credit cards), or an Account (a common location on e-commerce sites)?

Below is the resulting matrix for that particular card. Each cell reflects the number of participants who placed the card into that column. You can see that 60% (12 of 20) of users placed Stored Payments into the Wallet bucket. Results like this allowed us to make more grounded decisions when it came to building our IA.
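
If you want to see the arithmetic behind a matrix like this, here is a minimal Python sketch of how a single row can be tallied. The 12-of-20 Wallet count matches the Stored Payments result above; the Account and Profile counts are hypothetical fillers, not our actual data.

```python
from collections import Counter

# One entry per participant placement for the "Stored Payments" card.
# The 12 Wallet placements match the result described above; the
# Account and Profile counts are hypothetical fillers.
placements = ["Wallet"] * 12 + ["Account"] * 6 + ["Profile"] * 2

def matrix_row(placements):
    """Tally a card's placements into (count, percent) per category."""
    counts = Counter(placements)
    total = len(placements)
    return {category: (count, round(100 * count / total))
            for category, count in counts.items()}

print(matrix_row(placements))
# {'Wallet': (12, 60), 'Account': (6, 30), 'Profile': (2, 10)}
```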

Tree Testing

Tree testing can validate how well users navigate your site architecture to accomplish tasks. It works by arranging content into a nested tree and having users complete tasks by navigating down sub-menus to where they think the task can be completed.
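
To make that mechanic concrete, here is a minimal sketch of how a tree test’s hierarchy might be represented, along with the path a participant must walk to reach a task’s destination. The labels are hypothetical stand-ins rather than our actual tree.

```python
# A navigation tree as nested dicts; leaves are None.
# Category names are hypothetical examples, not RetailMeNot's actual tree.
tree = {
    "Deals": {
        "Clothing": None,
        "Electronics": None,
    },
    "Account": {
        "Security": {
            "Change Password": None,
        },
    },
}

def find_path(node, target, path=()):
    """Depth-first search for the path a participant takes to reach `target`."""
    for label, child in (node or {}).items():
        if label == target:
            return path + (label,)
        if isinstance(child, dict):
            found = find_path(child, target, path + (label,))
            if found:
                return found
    return None

print(" > ".join(find_path(tree, "Change Password")))
# Account > Security > Change Password
```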

For the first round of our test, we reconstructed the existing navigation found on our site. We then compiled a list of 20 tasks for users to complete following that tree. Sample tasks for us included: “You want to find deals on back-to-school clothes” and “You want to update your credit card information.”

We kept a running spreadsheet with each task along with its measured results. We had access to the following metrics (a sketch of how they can be computed follows the list):

  • Success: the percentage of participants who went to the correct part of the tree, even if they had to jump around the tree a few times before doing so
  • Directness: the percentage of participants who did not backtrack at all when selecting an answer, even if the answer was incorrect
  • Time taken: the median time in seconds to select an answer
  • Overall score: a score out of 10 based on a weighted average of success and directness
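
For a sense of the arithmetic behind those numbers, here is a rough sketch of how the metrics could be tallied from raw participant results. The records are invented, and because the exact weights behind the overall score aren’t specified, the equal weighting below is an assumption for illustration only.

```python
from statistics import median

# Hypothetical per-participant results for one task. Each record notes
# whether the participant ended at the correct node, whether they ever
# backtracked, and how long they took (in seconds).
results = [
    {"correct": True,  "backtracked": False, "seconds": 14},
    {"correct": True,  "backtracked": True,  "seconds": 32},
    {"correct": False, "backtracked": False, "seconds": 21},
    {"correct": True,  "backtracked": False, "seconds": 18},
]

success = sum(r["correct"] for r in results) / len(results)             # 0.75
directness = sum(not r["backtracked"] for r in results) / len(results)  # 0.75
time_taken = median(r["seconds"] for r in results)                      # 19.5

# Overall score out of 10 as a weighted average of success and directness.
# Equal weights are an assumption; the real weighting isn't stated above.
overall = 10 * (0.5 * success + 0.5 * directness)                       # 7.5

print(success, directness, time_taken, overall)
```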

After the first round of testing we analyzed which tasks were falling short on those metrics. Below is an example task where we asked users to update how frequently they receive email communications from RetailMeNot; this was found under Account > Security > Preferences > Email Settings:

After observing users try to complete this task and analyzing the most common paths participants were taking, we discovered the first step, Security, wasn’t an intuitive enough term. In the second round of testing, we updated the architecture to move Notification Preferences just one level below Account. Allowing users to control how frequently they receive email from us is important, so our team wanted to give it more visibility. Here are the subsequent results after changing the path to Account > Notification Preferences > Email:

We analyzed and adjusted the other tasks in a similar manner. We ran four rounds of testing before settling on the tree that would become our provisional information architecture.
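
As a compact recap of that email-settings restructure, the before-and-after paths compare like this (labels taken from the rounds described above):

```python
# Paths for the email-frequency task, per the two testing rounds above.
before = ["Account", "Security", "Preferences", "Email Settings"]
after = ["Account", "Notification Preferences", "Email"]

# Each label is one selection a participant must make, so flattening the
# hierarchy removed a level and the ambiguous "Security" step.
print(f"{len(before)} selections -> {len(after)} selections")
# 4 selections -> 3 selections
```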

Developing a Research Strategy: User Interface

With a validated information architecture in hand, we converted that structure into low-fidelity screen designs. We then needed to talk to users directly to see whether these screens were clear and whether all of our content could be discovered.

Moderated User Testing

We brought in six participants to test a comprehensive set of tasks on our prototypes. The tasks were prioritized based on common use cases for our product and reflective of our marketing goals. With a moderator guiding each session, we gathered participants’ shopping habits, aesthetic preferences, emotional reactions and raw metrics on task success rates.

Here is a set of insights we gathered from talking to participants about the prototyped UI options:

  • Search matters. Amazon and Google have set an enormously high bar in terms of what users expect from search. For many content-related tasks our test participants instinctively went to the search bar to find what they were looking for. Many repeatedly tapped the search area to complete tasks even after noticing it wasn’t active (for purposes of the test). We found that if users know exactly what they want, search is usually a better place to go than navigating through menus. As one of our participants said, “it is the Google of the website.”
  • Obvious and exposed wins. Participants responded well to exposed links and actions that were clearly labeled. Users were also able to decipher links much faster when they had an accompanying icon. Participants also liked the idea of a flatter hierarchy, with less content nesting and less need to “navigate the navigation.”
  • Navigation UIs can drive discoverability. On some tasks participants told us they weren’t aware we provided the feature being tested. We paid attention to cases like this, particularly if the participant had an easy time completing the task. We believe if the task could be completed easily in a test environment, the UI could potentially drive discoverability of new features for our users.

Wrapping Up

Through a combination of research techniques we were able to drive towards a finalized information architecture and user interface for our global navigation. Eventually this new design will be A/B tested in a production environment for further user feedback and to gauge traffic effects.
