IA in Practice: Measuring Navigation Health with Analytics

Rachel Price · Published in Known Item · 5 min read · Apr 27, 2021

Last year we rolled out a new header navigation strategy for Microsoft Docs. We have a single global header and 50+ (and growing) product-specific secondary headers. Product teams manage most of these headers because there is only so much time and so many IAs. To scale our IA team, we provide guidance and work with product teams to put that guidance into practice. One of the best questions I got during our initial rollout was: “How will I know if my navigation is good?”

[Image: Anatomy of the Docs header navigation]

My answer at the time was, “Well…we have some heuristics and best practices you can follow, and we can also do card sorts and tree tests once in a while.” Active navigation research (the kind where I have to plan it, run it, and analyze it) is the most established way of assessing whether a navigation structure is “good”. We do that as we are able. But between studies, we need a way to passively collect “goodness” signals so that the teams who manage these headers can assess and iterate without spinning up a study every time they want to make a change. We don’t have the time or resources to routinely run tree tests and card sorts on 50+ navigation headers (and honestly that sounds like a drag…unless you are an IA who thinks doing that full-time would be great, in which case please call me).

So how do we collect data between studies? “Let’s put together an analytics dashboard!” I said. But wait, how do you measure navigation health with analytics? I dutifully googled this, assuming I’d find someone who had already written a treatise on the subject, but struck out. So I’d like to share our first attempt at using analytics to assess navigation health on Microsoft Docs.

Simply put, navigation exists to help people orient themselves (by seeing it) and to help people find things (by interacting with it). Given that, what are measurable signals we could look for that tell us how well a header is supporting orientation and findability?

We can’t collect passive signals of orientation, as we don’t yet have the capability to track mouse hovers and do eye tracking at scale. (Yes, I know there are tools that do this.) So what about findability signals, which result from interacting with the header? After determining what was possible with our current telemetry, here’s what we came up with:

  • Usefulness of categories: Are the navigation categories and their child links getting used at all? Which ones are getting used more than others? Which ones are getting completely ignored? We can get the click-through rate (CTR) for each link in the header as a relative signal for which links are getting more attention than others, and use that to dig into whether a link is useful at all (maybe we can remove it) or where in the header it should sit (maybe we should place it more prominently because it’s getting a lot of clicks even down here way at the bottom!). The first sketch after this list shows the CTR calculation.
  • Quality of header clicks: Beyond which links get clicked, I want to see whether those clicks result in quality page views. Are there signals telling us the user found the destination useful? We can measure session length and bounce rate for pages reached through the header. We compare the session length of users who came to a page through the header against the average session length for all users on that page, and make the same comparison between bounce rates (the second sketch below). I want to see that users who came to a page through navigation have equal or higher content engagement than the averages. If a link in the header has a high CTR but an exceptionally high bounce rate, then I may need to work on my label, or dig into what’s missing on that destination page that users were expecting to find.
  • Engagement with the header navigation at all: This is more out of curiosity than any strong feelings about how users should engage with navigation all-up. Nobody comes to a site purely to click on navigation, and most users don’t actively engage with site navigation because they simply don’t need it. That’s fine! But beyond that, we want to know if there’s a non-benign reason why some users are avoiding the navigation. Our theory is that users will avoid bad navigation because the cognitive overhead of figuring it out isn’t worth the effort: if it’s not immediately coherent, they’ll ignore it. Our hypothesis is that good navigation will get more engagement than patently bad navigation. We can measure overall engagement rate: of all the users who were on pages that featured this header, how many engaged with the header at all (the third sketch below)? We want to establish a baseline and not dip below it with any new changes. It’s tempting to look for an increase in engagement over time, but that’s probably just a vanity metric.
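
To make these concrete, here is a minimal sketch of the first metric, CTR per header link, in Python with pandas. This is not our actual telemetry pipeline: the event logs, column names, and link labels are hypothetical stand-ins for whatever your analytics tooling exposes.

```python
# Hypothetical data: every page view renders the header once (an
# impression), and each header click records which link was used.
import pandas as pd

pageviews = pd.DataFrame({"session_id": ["s1", "s1", "s2", "s3", "s3", "s4"]})
header_clicks = pd.DataFrame({
    "session_id":  ["s1", "s3", "s3"],
    "header_link": ["Reference", "Learn", "Reference"],
})

impressions = len(pageviews)

# Click-through rate per header link, relative to header impressions.
# This gives a relative ranking of which links draw attention.
ctr = (
    header_clicks.groupby("header_link")
    .size()
    .div(impressions)
    .rename("ctr")
    .sort_values(ascending=False)
)
print(ctr)
```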
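
The second metric compares engagement for header-referred views against a page’s overall averages. Again a minimal sketch under assumptions: the columns (via_header, dwell_seconds, bounced) are made up, and how you define a bounce or a session depends on your telemetry.

```python
import pandas as pd

views = pd.DataFrame({
    "page":          ["/docs/a", "/docs/a", "/docs/a", "/docs/b", "/docs/b"],
    "via_header":    [True, False, True, False, True],
    "dwell_seconds": [120, 45, 15, 300, 10],
    "bounced":       [False, True, True, False, True],
})

# Averages across all views of each page.
overall = views.groupby("page").agg(
    avg_dwell=("dwell_seconds", "mean"),
    bounce_rate=("bounced", "mean"),
)

# The same averages, restricted to views that arrived via the header.
via_header = views[views["via_header"]].groupby("page").agg(
    header_avg_dwell=("dwell_seconds", "mean"),
    header_bounce_rate=("bounced", "mean"),
)

# We want header-referred engagement at or above the overall averages;
# a high-CTR link with a much higher bounce rate flags a label problem.
print(overall.join(via_header))
```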
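
The third metric, overall engagement rate, reduces to a session-level ratio. The event names here are hypothetical as well.

```python
import pandas as pd

events = pd.DataFrame({
    "session_id": ["s1", "s1", "s2", "s3", "s4", "s4"],
    "event":      ["pageview", "header_click", "pageview",
                   "pageview", "pageview", "header_click"],
})

# Of all sessions that saw a page carrying this header, what share
# clicked the header at least once? Track this as a baseline, and watch
# for dips after a navigation change rather than chasing increases.
saw_header = events.loc[events["event"] == "pageview", "session_id"].nunique()
clicked = events.loc[events["event"] == "header_click", "session_id"].nunique()
print(f"header engagement rate: {clicked / saw_header:.0%}")
```
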
[Image: A portion of the header performance dashboard]

We partnered with our analytics team to put these metrics into a dashboard so that every Docs author could check on the performance of their specific header. As an IA team, we use these metrics to establish benchmarks and further our own understanding of how users interact with navigation in general. We’ll also continue iterating on these metrics as we get more clarity on what’s a valuable metric and what’s just a vanity metric.

We hope to empower the folks who manage navigation headers on Docs to experiment with their categories, labels, and structures more frequently. Navigation should absolutely change over time to reflect changes in content coverage and user needs, but we often hear that our partners are hesitant to touch the navigation lest they break something or upset users. Having some data to help assess problem areas and see the impact of small changes will hopefully get us all over that barrier and turn what used to be a brittle navigation approach into a living, breathing navigation strategy that brings much more value to our users.
