Information Architecture, Or: How I Learned to Stop Worrying and Love the Process

Katie Luo
9 min read · Nov 11, 2019

It’s Week 4 of 10 in General Assembly’s User Experience Design Immersive class. All of us are a little sleepy, and all of us are now a little caffeine-addicted. But all of us are ready to tackle this looming, spooky, strange thing that our instructors and older classmates are calling Unit 3. It’s notorious for being drenched in ambiguity, but my fellow cohort members and I buckled down and dove into our task:

Unit 3 is all about analyzing the structure of a website. You’ve been hired by National Geographic Expeditions to assess their desktop website.

Little did past me know how often I’d come back to stare at this screen and think.

National Geographic Expeditions is an “expedition” booking site that lets users choose from a vast number of travel itineraries, activity choices, and transportation options. Every trip is created by NatGeo themselves and tailored to a specific kind of traveler, whether they’re interested in photography, river cruises, hiking, or private jet trips.

Now, it was finally time to dive into the information architecture assignment of our course — and I was really excited.

So, what did assessing NatGeo Expeditions’ desktop website entail?

First, we received a persona from previous research. Welcome to the team, Jayse!

Thankfully, we already knew our primary user and their goals, needs, pain points, and behaviors. With Jayse in mind, we went ahead into information architecture analysis.

What about these heuristics?

I won’t lie: heuristics felt really unnatural at first. For the past three weeks, we had been pouring everything into user research and trying our hardest to make no assumptions, and now we were going to do no research at all?

Eventually, I came to understand heuristics as a tool for establishing a baseline for how well designed a site is, especially since anything that qualifies as a “best practice” in a heuristic has already been well researched and established as a solid design choice. In reality, I wasn’t going off of zero research and making assumptions willy-nilly; I was taking the research tons of UX designers had already put out there and using it as a guideline to evaluate National Geographic’s website.

To evaluate our client’s website, my cohort and I followed the Abby Method, created by Abby Covert. It combines popular heuristic frameworks such as Jakob Nielsen’s 10 Usability Heuristics and Peter Morville’s UX Honeycomb.

I analyzed the National Geographic website based on ten particular traits:

  1. Findable
  2. Accessible
  3. Clear
  4. Communicative
  5. Useful
  6. Credible
  7. Controllable
  8. Valuable
  9. Learnable
  10. Delightful

We each analyzed four pages in total, chosen to follow a potential user flow. We scored each page on the ten traits listed above and made recommendations wherever there were issues. It looked a little something like this:

Google Sheets was the MVP for this project.
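For a more concrete picture of the scoring, here’s a minimal sketch of how a scorecard like this could be tallied in code rather than in a spreadsheet. The page names and scores below are hypothetical stand-ins, not my actual data:

```python
# Minimal sketch of a heuristic scorecard, tallied in Python instead of
# Google Sheets. Page names and scores are hypothetical stand-ins.

TRAITS = [
    "Findable", "Accessible", "Clear", "Communicative", "Useful",
    "Credible", "Controllable", "Valuable", "Learnable", "Delightful",
]

# Each page gets a 1-5 score per trait (only a few traits shown here).
scores = {
    "Homepage":     {"Findable": 4, "Accessible": 3, "Clear": 4},
    "Destinations": {"Findable": 5, "Accessible": 3, "Clear": 3},
    "Trip Detail":  {"Findable": 3, "Accessible": 2, "Clear": 4},
    "Booking":      {"Findable": 4, "Accessible": 3, "Clear": 2},
}

# Average each trait across pages to see where the site struggles overall.
for trait in TRAITS:
    values = [page[trait] for page in scores.values() if trait in page]
    if values:
        print(f"{trait:13} {sum(values) / len(values):.1f}")
```

The averages make it easy to spot which traits need recommendations first; the spreadsheet version did the same job with conditional formatting.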

With the heuristics analyzed, we began creating our first visual hierarchy sitemap for the site.

Sitemaps were kind of confusing at first, too.

Our class spent a lot of time trying to decide what counted as a page versus a navigation function. Primary, utility, and secondary navigation were identified easily enough, but how could one differentiate between an element on a page (something that’s part of the UI) and an actual navigation function?

In the end, I came to understand it something like this: document only the pages on a website, not the dynamic content. If I click on something, will it lead me to A) a permanent fixture on the site (as opposed to, say, a trip information landing page that could change in the future), and B) a page that I can navigate out of with ease and return to from anywhere on the NatGeo site?

These two questions helped me separate what was just part of the UI (a button leading to a certain trip itinerary for Baja California, for example) from what was actually part of the navigation, like a button that took me to all the trips involving kayaking & rafting.

I made my first sitemap in Draw.io, trying my best to make it readable and clear.

Now, with the heuristic analysis and the original sitemap in my tool belt, it was time to conquer the most challenging part of this project: card sorting.

We all went into card sorting with pretty much no instruction.

Our instructors sent us off with a brief lecture and a copy of a script we could use when we ran our sorts. Ta-ta, little caterpillar students! Go start growing into the butterflies you must become by the end of this course!

I tackled an open card sort based on the instruction we were given, which turned out to be just enough. I put all of the secondary navigation labels from the NatGeo site onto index cards (modifying the vague, one-word names to be just clear enough), and I gathered up five participants to sort my cards.

Once I had all my data, I began to synthesize my sort results. The biggest challenge here was coming up with some way to analyze my findings. I found online resources to be incredibly helpful, and I merged what I read from Shanshan Ma’s article on UXmatters and what tidbits we were told in lecture into my ultimate data analysis method.

Really, you could boil what I did down to organizing by color in Google Sheets and skipping all the math I could (similarity matrices scared me, and I’ve always been 100% a visual learner; just ask my exasperated math teachers how good I am at numbers).

First, I documented the trends across participants’ groups. I found five groups in common: Destinations, Activities, About Us, Transit Type, and Testimonials.

Then, I listed the cards in rows. In the columns, I listed every single group: the five main ones above, plus every one-off group that had few or no commonalities with the rest.

I counted how many times each card appeared in each group and color-coded the counts: dark blue means strong consensus; light blue means less consensus.

Then I color-coded the cards based on how spread out their placements were across labels, ranging from yellow (fairly spread out) to red (severely spread out). Each pattern below shows how five participants’ placements split; 3+1+1, for example, means three people agreed on one group while the other two each chose a different one:

  • Yellow: 3+1+1
  • Orange: 2+2+1
  • Light Red: 2+1+1+1
  • Red: 1+1+1+1+1

And now everything was organized! I could see at a glance where there was consensus and where there wasn’t; I could tell which cards needed serious renaming and which ones could probably get on okay.
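If you’d rather script this tally than color cells by hand, here’s a rough Python equivalent of what my spreadsheet was doing. The cards, group labels, and participant placements below are made up for illustration, and the color bands mirror the open-sort scheme above:

```python
from collections import Counter

# Hypothetical open-sort results: one dict per participant, card -> group.
sorts = [
    {"Photography": "Activities", "River Cruises": "Transit Type"},
    {"Photography": "Activities", "River Cruises": "Activities"},
    {"Photography": "Testimonials", "River Cruises": "Transit Type"},
    {"Photography": "Activities", "River Cruises": "Destinations"},
    {"Photography": "Activities", "River Cruises": "Transit Type"},
]

# Color band keyed by the split pattern of a card's five placements,
# e.g. (3, 1, 1) = three people agreed, the other two each went elsewhere.
SPREAD = {
    (5,): "no highlight",
    (4, 1): "no highlight",
    (3, 1, 1): "yellow",
    (2, 2, 1): "orange",
    (2, 1, 1, 1): "light red",
    (1, 1, 1, 1, 1): "red",
}

cards = {card for sort in sorts for card in sort}
for card in sorted(cards):
    counts = Counter(sort[card] for sort in sorts if card in sort)
    pattern = tuple(sorted(counts.values(), reverse=True))
    print(card, dict(counts), "->", SPREAD.get(pattern, "check by hand"))
```

The takeaway is that the split pattern, not the raw counts, drove the coloring; in the real project all of this lived in Google Sheets.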

Time to change things from the open card sort to prepare for the closed card sort!

I came up with new categories for users to sort cards into:

  • Transit Type
  • Stories & Testimonials
  • Destinations
  • About NatGeo Expeditions
  • Things To Experience

These categories were pulled from trends seen in open card sort category labels.

One concern was that three people created a “Testimonials”-type group, despite there being very little actual testimonial content on NatGeo Expeditions. I decided to create the category anyway and modify certain card names to try to compensate for the technically inaccurate label.

I modified other card names, too, particularly ones that had a notable lack of consensus or were often being misinterpreted. With these changes, I went into…

The closed card sort!

I found five more participants for the closed card sort and charted my results in Google Sheets, as usual.

In my closed card sort analysis, I followed the same structure as in my open card sort analysis: I listed group names in columns and cards in rows, and I counted each time each card appeared in each group. Again, dark blue means strong consensus; light blue means less consensus.

Additionally, I bolded and italicized the secondary nav names I had modified based on my recommended changes.

I highlighted in yellow, orange, or light red the cards that were either mildly or severely spread out.

  • Yellow: 3+2
  • Orange: 3+1+1
  • Light red: 2+1+1+1

A lot of the closed card sort analysis came from looking at this spreadsheet. I documented the names that needed the most help and the names that needed the least. I analyzed how my changes affected where previously problematic cards were sorted and how helpful those changes were, such as “Signature Land” becoming “Premium Journeys.” My change didn’t get that card unanimously sorted into its intended category, but users no longer showed outright confusion about what the name (“Signature Land”) even meant.

I also charted differences in how certain names performed in the open card sort versus the closed card sort.

For example, “Family Outing” had very different results in each kind of sort. In the open sort, it was primarily placed in what is now Things To Experience, but it struggled in the closed sort: one user thought it would be reviews of family trips, another thought it had to do with the type of transportation a family would use, and a third called it an outlier: “I don’t know what this means.”

Finally, I noted particularly interesting things to watch out for, such as “Photography” being dominantly categorized into Stories & Testimonials despite it being, according to National Geographic, a thing to do.

Based on the data I analyzed from my closed card sort, I came up with even more recommended changes that would go back into another card sort test, if this were a long-term project.

Moving on from card sorts…

With the results from my open card sort in hand, I made a revised sitemap based on the changes I’d recommended.

Most of the structure remained the same, as all of my changes consisted of renaming certain labels or categories.

And that was it! But what comes next?

In the field, if I were working with National Geographic Expeditions, I would continually test names and labels to see what worked best. While the information architecture on the current site is actually quite excellent as is, there are definitely some improvements that could be made (“Signature Land” is a clear issue!).

We could potentially also reorganize the site’s navigation and attempt to merge my sitemap with the current site’s layout, but considering that the actual navigation layouts of the site seem to meet best practices, this may not be totally necessary.

In the end, I came away with some valuable lessons.

My teammates and I supported each other a lot during this process. I said often during this project, “We’re all wandering in the dark here.” But I never meant it ominously; that was simply the nature of our project. Without clear instructions, our main resources became the Internet, and each other. Moreover, there was no way any of us could be wrong, because there was no explicit way to be right.

Additionally, I found that figuring out the methodology for how you’re going to analyze something can take longer than the analysis itself. To me, that was a big realization; I’d spent a long time wrestling with the results from my open card sort before I settled into something that worked for me.

One of the beautiful things about this project was that everyone came into analysis with their own methods, all of which worked for them as individuals! I can’t think of a better way to showcase the diversity of our thinking; every analysis method is valid, so long as you can confidently explain it and back up your conclusions with hard data.

All in all, I can’t wait to dive into my next project! Every week at GA feels like a year with how much I learn and grow, and I’ll be sad when it’s all over.
