Facilitating cross-functional usability conversations

Loriah Pope
Movable Ink Brand & Design
9 min read · Sep 11, 2020

How our EPD squad used a heuristic analysis to inform product decisions.


Background/Problem

At Movable Ink, our Engineers, Product Managers, and Product Designers work on close-knit teams to ensure we’re carefully considering the feasibility, viability, and usability of every project.

In this model, everyone on the team can bring their expertise to the larger group to facilitate more robust conversations.

A recent project found one of our teams in a position to evaluate an external tool to help us build better experiences for our users. For this effort, we knew we might look at several tools to choose the best one. From a design perspective, we wanted a consistent way to evaluate how each tool might fit our existing platform experience. How could we make sure we were measuring every candidate against the same bar?

A Venn diagram between Engineering, Product, and Design highlighting the focus on the end-user experience.
We needed a way to measure the end-user experience to evaluate these vendor tools.

After a conversation with the design team, I decided to try a heuristic analysis, starting with the heuristics outlined in Jakob Nielsen’s 10 Usability Heuristics for User Interface Design. This list of guidelines would provide a foundation for decisions that, ideally, could apply to any vendor tool we considered as part of this project.

From my perspective, this exercise could also be collaborative to help the full team learn more about what usability principles to look for in these vendor tools. There was an opportunity to demystify this part of the design process and to equalize the team’s collective knowledge.

Before the meeting

There were a few things I could prep beforehand to make the most of everyone’s time during the meeting:

First, I wanted to give the team a basic understanding of what a heuristic analysis is and communicate how it fits into our broader planning strategy.

Second, the team needed to align on which heuristics were most important for this project and assign weights to each item.

Third, we needed to be clear on our judging criteria.

Finally, I wanted to go through the exercise myself to speak to specific items, without taking extra time during the meeting to establish a design opinion.

I then began creating artifacts to help facilitate our exercise, starting with a new page in our shared workspace to house all of our supporting content.

What are “heuristics,” and which are important in this context?

The first resource needed to give the team background on what a heuristic analysis is. I made a new table with the following columns:

  • Weight: Assigns an order of importance to a specific heuristic based on the project.
  • Heuristic #: Keeps track of the heuristic’s place in Nielsen’s list of usability heuristics
  • Heuristic: The name of the heuristic
  • Nielsen Norman Description: A short description of what the heuristic is
  • How is this relevant to this project?: A brief description of how the heuristic might apply to this use case
  • Positive Examples: A few examples to help the team relate to an item on the list
  • Link to Nielsen Norman Heuristic: Link to the heuristic on the Nielsen Norman website
  • Severity: Relates to an additional table to highlight the severity score of a heuristic
A table with columns to help explain a heuristic analysis.
The first table answers, “what are our heuristics, and which are important?”
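To make that structure concrete, here’s a minimal sketch of how one row of the table might look as data, written in Python purely for illustration (the field values below are hypothetical; only the heuristic name and the Nielsen Norman link are real):

```python
# One row of the heuristics table, as a plain dictionary.
# Keys mirror the columns above; values are illustrative.
visibility_of_system_status = {
    "weight": 5,            # hypothetical order of importance for this project
    "heuristic_number": 1,  # place in Nielsen's list of ten
    "heuristic": "Visibility of system status",
    "nng_description": "Keep users informed about what is going on "
                       "through timely, appropriate feedback.",
    "project_relevance": "Users need clear feedback while the vendor "
                         "tool processes their content.",  # hypothetical
    "positive_examples": ["Progress bars during file uploads"],  # hypothetical
    "nng_link": "https://www.nngroup.com/articles/ten-usability-heuristics/",
}
```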

I wanted the team to follow along with me as we walked through each item in the table, and this consolidated view of the background information could easily ground the team. Columns that specifically call out how a heuristic applies to the project context, along with familiar examples, would help the team identify and recognize these heuristics without much prior experience, hopefully leading to a more collaborative conversation.

After walking through this exercise, the team needed to assign weights to each item based on its relevance to this project. This exercise would help us identify which items were critical to us and which were lower priority. For example, the team would probably write our own context-specific support documentation for our platform users, so we’d likely rely less on the vendor to satisfy the Help & Documentation heuristic. I documented my perspective on what the weights should be to give the team something to react to. That way, the team could ask questions and agree or disagree based on their understanding of the heuristic.

What are our judging criteria?

Next, the team would need to align on how to evaluate the tool against a particular heuristic. Our Senior Design Researcher, Hannah Graffeo, recommended a scale that was both flexible enough to support different observations and strict enough to highlight items that were not going to work. I created a second table to document these guidelines on the same page so the team could follow along as well:

  • 1 (not a problem): No problems, good as-is.
  • 2 (cosmetic): Small, cosmetic problems that can be resolved very easily, often with copy updates or other small fixes.
  • 3 (minor): Small, non-cosmetic problems. May be solved with minor code changes, additional instruction, or Support Center documentation. Can be resolved in a phase 2, after general release.
  • 4 (major): Larger problems that may require substantial code updates and implementation changes. Should be resolved prior to general release.
  • 5 (must fix): Non-negotiable problem that may not be changeable with the out-of-the-box solution. If we cannot update this, we should not move forward.
An image describing the severity scores and their rules.
The second table answers, “how do we recognize a problem?”
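For scoring, the same rubric can be captured as data. Here’s a minimal sketch in Python (my own framing, not the team’s actual tooling), where a 5 flags a potential blocker:

```python
# Severity rubric: lower is better; 1 is a perfect score.
SEVERITY_RUBRIC = {
    1: "not a problem: good as-is",
    2: "cosmetic: very small fixes, such as copy updates",
    3: "minor: small code changes or documentation; can wait for phase 2",
    4: "major: larger implementation changes; resolve before general release",
    5: "must fix: may not be changeable out of the box; a blocker if so",
}

def is_blocker(severity: int) -> bool:
    """A single 5 is enough to pause the evaluation until resolved."""
    return severity == 5
```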

How do we apply our judging criteria to what’s important in this context?

The team would need to connect the weights we assigned each heuristic to our judging criteria, spending the bulk of the meeting walking through the tool together in the context of our heuristics. I created a final table to keep track of observations and overall scores:

  • Heuristic: The name of the heuristic, linked to the first table to pull in information for that specific heuristic
  • What does this mean for our project?: Relates to the first table to play back how we should think about this heuristic
  • Positive Examples: Relates to the first table to play back familiar examples
  • Weight: Pulls in the assigned order of importance for a particular heuristic
  • Observations: List any observations that might contribute to the severity score
  • Severity Score: Select the severity score based on our observations, relating to the severity score table above
  • Severity Score (numeric): Extracts the numerical value from the severity score column to calculate an average
  • Perfect Severity Score: The ideal severity score (1), used to calculate the ideal weighted score as a benchmark
  • Perfect Weighted Score: Multiplies the item’s weight by the perfect severity score (1) to establish the benchmark
  • Actual Weighted Score: Multiplies the item’s weight by the actual severity score to get the overall score (the sketch after the table image below shows how these columns combine)
  • Screenshots: Link to any screenshots that impacted the severity score
  • Example Link: Link (if applicable) to a relevant example
  • Team Notes: Used to keep track of any notes from the team

That’s a lot! I only showed the relevant columns in the meeting, so the final table looked more like this:

The third table answers, “how does this tool measure against our values?”
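To make the roll-up math concrete, here’s a minimal sketch in Python with hypothetical numbers (the real calculations lived in formula columns in our shared workspace):

```python
def summarize(evaluations):
    """Roll up (weight, severity) pairs, one pair per heuristic.

    Severity uses the 1-5 rubric above, where lower is better.
    """
    weights = [w for w, _ in evaluations]
    severities = [s for _, s in evaluations]

    average_severity = sum(severities) / len(severities)
    perfect_weighted = sum(weights)  # every heuristic scoring a perfect 1
    actual_weighted = sum(w * s for w, s in evaluations)
    has_blocker = any(s == 5 for s in severities)
    return average_severity, perfect_weighted, actual_weighted, has_blocker

# Hypothetical example: three heuristics weighted 5, 3, and 1,
# scoring severities of 2, 1, and 4 respectively.
avg, perfect, actual, blocker = summarize([(5, 2), (3, 1), (1, 4)])
print(avg)      # 2.33... average severity (ideal: 1.0)
print(perfect)  # 9  -> the perfect weighted benchmark
print(actual)   # 17 -> the actual weighted score (lower is better)
print(blocker)  # False -> no must-fix items in this example
```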

Again, I went through the exercise myself and added observations and screenshots to help ground the team in evaluating the tool during the meeting.

With advice from our Director of Product Design, Ben Weaver, the last thing I did before the meeting was to organize a summary deck to help provide structure to the session. This deck walked through the expected meeting outcomes, briefly explained heuristic analysis, and included a summary of my findings to help ground the conversation for our team approaching this exercise for the first time.

During the meeting

In 60 minutes, the team would learn about heuristic analysis and how it could help us in our vendor selection process; agree on which heuristics applied to this scenario based on our understanding of feasibility, product goals, and user needs; and assign a score to the tool based on that understanding.

We emphasized that this exercise would help us learn, as a group, to think about usability, but the list was neither law nor an exhaustive evaluation of the tool. As a team, we agreed the tool’s final score should also take feasibility, viability, and solving for user needs into account.

Introduction slides to explain the meeting outcomes and explain heuristic analysis to the team

We walked through the first table (What are “heuristics,” and which are important in this context?), taking time to review examples from recognizable interfaces. We discussed which items might not apply to this project and which were more critical, referencing the weights I had assigned and adjusting where necessary.

Next, we walked through our second table (What are our judging criteria?) and aligned on what affects the severity score, agreeing on definitions and judging criteria.

Then we ran the actual evaluation exercise using the third table (How do we apply our judging criteria to what’s important in this context?). I shared my screen to walk through the tool’s demo environment, and the team went heuristic by heuristic to talk through the interface against our rules. I pointed out my observations, and the team added their thoughts as well.

After we went through every item on the list and discussed its severity, we assigned a final score and discussed next steps. This particular tool scored a 2.4 average severity (against an ideal of 1), a total weighted score of 97 (against a perfect benchmark of 34), and one 5 (must fix) item. The must-fix item was deemed a blocker to continuing with this particular vendor tool until we could find a solution. We now also have a baseline score to measure other products against, so we can make the best decision for interface usability.

An image of the final scores from our evaluation.
Our final summary provided a benchmark for measuring other tools.
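One way to read those two numbers together (my own arithmetic, assuming the benchmark works as described above): since the Perfect Weighted Score multiplies every weight by 1, the 34-point benchmark is simply the sum of the weights, so the weighted average severity works out to 97 / 34 ≈ 2.9. Landing above the unweighted 2.4 average suggests the rougher observations fell on the more heavily weighted heuristics.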

Tips

Take time beforehand to do the exercise yourself

As the team’s designer, I had the most prior experience with this methodology, so I made sure I was prepared to answer any questions the team had. Taking the time to go through the exercise ahead of the meeting meant I could answer questions more confidently as they came up.

Take all questions into account — this should feel collaborative!

Although we have different titles, we are building a product together. Nielsen recommends that three to five people participate in a heuristic analysis, and this exercise is only helpful if everyone participates! Making sure the full team represents its perspectives is also critical to gaining alignment.

Give the team something to react to

“Give the team something to react to” was great advice from our Senior Product Manager, Amber Britton, when we walked through the exercise in our 1:1. In a one-hour meeting, it could be overwhelming for the team to learn the ins and outs of a heuristic analysis and then immediately apply it. Working through the exercise myself beforehand encouraged the team to make their own observations and helped us make the most of our time together.


Takeaways

This was such a fun exercise to work through as a team! Having different perspectives represented during the heuristic analysis helped to elevate the whole team and reduce any unknowns. There were so many questions during the session that helped to shift my perspective and uncover additional values we might have when evaluating tools. I would recommend this process for anyone interested in facilitating a conversation about usability within a multidisciplinary team!
