UX research findings: distinguishing base hits from home runs

Paul McInerney
IBM Design
Apr 16, 2018

You’ve collected mounds of data for your UX study. And you’ve stared at the data long and hard to make sense of it. Finally, you’ve drafted a set of findings and verified that each one meets the basic quality criteria.

Many researchers would feel ready to share their findings with the project team. This article proposes two additional quality checks that can help you deliver better findings in your next UX study. These additional quality criteria are:

  • Value: The finding delivers a medium or high value.
  • Course correction: The course correction prompted by this finding is clear and advisable.

By thinking about these criteria intentionally, you can often fine-tune your findings to make them more valuable, clear, or persuasive.

Let’s start by discussing the value of a finding. It’s useful to be able to articulate the value of each finding because stakeholders often say things like, “We can’t address all these findings right away — which is the most important one?”

By analogy to baseball, we should be able to recognize whether a finding gets us to first base (low value) or is a home run (high value). A low value finding is still worthwhile — just like a steady stream of base hits often wins the game.

So, how can we assess the value of a research finding? To explore this question, let’s do an exercise using some real-life research findings. Review the examples below and rate the value of each one as high value, medium value, or low value using whatever criteria you deem appropriate.

Case study examples

Example 1: Advisor

Background: The project mandate was to deliver a tool that provides advice on one topic. UX research aimed to understand fine-grained user needs such as: How much detail do users want in the advice provided by the tool?

Finding from UX research: We discovered that users also need advice on a related topic.

Outcome: The project team was surprised by this finding. They expanded the project scope to include advice on this second topic. The resulting increase in project cost was negligible (+10%) because the algorithm that generated the advice required only minor enhancements.

Example 2: Troubleshooting aid

Background: The project mandate was to deliver a tool to help technicians troubleshoot problems. UX research sought to identify the information that technicians would find useful so the tool could provide it.

Finding: We were not able to produce one list that worked for everyone. Instead, we found there were two types of technicians, each using a different troubleshooting method that required its own set of information.

Outcome: The team expanded the solution design to cater to two troubleshooting methods. This expanded scope required the project budget to increase by 50%.

Example 3: Dashboards A

Background: The project mandate was to enhance the content in existing dashboards in response to anecdotal customer requests for enhancements. UX research was conducted to identify suitable changes.

Finding: We discovered the problem was not with the dashboard content but the overall dashboard organization. To address the real user need, an entirely different set of dashboards was needed — dashboards organized around each task rather than around each part of the system.

Outcome: The team pivoted from tweaking the existing dashboards to defining a new set of dashboards. There was a 200% increase in project cost — because creating a new set of dashboards is a bigger undertaking than tweaking existing ones.

Example 4: Dashboards B

Background: The project mandate was to deliver a new dashboard. UX research aimed to validate the content the team planned to include.

Finding: One suggestion was to group the issues on the dashboard by category. When we elicited the underlying need, we learned that users typically work with issues in a single category at any given time.

Outcome: The team recognized the suggestion was good and adopted it. There was no impact on the project cost because the budget already accounted for the need to iterate on the original design.

The examples are summarized below:

Summary of examples

Of course, these thumbnail descriptions gloss over many real-life details. Before a project team accepts a finding, they need to see evidence and understand why users feel the way they do. For instance, in Example 1, the team would want to know why users want advice on the other topic. As well, some team members can be … er … not easily convinced when a finding indicates they’ve been on the wrong path. So, getting from research finding to product team outcome can be a more arduous and convoluted path than these descriptions suggest.

Quality check #1: Value of the finding

What criteria determine whether a research finding is high, medium, or low value? Here are some answers I’ve heard in the past:

“The value of a finding is proportional to the size of the benefit for users; a great research finding has a great deal of benefit for users.”

“The value of a finding is proportional to the size of the response by the project team; a great research finding is one that greatly changes the thinking and direction of a project team towards a better outcome.”

While these answers each identify one important factor, the approach proposed here considers multiple factors. The approach is based on the Importance vs. Feasibility matrix used in priority-setting exercises. The criteria for high, medium, and low value research findings are shown below:

Value of a finding based on UX impact and project feasibility

This approach rates the value of a research finding by considering both the impact on users (that is, the improvement to the UX) and the cost to the project team (such as pushing out the delivery date or redeploying staff from another initiative).

The assessment is done using a before-and-after comparison. For instance, consider one of the case study examples:

Example 3: Dashboards A

  • BEFORE the finding: The project team planned to tweak existing dashboards.
  • AFTER the finding: The project team pivoted to replace the existing dashboards with different ones.

Considering the size of the impact on the UX, this example rates a High score, according to my own rule of thumb:

  • High impact: At least 50% of the original user experience is changed by the finding
  • Medium impact: 10% to just under 50%
  • Low impact: Less than 10%

On the feasibility dimension, this example gets a Low feasibility rating because it entailed an incremental project cost of 200%. So, the high impact rating and the low feasibility rating would result in an overall value rating of medium or possibly low.
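
To make the rubric concrete, here is a minimal sketch of it as code. The impact thresholds follow the rule of thumb above, but the feasibility cutoffs and most of the value-matrix cells are my own assumptions; the full matrix only appears in the figure, so treat this as illustrative rather than definitive:

```python
# A minimal sketch of the value rubric. The impact thresholds follow the
# rule of thumb above; the feasibility cutoffs and most value-matrix cells
# are assumptions, not taken from the original matrix.

def rate_impact(pct_ux_changed: float) -> str:
    """Rate UX impact by the share of the original experience that changes."""
    if pct_ux_changed >= 50:
        return "high"
    if pct_ux_changed >= 10:
        return "medium"
    return "low"

def rate_feasibility(incremental_cost_pct: float) -> str:
    """Rate feasibility by incremental project cost (hypothetical cutoffs)."""
    if incremental_cost_pct <= 10:
        return "high"      # e.g., Example 1's negligible +10%
    if incremental_cost_pct <= 50:
        return "medium"    # e.g., Example 2's +50%
    return "low"           # e.g., Example 3's +200%

# Assumed matrix: (impact, feasibility) -> overall value of the finding.
VALUE = {
    ("high", "high"): "high",        # Example 1: a home run
    ("high", "medium"): "medium",    # Example 2
    ("high", "low"): "medium/low",   # Example 3
    ("low", "high"): "medium",       # Example 4: humble but nearly free
}

# Example 3 (Dashboards A): most of the UX changes, at +200% cost.
print(VALUE[(rate_impact(60), rate_feasibility(200))])  # -> medium/low
```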

My ratings for all the examples are shown below:

Ratings of examples (impact × feasibility = value)

Examples 1 to 3 vary only in their feasibility, which accounts for the different value ratings. Example 1 illustrates the type of research finding that is most appealing to a project team — it produces a large improvement in the user experience with a small incremental cost. This is a home run!

Example 4 illustrates the common, humble research finding — it’s not going to change the world, but at least there is little or no cost entailed. That makes it a medium value finding in my book.

In summary, this section examined research findings on a quantitative dimension by asking how much value a research finding provides.

Quality check #2: Type of course correction

It’s a commonplace that a finding must be actionable; in other words, it should prompt some type of course correction to the project. As findings can steer a project in different directions, it’s worth scrutinizing our findings and asking the following questions:

Is the course correction prompted by this finding clear? Is it advisable? Could the finding be re-framed to prompt a different and better outcome?

Answering these questions can help you clarify the implications of what you’re proposing, and thus communicate and advocate for your findings more effectively.

To help perform this quality check, this section will present a taxonomy of course corrections. But first, you might want to re-read the case study examples; the Outcome section of each example illustrates a different type of course correction. See if you can identify each one.

The first point to note is that a finding can point to a shortcoming in either (1) the problem statement or (2) the design solution. For instance, consider one of the examples:

Example 1: Advisor

  • BEFORE the finding: The project scope was to provide advice on one topic.
  • AFTER the finding: The project scope was extended to provide advice on one more topic.

This example illustrates a course correction to the problem statement: the finding resulted in the project addressing a larger problem scope.

All the remaining examples involve changing the project solution. For instance:

Example 3: Dashboards A

  • BEFORE the finding: The solution was to tweak existing dashboards.
  • AFTER the finding: The solution was changed to replace the existing dashboards with a new set.

The second point to note is that a finding can change, extend, or refine some aspect of the project direction:

  • Change: “You’re going in the wrong direction, so change direction, i.e., pivot.”
  • Extend: “You’re going in the right direction, but you’ve overlooked a major opportunity. So, extend your project scope to address it.”
  • Refine: “You’re mostly on the right track, but you need a minor refinement.”
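
For readers who like their taxonomies explicit, here is a minimal sketch that encodes the two dimensions as data. The type names are hypothetical, and the tags on each example anticipate the summary that follows:

```python
# A minimal sketch of the taxonomy as data. The type names are hypothetical;
# the tags mirror the summary of the case study examples below.
from dataclasses import dataclass
from enum import Enum

class Target(Enum):
    PROBLEM_STATEMENT = "problem statement"
    SOLUTION = "solution"

class Kind(Enum):
    CHANGE = "change"   # wrong direction: pivot
    EXTEND = "extend"   # right direction, but a major opportunity was missed
    REFINE = "refine"   # mostly right: minor refinement needed

@dataclass
class CourseCorrection:
    finding: str
    target: Target
    kind: Kind

corrections = [
    CourseCorrection("Advisor: users also need advice on a related topic",
                     Target.PROBLEM_STATEMENT, Kind.EXTEND),
    CourseCorrection("Troubleshooting aid: two technician types, two methods",
                     Target.SOLUTION, Kind.EXTEND),
    CourseCorrection("Dashboards A: organize around tasks, not system parts",
                     Target.SOLUTION, Kind.CHANGE),
    CourseCorrection("Dashboards B: group issues by category",
                     Target.SOLUTION, Kind.REFINE),
]

for c in corrections:
    print(f"{c.kind.value} the {c.target.value}: {c.finding}")
```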

Considering both points (or dimensions) together, we can summarize the case study examples as follows:

Type of course correction illustrated by each example

Using this taxonomy, we can clearly describe the course correction in each example as follows:

  • Example 1 (Advisors): Extended the problem statement to also provide advice on an additional topic.
  • Example 2 (Troubleshooting aid): Extended the solution to support the full user population, which consists of two types of users.
  • Example 3 (Dashboards): Changed the solution from tweaking existing dashboards to replacing them with a new set.
  • Example 4 (Dashboards): Refined the solution to add a breakdown of issues by category.

Here are additional examples to fill in the missing cells in the table, based on modifying Example 1:

  • Example 1A, a change to the problem statement: We discovered that users did not want advice on topic A; instead, they wanted advice on topic B.
  • Example 1B, a refinement to the problem statement: We validated that users wanted advice on topic A, but only on three key subtopics.

In summary, this section built on the commonplace that a finding must be actionable by providing a quality check that helps us be clear about what type of action we’re proposing.

Conclusion

We’ve seen two quality checks that can improve the quality and impact of research findings. The first assesses the value a finding provides, from low to high. The second helps us ensure that the course correction prompted by each finding is framed clearly and steers the project in the right direction.

Paul McInerney works in a UX research and design role at IBM in Toronto. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions. Thanks to Jake Volz for his valuable contributions!
