Always Ask Your Users One Simple Question

Tim McCollum
Published in Nutanix Design
7 min read · Mar 22, 2021

Successful product releases must typically go beyond just providing new capabilities. The adoption, user satisfaction, competitiveness and delight of any new release are often determined more by how a new capability is delivered than by its fundamental utility. In other words, the overall UX of a newly released capability, or product, will often disproportionately determine its adoption rate, competitiveness and overall appeal. However, balancing the correct mix of utility and design attributes required to deliver a compelling offering, while not wasting effort on largely irrelevant design traits, is a classic example of a wicked problem [1] that defies a single algorithmic solution.

The overall appeal of a UX is determined by a variety of intersecting traits including, but not limited to:
1. Utility
2. Discoverability
3. Aesthetics
4. Intuitive navigation and “sense of place”
5. Insightful, and/or even playful, delighters
6. Putting the user in control
7. Understandable, and actionable, feedback
8. Well-written UX text
9. Designed for recognition rather than recall
10. Matches a physical world analog, or provides a clear, consistent mental model
… to name just a few

In addition to the complexity of assessing a multitude of traits, the value of each trait sits on a highly context-sensitive sliding scale. As a simple example, if I’m delivering a new feature that extends existing functionality that is easily discoverable and obvious to use, I may not need to spend time adding “delighters” to the UX, or augmenting it with additional graphics. On the other hand, if I’m delivering a novel capability that requires a shift in mental model, I may need to focus on new conceptual graphics, exceptionally clear guidance, and gamifying delighters to encourage use and rapid adoption.

To further complicate matters, a new capability may deliver enough utility that, even when solid design practices aren’t well executed, the feature still represents a substantial UX leap forward and justifies shipping before other design shortcomings are addressed. So, while measuring each design element separately enables teams to identify specific UX problem areas, we cannot simply add up all the UX component scores to yield a meaningful assessment of a new release’s overall experience. No single formula, applied to a weighted set of design elements, guarantees a competitively advantageous UX design.

Several good approaches and techniques have been developed for determining when a design meets an acceptable threshold. Approaches like heuristic analysis [2,3] attempt to verify whether best design practices have been followed, and thereby increase the probability that a release will deliver a satisfying user experience. Companies also develop scorecards that largely depend upon the professional judgment of design and product experts. Many companies work in partnership with users throughout the design and development cycle, in the belief that satisfying even a small set of users means you are more likely to satisfy the majority. Still other approaches, like SUS and UMUX-Lite [4,5,6], ask users to judge whether an offering meets acceptable levels of usability. However, none of these techniques provide a simple means to assess whether utility, usability and delight have been successfully balanced to produce a truly compelling offering capable of driving adoption and competitive advantage.

Successful UX-focused organizations utilize variations of all the above techniques because employing a few validation steps prior to shipping is much less expensive than support engagements, fixing customer-found defects, frustrating customers, and/or irreparably damaging the first impression of the capability (and possibly your brand). However, starting a new program for evaluating UX readiness can be a daunting task requiring one to navigate a maze of possible approaches and variations. In my experience, I’ve found starting as simply as possible is best. At the core of any program to assess UX readiness are two simple practices:

1. Work with real users throughout the design and development process.

2. Explicitly ask users how they think the feature/product will be received by their colleagues.

1. Work with real users throughout the design and development process.

Make sure you have evaluated any significant feature’s design with at least 6 real users. Regular feedback from users is the most cost-effective way to continuously verify you are providing the right mix of utility, usability and delight.

Many teams make the mistake of assuming user validation belongs only at the end of the design cycle. This leads to an often-heard chorus of “user feedback slows us down” followed by a refrain of “and it’s too late to change anything anyway”. It is just as vital, perhaps more so, to validate early directions and conceptual models with users before you have any mocks to show. By doing these early check-ins with real users, we have found, more than once, that our basic theories about what we intended to deliver were off target. In one case we found that users didn’t want us to automatically fix configuration issues. Instead, they wanted us to help them better understand why system configurations had drifted apart. They wanted to maintain detailed control over when and how the configurations align rather than trusting it to a single button click. This significantly changed the direction of the UX and the product roadmap.

When user validation is done toward the end of the cycle, these kinds of directional questions often get overlooked, or avoided. Users who are asked to review high-fidelity mock-ups tend to focus on detailed UI issues rather than on the need for, or the underlying assumptions of, the feature itself. To compound matters, the product team often doesn’t want to address these kinds of directional changes late in the design cycle, and frequently biases the conversation away from such topics, intentionally or not.

A good rule of thumb is to evaluate your direction with at least 3 users prior to creating any detailed UI. Evaluate in-progress designs with another 3 users as designs near the three-quarters completion point (i.e., when primary task flows are mocked up, and all primary “blast impacts” to adjacent features are understood). Keep in mind that this is a minimum. Doing more user assessments is generally better.

Just a quick note about users: it is important that the users truly be users. Organizations often use the terms “customer” and “user” interchangeably. They aren’t. Users are the people who actually have their “hands on keyboards” using your products. Customer encounters often attract managers, execs and purchase decision-makers. UX leads should attend those customer engagements, and the customer input should be integrated into the design, but the readiness of your design should be determined by feedback from real users.

2. Explicitly ask them how they think the feature/product will be received by their colleagues.

As I said above, “start as simply as possible” and the simplest way to start is asking one basic question. The single question I’ve found to yield the best results is a variation of a Net Promoter Score (NPS) question customized for UX assessments:

“On a scale from 1 to 7, how compelling will your colleagues find this feature/product to use?”

1 (useless) 2 3 4 (meh) 5 6 7 (thoroughly delightful)

Notice the question does not ask the user “how appealing do you find this product?”. I have asked the “you” question in the past, and I’ve asked it in conjunction with the “your colleagues” question above. In nearly all cases, asking users how they personally like the product results in higher scores. This is most likely due to “good subject” bias [7], where participants are reluctant to risk directly offending the testers/designers in the room. If you have the time and inclination to ask both questions, do so, and see what you find. However, if you only ask one question, ask users to rate the overall appeal through the eyes of others. You are likely to get more reliable, less biased answers.

If you can just get your organization to always ask this one simple question after any user validation session, you can easily begin estimating the overall readiness of the specific feature/product you’re evaluating. In addition, you can use this same question to assess the appeal of your existing features, and to estimate the net benefit an update’s overall experience will provide to users.

Lastly, I’ve tried several scales for this question: thumbs up/down, A-F letter grades, 10-point, 5-point, etc. The 7-point Likert scale, in my experience (as well as the experience of others [8]), yields the most meaningful sensitivity for UX evaluations. While no systematic standardization of results has been done, my experience has been that any UX receiving an average rating below 5.5 (n>5) probably isn’t ready. In addition, if you receive more than one rating below the midpoint, the UX isn’t ready to ship regardless of the average rating (unless your sample is large, n>20).
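If you want to tally responses consistently across teams, here is a minimal sketch of that rule of thumb in Python. The thresholds (5.5 average, midpoint of 4, sample sizes of 5 and 20) come straight from my experience as described above; the function name and structure are just illustrative.

```python
def ux_ready(ratings, avg_threshold=5.5, midpoint=4, large_sample=20):
    """Apply the readiness rule of thumb described above.

    ratings: 1-7 Likert responses to the "colleagues" question.
    Returns True/False, or None if there are too few responses to judge.
    """
    ratings = list(ratings)
    n = len(ratings)
    if n <= 5:
        return None  # the rule of thumb assumes n > 5

    average = sum(ratings) / n
    below_midpoint = sum(1 for r in ratings if r < midpoint)

    # Rule 1: an average below 5.5 means the UX probably isn't ready.
    if average < avg_threshold:
        return False
    # Rule 2: more than one rating below the midpoint blocks shipping,
    # unless the sample is large (n > 20).
    if below_midpoint > 1 and n <= large_sample:
        return False
    return True

# Six responses, average ~5.67, one rating below the midpoint: ready.
print(ux_ready([6, 7, 5, 6, 3, 7]))   # True
# The same average isn't enough when two raters scored below the midpoint.
print(ux_ready([7, 7, 7, 7, 3, 3]))   # False
```

The second example is why the below-midpoint rule matters: a healthy average can hide a minority of users for whom the experience fails outright.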

Implementing a set of practices for assessing a UX’s readiness for release can be daunting and require buy-in from many stakeholders. The lighter you can keep a new process, the more likely you are to gain their buy-in. Starting with a simple question like this immediately gets you useful, meaningful data, and provides you with an opportunity to more diligently evaluate how you can best integrate other assessment techniques into your team’s culture and process.

Most of the standard instruments in use today are quite useful, but were primarily developed to ensure a baseline level of usability. Due to the unpredictable ways utility, usability and delight interact, these techniques may not provide a straightforward measure of a new capability’s overall appeal. If you’re looking to jumpstart a program for easily assessing the experiential readiness of a release, start with two simple practices. Evaluate your design direction early, and often, with real users, and then simply ask them how they think others will like it.

References

1. Wicked problems: https://www.interaction-design.org/literature/topics/wicked-problems

2. Nielsen Norman Group’s “original” heuristics: https://www.nngroup.com/articles/ten-usability-heuristics/

3. A shorter version (meh…): https://uxdesign.cc/a-quicker-heuristic-analysis-589cb8a6561a

4. A practical guide to SUS: https://uxdesign.cc/a-practical-guide-to-sus-9f41a2cb5a55

5. Measuring UX: account for aesthetics: https://uxdesign.cc/measuring-ux-account-for-aesthetics-600fa66f4cd9

6. UMUX-Lite: https://measuringu.com/umux-lite/

7. Subject bias: https://www.alleydog.com/glossary/definition.php?term=Subject+Bias

8. Response interpolation and scale sensitivity: https://uxpajournal.org/response-interpolation-and-scale-sensitivity-evidence-against-5-point-scales/
