The 90/30 rule for user research confidence scores

Peter Parkes
Apr 29, 2020 · 2 min read

Confidence intervals are everywhere in the quantitative research world, but in small scale qualitative research there isn’t a truly analogous concept. Nevertheless, being able to express how confident you are that an insight is accurate is useful, and that poses a question: how do you calculate a confidence score for insights from user research?

At Qualdesk, we’re big fans of making them up. And this isn’t as crazy as it seems.

In your own research, you’ll almost certainly have shared findings that you were very confident in, and others that you weren’t so sure about. And when you shared them, you (hopefully) let your audience know, separating the ‘safe bets’ from the more ‘emergent’ insights.

Whatever your choice of terminology, the premise is the same: if you share an insight with someone, and you also let them know how confident you are in it, it helps them to make an informed decision about the actions they might take as a result.

How do you score an insight?

The 90/30 rule for user research insight scoring

There’s a trade-off to resolve here between perfection and speed, and at Qualdesk we believe perfection is out of reach: it’s hard to gauge the accuracy of data gleaned from small sample sizes.

On that basis, we’ve prioritised speed. And that’s where the 90/30 rule comes from.

First, assign every insight to be scored either a 90% or a 30% confidence score.

A 90% score means that we’re pretty certain about it and a 30% score means that more research is required — that there’s something about the insight that, well, we’re not confident in.

Next, for all of the 30% insights, we ask whether making any of the following changes would increase our confidence:

  1. Reducing scale (e.g. an insight that says “Users do X” could be de-scoped to “Users in Segment Q do X”)
  2. Reducing frequency (e.g. changing “often” to “sometimes”)
  3. Reducing impact (e.g. changing “furiously angry” to “mildly annoyed”)

If one of these changes helps, we increase the confidence to 60% (if we still believe more research is required) or to 90% (if the scope reductions make us confident).
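The two-pass process above can be sketched in code. This is a minimal illustration, not anything from Qualdesk's product — the `Insight` class and `rescore` function are hypothetical names, and the flags stand in for the human judgment calls the rule actually relies on:

```python
from dataclasses import dataclass

# Hypothetical representation of an insight; names are illustrative only.
@dataclass
class Insight:
    text: str
    confidence: int  # 90 or 30 after the first pass

def rescore(insight: Insight, narrowed: bool, still_needs_research: bool) -> Insight:
    """Second pass of the 90/30 rule, applied to a 30% insight.

    narrowed: True if reducing scale, frequency, or impact made the
        insight more defensible.
    still_needs_research: True if, even narrowed, we'd want more data.
    """
    if insight.confidence != 30 or not narrowed:
        # 90% insights (and 30% insights we couldn't narrow) stay as they are.
        return insight
    new_confidence = 60 if still_needs_research else 90
    return Insight(insight.text, new_confidence)

# First pass: every insight gets 90 or 30.
insights = [
    Insight("Users in Segment Q sometimes do X", 30),
    Insight("Users rely on the export feature weekly", 90),
]

# Second pass: narrowing the first insight's scope raises our confidence,
# but we still want more research, so it lands at 60%.
insights[0] = rescore(insights[0], narrowed=True, still_needs_research=True)
```

The point of keeping only three possible scores (30, 60, 90) is speed: the scorer never debates whether an insight is a 55 or a 65, only which bucket it belongs in.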

Is this perfect? Absolutely not. Does it help us screen out the ‘good’ from the ‘maybe’ insights quickly? Definitely. Does it help us keep moving? Yes. And does it help us plan continuous research? Yes.

A final word

You may disagree about the 90/30 rule, and you may have your own system and rules for scoring — and that’s why we’ve made the confidence score property in Qualdesk Insights a simple percentage, so you can implement your own system.

However you do it, assigning scores gives your peers a clear guide as to which findings you believe are the most accurate, and allows them to plan and take action quickly as a result.


Peter Parkes

Founder of Qualdesk, formerly at Made by Many, Skype and Expedia