Why we need to stop talking about the value of UCD in policy making (and get on and do it)

Jack Collier
Published in policydesign
May 8, 2019

Does UCD add value to policy making?


During my time leading the User-Centred Policy Design team in the Ministry of Justice, I was often challenged to explain exactly how we knew that applying user-centred design to policy making was better than not doing so. Someone once described it as ‘belief driven rather than value driven’, and there was always the risk that UCD in policy making would be dismissed as just a ‘nice to have’.

‘belief driven rather than value driven’

Even now, colleagues setting up their own user-centred design teams across government ask for case studies or figures that will prove definitively to their bosses that user-centred design adds value to the policy-making process. These questions are impossible to answer.

In fact, lots of very intelligent people have tried. Thomas Prehn, previously of MindLab in Denmark, the world’s oldest policy lab, has written about how hard it is to measure the value-add of a UCD methodology. Similarly, Shanti Matthew of Public Policy Lab in New York described how her team measured impact through a number of methods, including longitudinal studies. This was incredibly high-effort and still inconclusive.

So why pursue a difficult and often expensive user-centred design process in the policy making world if it’s not clear whether it’s better to do so?

First up, we’re asking the wrong question.

That’s because you’re trying to measure the impossible.

We talk about the value of policy design as a ‘risk mitigator’. It helps reduce the risk that we design the wrong thing. It’s similar to why we include lawyers on teams — to reduce the risk that we design something that’s open to legal challenge and failure.


However, it’s impossible to measure this ‘what if’ scenario: what would have happened had we followed a different approach. You’re trying to prove something that didn’t happen, and you can’t estimate the benefit of a counterfactual.

We also need to consider what exactly we’re evaluating

‘User-centred design’ is an umbrella phrase for a broad methodology. The tools and practices used in one project may differ from those used in another, yet both are user-centred.

What’s more, we’re making a conscious effort in government to adapt user-centred design practices built for the commercial sector to a social-impact context. The methods are constantly evolving. Annoyingly, the thing disappears out of your hands as soon as you try to define it.

Finally, what would be the purpose of tracking and evaluating the methodology itself?

Sure, we’d have a lovely set of performance indicators at the other end of it, but we’d have spent a load of time and effort navel-gazing: building vanity metrics that make us feel good or prove a certain point of view, rather than actually delivering value to users.

OK, frustrating. So what are the right questions to ask?

If you start with the solution (user-centred design), you’re biasing your outcome. We’re not user-centred design evangelists. It’s not a religion; it’s a tool, and we can renounce it as soon as it stops being useful. We want to make sure we choose the best tool for the job, so we need to be neutral about the tool we select.

‘The Church of UCD’ is something we want to avoid.

To choose the right tool, we’d want to ask the following questions:

  1. Does the methodology increase my confidence in the decisions I’m making?
  2. Does the methodology reduce the time between making a change and measuring an outcome?
  3. Does the methodology mean I can make decisions more quickly?

When viewed through this lens, user-centred design comes out well against other methods for problems with a behaviour-change element. It’s clearly a good way to develop policy: projects run in this way focus on rapid testing at small scale and on measuring outcomes, and decision makers feel better able to make evidence-based decisions.

In summary:

You can’t measure whether UCD leads to better policy than other approaches because:

  • UCD is not a single, stable thing that exists to be measured
  • Unless the measures are user outcomes, we’re creating vanity metrics
  • You can’t measure the ‘what if’ scenario

Instead, we should judge any method on whether it achieves outcomes for a decision maker. Does it:

  • Increase our confidence in our decisions
  • Reduce the time between a change and measuring a result
  • Reduce the time required to make decisions

The powerful thing about this framing is that it accepts that UCD may not always be the best approach. For example, we can imagine a world where virtual simulations offer the best answers to these questions, at which point we would have to stop talking about UCD and start talking about virtual scenario analysis.

By the way, this list of questions is not exhaustive, and the point of this article is to say we should consciously use UCD approaches when they make sense. However, we should stop asking for evidence that simply doesn’t exist (such as ‘how much money has it saved?’) and instead get on with delivering outcomes.

So, given the strategy is delivery and all, let’s stop talking about how to measure the impossible and start trying to deliver outcomes to users in the fastest possible way.

Jack Collier

Doing product, delivery and UCD to change things for the better. Previously DD in Gov, now at Transform.