How to quantify the value of design
As a designer, a recurring theme at almost every company I’ve worked at has been the need to justify design as a contributor of value to projects. Over the past twenty years, I’ve been asked to ‘colour in’ wireframes, ‘prettify’ slides and — horror of horrors — ‘Mac stuff up’. The preconception seems to be that design is purely an aesthetic discipline.
In this article, I’m specifically talking about design as a broad spectrum of activities that generally fall under the digital design umbrella: visual, content, UX, UI and service design, alongside user research.
It’s an odd position to take, given almost everything man-made that we interact with in our daily lives is informed by a set of deliberate choices — design choices. From the way you brush your teeth to the bumps on your F and J keys to the trees outside your house. Design informs all of it. Even that bit of the M25 was designed.
Design inherently has value in our lives, yet is regularly relegated to a superfluous activity rather than a core driver of change and a contributor of value. Understanding and communicating the value that design activities can bring to a product, project or programme has been a focus of mine for a while. Because if designers are not responsible for articulating the value of their work, who is?
Over the last few years, a number of think tanks and high-paid consultancies have researched and proposed models for calculating and communicating the value of design. Studies from organisations including the Design Council, the Design Management Institute and frog Design outline how ‘companies that invest in design reap the financial benefits’ (usually indicated by greater stock-market value and higher revenues), and how design contributes to the UK economy.
McKinsey have taken it even further by creating their proprietary ‘McKinsey Design Index’, or MDI. Over five years, they tracked the design practices of over 500 publicly listed companies and assessed each company to generate an MDI score. They found a correlation between embracing design practices throughout the organisation and stronger stock-market performance and shareholder returns.
That’s great if you’re a company wrapped around a product or service provision, like T-Mobile, Pixar or IKEA.
But what about UX designers engaged in coal-face work, asked to justify their inclusion in a project plan, or researchers whose insights get stripped from user stories because ‘they’ll add overhead’? How do they describe their contributions in terms that stack up next to more easily quantifiable disciplines, like engineering? A high MDI score matters little to a delivery manager who is trying to work out whether they need another content designer.
Rather than the abstract notion of ‘design as structural enabler’ that underpins the McKinsey/Design Council/DMI/frog thinking, I’m interested in how we measure the value of design-as-activity in real terms. What does the inclusion of user research in a project delivery mean in time or monetary savings? How does service design as an activity contribute to the success or failure of project delivery? And how do we measure it?
Forrester’s report The Business Impact of Design goes some way toward this, showing how design efforts for some companies paid off with tangible business benefits:
- Facebook ran a ‘fix-a-thon’ to address 100+ minor visual issues, leading to a significant increase in revenue from the target segment.
- Nationwide curbed investment in self-service features where research showed that customers valued interacting with a human, eliminating wasted spend and effort.
- PayPal saw a significant increase in conversion and revenue after redesigning the content of their international checkout to remove jargon and legal terminology.
The report goes on to reinforce the point that designers need to get comfortable with the language of business to quantify the benefits of their work. Embracing design-specific metrics and analytics, and establishing them early within projects, needs to be the default, not the exception. Linking qualitative insight about experience improvements to quantitative data on actual behaviour is critical in demonstrating the impact of design, and relatively easy to do.
For me, part of the ‘magic’ of design comes from helping teams to focus on goals and outcomes rather than metrics like NPS scores or FTE savings. Design reframes the problem to look at underlying causes, and explores multiple potential solutions, most often in a user-centred way. It shifts the conversation from asking HOW to asking WHY, opening the floor to any number of possibilities.
Focusing on cost-reduction metrics, such as reducing the number of calls to a customer contact centre or automating repetitive processes, often puts the (perceived) solution ahead of the problem itself. These solutions often degrade the user experience, leading to users not engaging with the product or service — the opposite of the desired effect. By understanding what the organisation is trying to achieve, and more importantly why, designers can expose greater opportunities that help to deliver the desired outcome.
In the above example, it’s highly likely that good content design, or a user-centred redesign of a help page would lead to fewer calls.
Design as a contributor of value
As Head of Design at Kainos, I’ve been working on a framework that allows both designers and non-designers to articulate the quantifiable value that design can bring.
Activities describe the things we do and the tools and methods we use as practitioners, e.g. design sprints, co-design workshops, user-centred content design, experience mapping, qualitative user research. This isn’t about discipline-level roles; to be able to attribute direct benefits, it needs to be more granular and specific than just ‘user research’ or ‘UX design’.
Outcomes are the direct results of those activities: in the case of a design sprint, this might be an increased speed to market, or that multiple ideas have been explored. These are direct outcomes, but still relatively difficult to quantify with any certainty.
To get closer to measuring the value derived from the activity, we need to describe the direct business benefit each outcome delivers. For increased speed to market, this might be the de-risking of decision making, or a clearer understanding of user and market fit.
Once we have established the benefits provided by each outcome and delivered by each activity, it becomes far easier to work out how to measure the value. Each measure needs to be quantitative, underpinned by a specific, objective metric — something that design has traditionally found difficult. Examples include the number of ideas tested, a shorter time to deliver a minimum viable product (MVP) than on previous projects, or a reduced cost to deliver an MVP than on previous projects.
Stealing from the Connextra user story template, you can humanise this framework of design activity, outcome, benefits and measures: by undertaking [activity], we will achieve [outcome], which delivers [benefit], as measured by [measure].
So a direct, measurable outcome of experience mapping is how quickly the team can prioritise what to work on. Or the number of pain points identified. Or number of areas identified for further exploration.
My colleague Cat Bliss pointed out that this framework can be inverted, too. Teams looking for specific benefits or measurable impacts can select one and work backwards to identify an activity that will support their objective. For example: we need to get prioritisation done in a day, which means we need to understand pain points in context; to achieve that, the team could undertake experience mapping.
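To make the shape of the framework concrete, here is a minimal sketch of how the forward mapping (activity → outcome → benefit → measure) and the inverted lookup could be modelled. The catalogue entries below are illustrative assumptions drawn from the examples in this article, not the actual Kainos framework:

```python
# A minimal sketch of the activity -> outcome -> benefit -> measure framework.
# The entries below are illustrative examples only, not an official catalogue.
from dataclasses import dataclass

@dataclass
class DesignActivity:
    name: str
    outcomes: list[str]   # direct results of the activity
    benefits: list[str]   # business benefits each outcome delivers
    measures: list[str]   # quantitative metrics for those benefits

CATALOGUE = [
    DesignActivity(
        name="experience mapping",
        outcomes=["pain points identified", "areas identified for further exploration"],
        benefits=["faster, evidence-based prioritisation"],
        measures=["time taken to agree priorities", "number of pain points identified"],
    ),
    DesignActivity(
        name="design sprint",
        outcomes=["multiple ideas explored", "increased speed to market"],
        benefits=["de-risked decision making", "clearer user and market fit"],
        measures=["number of ideas tested", "time to deliver an MVP"],
    ),
]

def activities_for_benefit(keyword: str) -> list[str]:
    """Invert the framework: given a desired benefit, find supporting activities."""
    return [a.name for a in CATALOGUE
            if any(keyword.lower() in b.lower() for b in a.benefits)]

print(activities_for_benefit("prioritisation"))  # -> ['experience mapping']
```

A team that needs prioritisation done in a day would query by the benefit they want and get back experience mapping as a candidate activity, which is exactly the backwards walk described above.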
Why do this
I am sometimes guilty of overlooking the fact that not everybody I speak to is a designer, or conversant with the language of design. As a practitioner, it’s easy to forget that the tools and methods we use every day can seem incomprehensible. Breaking down the monolith of ‘design’ into component activities and their outcomes democratises the language around design, makes it more open and wins buy-in from a wider range of potential stakeholders and customers.
In organisations where design is often treated as an overhead, being able to clearly show the value of committing to qualitative user research through measurable impact on revenue is vital in helping to mature the practice and create advocates for the roles we can play.
We also owe it to our clients to equip them with the right way to communicate design value to their stakeholders, particularly when they are being tasked with justifying budgets. Recently, a client told me what success looks like for him: easy access to funding to support his user-centred design initiative, because the budget holders could clearly see the value design would provide. We need to give him the right way to communicate that value.
By being specific and upfront about the outcomes, benefits and measures we hold ourselves to, we put design in a position to move beyond ‘colouring in’ and earn a seat at the table alongside strategy, product, engineering and delivery management.
Please leave comments, feedback and ideas below — I’d love to hear it. If you want to add to the activities/outcomes/benefits/measures, please drop me a line on firstname.lastname@example.org.