Using research techniques to improve a Design System: 1 year on

A year on from creating baselines for Design System efficiency

Max Dunne
Pion
4 min read · Jun 26, 2024


One year ago, we in product design set out to create baseline metrics to measure the efficiency of our Design System within Figma. After major restructuring, coupled with targeted education, the results of our work are in.

A recap: How we set the baselines in March 2023

Firstly, we identified who used or visited Figma within the company, to get a sense of who our changes might affect. Product designers, researchers, brand & marketing design, product managers, engineers and the people team all used Figma regularly, with departments like B2B marketing dipping in and out, too.

We didn’t want to get too clever with our baseline metric, and wanted to avoid the minutiae of measuring things like component usage, design cycle times and much more. The main thing we needed was for the system to be easy to navigate and, ultimately, enjoyable to use.

This is why we landed on one key metric to measure: Discoverability.

In March 2023 I ran one-to-one interviews with three product managers, engineers and product designers. I asked each of them to complete three tasks based on discoverability, thinking aloud as they went. The measurements I documented were:

Number of Hesitations — qualitative and quantifiable. Hesitations cause doubt, and too much doubt causes abandonment and affects morale, energy and confidence.

Number of Wrong Turns — qualitative and quantifiable. Too many wrong turns can lead you down the garden path. Without direction, one wrong turn could lead to abandonment and a bottleneck.

Task Completed — a key metric. In theory, the higher the number of hesitations and wrong turns, the greater the likelihood of an incomplete task.

Time-to-Task — the money maker. Quantifiable. With this metric you can calculate the actual cost to the business, putting a price on the efficiency (or lack thereof) of your design system. A simple way to capture all four measurements per task is sketched below.
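If you want to run the same exercise, here is a minimal sketch of how one participant’s attempt at one task could be recorded. The structure and field names are illustrative, not our actual tooling:

```python
from dataclasses import dataclass


@dataclass
class TaskObservation:
    """One participant's attempt at one discoverability task."""
    participant_role: str     # e.g. "Product Manager" (illustrative label)
    hesitations: int          # moments of visible doubt while navigating
    wrong_turns: int          # navigations away from the correct path
    completed: bool           # did they find what they were asked for?
    time_to_task_secs: float  # wall-clock time to finish (or abandon) the task
```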

To work out the correct time-to-task number, I capped time-to-task at 5 minutes and, once the interviews were complete, asked everyone a series of simple questions about their daily Figma usage (their answers feed the extrapolation sketched after the questions below):

On average, how many times a day do you think you:

a) Ask a designer for a design, flow, or element of a design, or

b) Want to look for or need something in Figma
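As a rough illustration of how answers like these turn into a monthly figure, here is a minimal sketch. The exact formula isn’t spelled out here, and the 21-working-day month and the example numbers are assumptions, not our real data:

```python
WORKING_DAYS_PER_MONTH = 21  # assumption: roughly 21 working days in a month
CAP_SECS = 5 * 60            # time-to-task capped at 5 minutes


def lost_hours_per_month(lookups_per_day: float,
                         avg_time_to_task_secs: float) -> float:
    """Extrapolate monthly hours lost searching Figma from self-reported
    daily lookups and the measured (capped) average time-to-task."""
    secs_per_lookup = min(avg_time_to_task_secs, CAP_SECS)
    monthly_secs = lookups_per_day * secs_per_lookup * WORKING_DAYS_PER_MONTH
    return monthly_secs / 3600


# e.g. someone who looks for something 3 times a day, averaging
# 4 minutes per search, loses roughly 4.2 hours a month:
print(round(lost_hours_per_month(3, 4 * 60), 1))  # 4.2
```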

I go into much more detail in the original article, which can be found here.

Last year’s baselines

The extrapolated averages across product managers, design and engineering were illuminating. Under our old system, product managers collectively spent an average of 37 hours per month lost in it, engineers 57 hours and designers 9.

Across 8 Product Managers, 37 hours a month are spent searching Figma for something, and 60% of the time (worked out from the ‘Task Completed’ metric) they come away empty-handed and potentially frustrated. For one Product Manager, that’s 4.6 hours per month spent lost in Figma, or 55.5 hours yearly: the equivalent of 7 working days per year, per Product Manager. Across all PMs, that’s 56 days per year, almost a full quarter, spent lost in Figma.
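The chain of arithmetic behind those figures, with an 8-hour working day as the one assumption:

```python
TOTAL_PM_HOURS_PER_MONTH = 37  # measured across all Product Managers
NUM_PMS = 8
HOURS_PER_WORKING_DAY = 8      # assumption: an 8-hour working day

per_pm_month = TOTAL_PM_HOURS_PER_MONTH / NUM_PMS   # 4.625, i.e. ~4.6 hours
per_pm_year = per_pm_month * 12                     # 55.5 hours
days_per_pm = per_pm_year / HOURS_PER_WORKING_DAY   # ~6.9, i.e. ~7 working days
days_all_pms = days_per_pm * NUM_PMS                # ~55.5, i.e. ~56 days

print(round(per_pm_month, 1), per_pm_year,
      round(days_per_pm, 1), round(days_all_pms))
# 4.6 55.5 6.9 56
```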

So. Have we improved efficiency one year later?

A big yes.

After more than a year of several very busy designers fitting hardcore Ops work into their day-to-day, we’ve made it out the other side.

Figures showing, on average, a 50% reduction in time spent lost in the system

On average, we’ve improved our Design System’s discoverability by 50%, reducing average time spent lost searching the system by about 6 weeks across the three disciplines. This equates to massive time and cost savings for the business.

There were 63 hesitations across the three main discoverability tasks in 2023, falling 40% to 38 in 2024. Wrong turns fell a staggering 67%, from 70 to 23, and completed tasks more than doubled, growing 109% from 11 to 23.
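Those percentages fall straight out of the raw counts; a quick sanity check (the helper is just illustrative):

```python
def pct_change(before: int, after: int) -> float:
    """Percentage change from a 2023 count to a 2024 count."""
    return (after - before) / before * 100


year_on_year = {
    "hesitations": (63, 38),
    "wrong turns": (70, 23),
    "tasks completed": (11, 23),
}

for metric, (y2023, y2024) in year_on_year.items():
    print(f"{metric}: {pct_change(y2023, y2024):+.0f}%")
# hesitations: -40%
# wrong turns: -67%
# tasks completed: +109%
```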

As we hypothesised, if you reduce the number of hesitations and wrong turns, you’ll ultimately increase completed tasks. This has also had a positive impact on how enjoyable the system is to use, countering comments from the original tests like, ‘I feel stupid because I use this every day.’

Our System Usability Scale (SUS) score increased 61%, from 44 (Awful) to 71 (Good).

These results are strong and show that our hard work has paid off. The needle has moved dramatically in the right direction, and we’ve gone some way to making our team, and others, more efficient. As ever, there’s plenty of work still to be done to make the system slicker and to help the people who use it understand it even better.
