Lessons learned from doing an Interface Audit with a small team
The good, the bad, and the 132 unique headings.
It’s 2019, and everyone and their grandma seems to be building a design system. There’s a bewildering array of books, articles, tools, and resources. However, when you delve into them, a lot of the advice is written for and by people at large software companies (the likes of Shopify, Airbnb and Atlassian), or agencies doing work for large brands or government departments.
Qwilr is neither — as of writing, our team has 10 engineers (three of them remote), two designers, and little old me (a freshly-hired prototyper). We want to share our learnings along the way, as a small product company building a design system.
The first version of Kaleidoscope, our design system, housed a working but very small set of components. After feedback from the engineering team about how they were and weren’t working, we wanted to know where we should focus our limited resources for the next version of Kaleidoscope.
We decided to conduct an interface audit (I’ve also seen it called an interface inventory or a visual audit) on our core product, the Qwilr web app (purposely excluding our marketing site and CRM integrations).
What is an interface audit?
Our interface audit was adapted from the process described in Chapter 4 of Atomic Design. If you’re looking to do this yourself, I recommend you go and read it. The TL;DR in Brad Frost’s words:
The interface audit exercise involves screenshotting and categorizing all the unique UI patterns that make up your experience.
We hoped that it would help us to answer questions like:
- Which patterns should stay, which should go, and which can be merged together?
- What pattern names should we settle on to give us a shared vocabulary?
- What are the next steps to translate the interface inventory into a better version of Kaleidoscope?
The interface audit is extolled as a great tool to improve internal visibility and buy-in for your design system. To maximise this benefit, we chose to conduct our interface audit as a group task. A group audit also lets you gather input from different perspectives, and divide and conquer what can be a mammoth task.
What does a group audit look like in practice? I audited a few of the trickier categories in our product (buttons and links, headings, blocks, and layouts), wrote up instructions about how to do it, and begged (okay, asked) for volunteers to spend an hour auditing a category of their choice.
The remaining categories were: icons, form elements/controls, messaging, 3rd party components, images and media, interactive components, animations, lists, navigation, tables and graphs, colors, and global elements. To get us up and running fast, we shamelessly stole the categories from Atomic Design and tweaked them to fit our product.
From start (planning the exercise) to end (report of findings), the audit process took almost three weeks, and we had 10 people in total covering 17 UI categories. I circulated the exercise instructions in a shared document. All of the audit screenshots were collected in Airtable, which is like a spreadsheet, but with a cool gallery view so you can see all your screenshots laid out.
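Each Airtable record boiled down to a simple shape. As a minimal sketch (the field names here are my invention, not our actual Airtable schema), an audit entry could be modelled like this:

```python
# Hypothetical sketch of one audit record; the fields are invented for
# illustration, not our actual Airtable schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AuditEntry:
    category: str         # e.g. "buttons and links", "headings"
    screenshot: str       # path or URL of the captured UI pattern
    suggested_name: str   # what the auditor would call this pattern
    location: str         # where in the app it was found
    notes: str = ""       # merge candidates, oddities, etc.

entries = [
    AuditEntry("headings", "shots/h-001.png", "Page title", "editor toolbar"),
    AuditEntry("headings", "shots/h-002.png", "Page title", "settings page",
               notes="slightly different weight; merge candidate"),
]

# Grouping by suggested name makes near-duplicates jump out.
print(Counter(e.suggested_name for e in entries))
```

Even a structure this small forces the naming conversation early, which turned out to be one of the most valuable parts of the exercise.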
The good
- As advertised, the interface audit gave our project visibility and helped to cement team buy-in. A direct quote from a colleague:
“I expected it to be bad, but it was shocking to see it all laid out like that, and really good to know someone would be taking care of it.”
- On the education front, it also got the team to start thinking about user interfaces in a componentised way, which will be important moving forward.
- Attacking this as a team exposed my own blindspots, like just how differently we refer to specific components. The questions people asked along the way, and the UI patterns they chose to include or leave out of their categories, gave me insight into the different ways people think about our UI.
The bad
It was more work than we expected, and took longer than it would have as a solo job or with one or two collaborators. Completing the audit as a group task meant the overhead of setting up the task, corralling volunteers, and then reviewing and cleaning up the collated results.
What would I do differently?
- Prune the UI categories more carefully at the beginning. Some of them were less relevant to our product, or needed to be explained in different ways. This happened partly because I’m a new hire and not that familiar with the product. The audit sure was a good way to get familiar though!
- Don’t ask for volunteers; just pick the most relevant product people. That is, the people your design system will eventually be serving. This saves you the time spent waiting for people to volunteer, and makes the whole exercise more valuable in the long term. For example, our three wonderful interns all helped out with the interface audit, but they’ve since left the company, along with any insights and emotional investment in the problem that they gained through the exercise.
- Set a two-hour time limit on collating the results. The task I gave to the volunteers had a time limit of one hour per category, which isn’t enough time to do a complete audit for many of the categories. However, the collation stage should just be about standardising the way results are recorded and take no more than two hours. Any further time spent “completing” the audit in the collation stage (like the ~10 hours I spent) will result in rapidly diminishing returns.
The 132 unique headings
The most valuable impact of the interface audit was to increase visibility within the company into the work going into Kaleidoscope. The exercise also gave us quantitative insights into the scale of the problem, and helped us to answer the questions we hoped it would:
- Which patterns should stay, which should go, and which can be merged together? In one example, we consolidated 132 unique headings into a type hierarchy with 8 headings. The audit was critical for making sure the type hierarchy we came up with covered all of our use cases.
- What pattern names should we settle on to give us a shared vocabulary? It’s an ongoing process, but the audit has been and will continue to be a useful reference.
- What are the next steps for Kaleidoscope? The UI patterns that were both most commonly used and most inconsistent were prime targets. The audit allowed us to focus on the 10 most important parts of the product to work on initially, and the 25 most important parts for an internal beta release. (We used just the first, “Parts”, stage of the Parts, Products, People activity as a framework.)
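For a sense of what consolidating 132 headings into 8 can look like mechanically, a type hierarchy is often derived from a base size and a modular ratio. The sketch below is illustrative only; the base size, ratio, and token names are assumptions, not Qwilr’s actual scale:

```python
# Illustrative sketch of deriving an 8-step heading scale from a base
# size and a modular ratio. The numbers and token names are assumptions,
# not Qwilr's real type hierarchy.

BASE_SIZE_PX = 16   # body text size
RATIO = 1.2         # "minor third" modular scale

def type_scale(steps: int) -> dict[str, int]:
    """Return token name -> font size in px, largest heading first."""
    sizes = {}
    for step in range(steps, 0, -1):
        sizes[f"heading-{steps - step + 1}"] = round(BASE_SIZE_PX * RATIO ** step)
    return sizes

for name, px in type_scale(8).items():
    print(f"{name}: {px}px")
```

Every one of the 132 audited headings then has to be mapped onto the nearest step, which is where the audit screenshots earn their keep: they prove (or disprove) that the scale covers every real use case.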
That being said, keep in mind the limitations of an interface audit — it is just a visual audit, and won’t expose inconsistencies at the code level. For example, two buttons might look visually identical, but not actually be reusing the same code component. Ultimately what we want to strive for is consistency on both levels.
Should you conduct an interface audit?
I would recommend conducting a group interface audit if:
- you have 5+ people (developers and designers) working on your product, resulting in a fair amount of inconsistency cropping up in the UI;
- you want to enlist organisational support for a design system;
- you’re going to introduce a design system to an established product; or
- you want to embrace the inconsistency and document the current state of your product (and not necessarily even build a design system!)
I would not recommend it if:
- your product is new or very young and you can quickly identify all of the inconsistencies in your product. (That’s not to say you shouldn’t build a design system if you want to — you can probably just skip the audit);
- you don’t have a lot of time and already have organisational buy-in — in this case you can probably get away with doing a Parts, Products, People activity with the key stakeholders;
- you don’t have a lot of time but don’t have organisational buy-in — an alternative way to make people “feel the inconsistency”, that takes next to no time, is to run CSS Stats over your product’s CSS.
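If you can’t run CSS Stats itself, even a crude script gives a taste of what it surfaces. The sketch below uses a naive regex (a real stylesheet deserves a real CSS parser, or CSS Stats itself) to count unique values per property in a toy stylesheet:

```python
# Rough DIY take on the kind of numbers CSS Stats reports: count the
# distinct values used for each CSS property. The regex is deliberately
# naive and the stylesheet is a made-up sample, so treat this as a
# sketch of the idea, not a robust tool.
import re
from collections import defaultdict

css = """
h1 { font-size: 32px; color: #333; }
h2 { font-size: 28px; color: #333333; }
.title { font-size: 2em; color: rgb(51, 51, 51); }
.subtitle { font-size: 28px; }
"""

def unique_values(css_text: str) -> dict[str, set[str]]:
    """Map each property name to the set of distinct values used."""
    values = defaultdict(set)
    for prop, value in re.findall(r"([\w-]+)\s*:\s*([^;}]+)", css_text):
        values[prop].add(value.strip())
    return values

for prop, vals in sorted(unique_values(css).items()):
    print(f"{prop}: {len(vals)} unique values -> {sorted(vals)}")
```

Note that the three color values in the sample all render the same gray, just written three ways — exactly the kind of inconsistency that makes people “feel” the problem at a glance.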