Running a design audit of Handy’s app ecosystem
Our design team recently started a project to create an atomic design system for Handy’s customer and professional apps. As a young startup with a very small product design team, we never made creating and adhering to a strict style guide a top priority (when there are just two designers who sit next to each other, it’s easy to feel like you’re creating consistent experiences). However, as the product team grows and Handy scales up its product offerings, it’s clear that our design system needs an overhaul, so we made it our goal to create a consistent, atomic design language that we can use internally and share with our engineering team.

We knew this project would be a large undertaking, but also that the efficiency and consistency created by an atomic system would streamline the product process and ultimately create a much better user experience. We want to spend less time fussing with the UI and more time designing the clearest user flows.
Design consistency heavily influences the customer experience… Inconsistencies won’t make anyone trust you or like you. — Paula Borowska for Designmodo
Since we were starting this new pattern library from scratch, we needed to define the elements (or atoms) of the new system and make sure that the new pieces would work within the existing structure of our apps and flex to include all use cases. To figure this out, we decided to run a design audit of our current customer and professional apps.
Running the Audit
The goal of a design audit is to systematically review all the elements in your design ecosystem and identify inconsistencies. Our team focused on documenting visual inconsistencies because we wanted this audit to be the foundation of our redesign project. However, the visual inconsistencies often reflected bigger user experience and interaction irregularities, so fixing those also became part of our roadmap.
We started the audit by collecting all the UI elements used across our apps. We tried to take screenshots of every single screen, modal, half-sheet, and error state. We then moved the screenshots into a Sketch file and deconstructed each screen by slicing the images into their basic UI elements (headers, pickers, list views, icons, primary CTAs, etc.). We created an artboard for each design element by app so we could easily compare the slices.
The biggest downside of the audit was how time-consuming the process was. At Handy, we have an iOS and Android app for customers as well as an iOS and Android app for our professionals. For each of those four apps, it took one person about two hours to take the screenshots, an hour to move them into Sketch, and then another few hours to slice, organize, and label the images.
However, once we had all the slices organized, this visualization of the UI became an invaluable tool and led to productive conversations. We shared the audit artboards with all team members on InVision. At our next design meeting, we put the boards up on the big screen in a conference room and went through each element board together. With everything laid out that way, it was easy for people to quickly assess the biggest issues and explain their thoughts while pointing at an example of what they meant. This process went quickly, and we left with a list of priorities and “biggest offenders” for each design element.
For example, one takeaway was that whenever we show users a list of things, the individual cells (or cards) don’t look tappable. There was no affordance in the design to let the user know that if they tapped on a specific cell they would be taken to a details page or to the next step in a flow. This was true across all our list views — lists of available services, scheduled bookings, available professionals, etc. We also felt that some of the lists looked crowded and could benefit from bigger cells with slightly more information about each list item. When we move to the redesign portion of our project and begin creating modular list elements, these insights will provide a lot of good direction.

Another good example was the realization that our feedback design is completely inconsistent: depending on what a user succeeds in doing, they might get a modal, a big green checkmark, a temporary green banner, or an icon color change to signify that they successfully completed the action. Seeing screenshots of all these states next to each other made it clear that creating a standard success indicator UI will improve both our visual design and our user experience.
Listing all the realizations that came from this exercise would take way too long, but it is safe to say that we are rethinking every detail of the apps — from the most basic (standardizing our background color) to the formatting of our headers and navigation.
Next Steps
As we begin the next phase of our project and start to build a design library, I think this audit will be incredibly helpful. We can build the library element by element, starting with the biggest issues and moving down the list we created while going through the audit results together. Everyone is on the same page about the problems we need to solve and the decisions we need to make going forward. We will also be able to take each new element and easily test its flexibility and usefulness against all of its current iterations, since we’ve already done the work of organizing them. Overall, the audit proved to be a thoroughly worthwhile exercise and a really effective way to kick off this project. I’ll try to document our next phase in a later post, in case anyone wants to follow along.