Let’s say you and your team have been working on the foundation of your own amazing Design System. You’ve completed the design principles, colors, and typography standards you need, and you’re ready to create some components for others to use.
But wait, which components should you focus on? Which ones would the team work on first?
There are many different approaches to making that decision:
- Roll some dice and pick them at random
- Each design system team member makes a choice based on their individual interest
- Have a group voting session on which components the team thinks are most important
- Prioritize your components based on user recommendations and requests first. Then, narrow your options down using one of the other approaches mentioned above.
Any of these methods will get the job done, more or less.
However, when I had the chance to organize and facilitate a 5-day Design System workshop, I wanted the decision to be made more logically and collaboratively.
We also needed a method that would produce a backlog of requirements covering 10 flagship products. Those requirements needed to be agreed upon by all 15 participants in the workshop.
We dedicated nearly two workshop days to this process, breaking it down into six phases:
- Understanding the product ecosystem
- Discovering the design decisions
- Identifying the components
- Bucketing the components
- Prioritizing the bucketed items
- Analyzing the results
Understanding the product ecosystem
We often fall into the trap of focusing only on the product we are working on and overlooking the bigger picture. That’s why we invited our information architects and principal engineer to present how different products work together within the ecosystem we were designing for.
As we learned how the different products were integrated, it sparked a dialogue within the team about a cross-product strategy for our design work.
Discovering design decisions
Have you ever looked over a co-worker’s shoulder and wondered why a design was done the way it is? We wanted to answer those questions to understand the motivation and rationale each person had for their work.
We named this exercise “The Screen Museum”. It started with a wall full of UI screens — a physical wall with screens printed on A3 paper.
And yes, it was a lot of wall space and tree cutting, but it was essential to make it tangible.
The best way to avoid unstructured discussion is to make things tangible, so we always have something to look and point at during the conversation.
The exercise has the following steps:
- Each product designer presented the UIs of their product.
- The others wrote down their questions, doubts, and feedback on sticky notes and placed them on the screen printouts.
- The product designer then addressed each sticky note.
- We repeated these steps until all products were covered.
We used the note-taking method in this exercise because it made the feedback visible and tangible. It also helped avoid interruptions during the presentations, which can derail the discussion.
Identifying the Components
Once we had an understanding of the UI design in different products, we were ready to break those screens down into smaller parts.
- We divided the group into smaller teams and allocated each team several products to work through. The only rule was that a designer couldn’t be assigned to their own product.
- The team circled the components and placed sticky notes with the component names beside the circles.
- Everyone walked around and reviewed everything on the screen printouts.
- Next, we placed sticky notes of another colour beside the original ones to capture any questions or disagreements about the identification or terminology.
- One-by-one, the facilitator led a small discussion to come to an agreed identification and terminology.
- The facilitator finally stacked all the related sticky notes together and placed one final note, in a different colour, on top to represent the final agreement.
With this exercise, we identified all the components living within our ten flagship products, each with an agreed terminology. That’s when we could begin to talk in the same design language.
Bucketing the Components
Not every component in a flagship product belongs in the Design System. It might be product-specific or offer limited customization (e.g. third-party plugins).
To separate out what shouldn’t be included, we used the following steps:
- We first transferred all the sticky-note stacks from the screen printouts to a big surface.
- Then we affinity clustered them into groups and gave each group a category card.
- Everyone voted on the component categories they thought should be included in the Design System. We didn’t limit the number of votes each participant could use.
- Anything with two votes or fewer was moved to a remote part of the whiteboard and treated as the lowest priority.
Since votes were unlimited, one participant could put 100 votes on one thing if they thought it was really THAT important. That made the exercise great for weeding out unimportant items.
However, we also understood that top-voted items from the bucketing exercises didn’t necessarily mean they deserved the highest priority.
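The vote tally and the two-vote cutoff can be sketched as a quick script. The category names and vote counts below are made up purely for illustration; in the workshop this was all done with dot votes on a whiteboard:

```python
from collections import Counter

# Hypothetical vote record from the bucketing exercise: each entry is one
# dot vote placed on a component category (votes were unlimited).
votes = [
    "Buttons", "Buttons", "Buttons", "Buttons",
    "Data Tables", "Data Tables", "Data Tables",
    "Date Picker", "Date Picker",
    "Carousel",
]

tally = Counter(votes)

# Categories with two votes or fewer are set aside as lowest priority.
kept = {name: n for name, n in tally.items() if n > 2}
parked = {name: n for name, n in tally.items() if n <= 2}
```

Here `kept` holds the categories that move on to prioritization, while `parked` holds the low-priority outliers pushed to the edge of the whiteboard.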
Prioritizing the Bucketed Items
When prioritizing the Design System backlog, we also needed to consider the time it would take to deliver those components. For this, we used an Impact/Effort Matrix to prioritize the items.
The traditional way of running this exercise (where the facilitator asks participants to call out where each item should go on the matrix) risked dragging out our group discussions. With a large number of participants, it could easily have turned into a situation where the loudest person in the room always wins.
To avoid that possibility, we approached the prioritization matrix exercise in a slightly different way:
- The facilitator prepared the Impact/Effort Matrix on a whiteboard.
- Next, the facilitator held up a component category card and read it out loud.
- Each participant used a marker to draw a small circle where they thought the component should be on the matrix.
- The facilitator placed the category card where most circles were clustered. If the circles were spread out, the facilitator took the average and placed the card in the middle of the circle groups.
- The facilitator then erased all the circles.
- We repeated these steps until all cards were on the matrix.
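The averaging step above is simple arithmetic over the circle positions. A minimal sketch, with made-up (impact, effort) coordinates standing in for the marks participants drew on the whiteboard:

```python
# Hypothetical circle positions drawn by participants for one category
# card, as (impact, effort) coordinates on the matrix.
circles = [(8, 3), (7, 4), (9, 2), (6, 5)]

# When the circles are spread out, place the card at the average position.
avg_impact = sum(x for x, _ in circles) / len(circles)
avg_effort = sum(y for _, y in circles) / len(circles)
```

The card for this category would land at roughly (7.5, 3.5): high impact, low-to-moderate effort.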
One significant Key Performance Indicator (KPI) for any Design System is product adoption. To drive product adoption, we need to feed the Design System with the most impactful components in the shortest amount of time.
Therefore, the highest impact components which require the least amount of effort to execute become the top priorities.
But which effort should you concentrate on? Design effort or implementation effort? Something easy to design can be very difficult to implement.
To capture both, we ran the exercise twice: once with designers and design leads, and once with developers and information architects.
Analyzing the Results
If we had handed the bucketing results, along with two extremely busy-looking Effort/Impact diagrams, to our Product Manager, he’d probably have had no idea what to do with them. Without proper analysis, anyone would struggle to create a roadmap from such a rich, qualitative data set.
We needed to put everything together into one single prioritized list. Here’s how we did it:
- We filled the first column of our Excel sheet with the component category cards from the bucketing exercise.
- Next, we put the number of votes each component received in the bucketing exercise in the second column.
- We placed two rulers on the matrix — one with ascending values on the vertical axis, and the other with descending values on the horizontal.
- We measured each sticky note’s position and gave each component both an impact and an effort score (less effort gets a higher score).
- We recorded all these scores in columns 3–6.
- Next, we summed all the scores for each component, keeping the numbers on the same row.
- Finally, we sorted the list by total score (from high to low).
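The whole scoring pipeline can be sketched in a few lines. The categories, vote counts, and scores below are made up for illustration; in the workshop this lived in an Excel sheet:

```python
# Hypothetical rows mirroring the Excel sheet: category, bucketing votes,
# then impact/effort scores from the design and development sessions
# (effort is already inverted, so less effort = higher score).
rows = [
    # (category, votes, design_impact, design_effort, dev_impact, dev_effort)
    ("Buttons",     12, 9, 8, 9, 7),
    ("Data Tables",  9, 8, 3, 9, 2),
    ("Date Picker",  7, 6, 5, 7, 4),
]

# Total score per component: the sum of its votes and all four matrix scores.
scored = [(name, votes + di + de + vi + ve)
          for name, votes, di, de, vi, ve in rows]

# Sort from highest total to lowest to get the prioritized backlog.
backlog = sorted(scored, key=lambda item: item[1], reverse=True)
```

Depending on your context, you might also weight the columns (e.g. count developer effort double), but a plain sum was enough for our purposes.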
And voilà! We had a prioritized backlog that captured the needs of all 10 flagship products and was agreed upon by the whole team.
After a day and a half of a productive, focused workshop, our team was off to a great start. We were aligned, with a sense of achievement from the very beginning. Even better, our project manager now had a tangible list of project deliverables he could integrate into the project roadmap.
How does your Design System team decide and prioritize your backlog?
Please share your stories by dropping a comment below. Thanks!