How to understand your Design System’s health — and eventually, its success

Christos Kastritis
Published in Deliveroo Design
24 min read · Jun 1, 2020


How might we communicate the value of our design system? How do we measure success? These are the questions that occupy the minds of many in our field. We strive to find answers to these, but let’s take a step back for a hot minute and look at the bigger picture.

Before we can answer these questions with confidence, we first need to understand the health of our design system. What is that, you say? Well, to me anyway, it means establishing a set of methods and measures that will give us a pulse on how we’re doing. Think of it like our vital signs — in measuring them, we will unearth a plethora of insights and opportunities. Let’s approach our design systems like we would any product. Health. Pulse. Vitals. Comprende?

Like me, you may have contemplated questions like these, but are not sure where to start:

  • Where and how can we be effective while delivering the most impact?
  • Which metrics are best suited to us? How can these numbers and insights help us drive change?
  • How might we align with the needs of the broader business and communicate our value in a language that makes sense to them?
  • How might we increase buy-in and headcount?

We don’t have a crystal ball, but I’m hoping what I share here is the next best thing. At least, to get you up and running. Those in larger teams might be more familiar with what I am about to share. For those in smaller teams like mine, this guide will help you establish a baseline, or a pulse, on your design system.

By the end of this article, you’ll have a grasp on:

  • Sentiment: How to gather a large body of data from your people — both qualitative and quantitative.
  • Adoption: High-level insights around the adoption of your design system, as well as inside your design tool, Figma (Sorry, Sketch).
  • Time and cost: High-level insights around your current ROI.
  • The road ahead: A clear set of opportunities that can easily fit into your roadmap, backed by insights and aligned with the business.

As you go through this article, keep reminding yourself of the needs of your business and how you can better align yourself with them. A friendly reminder that design systems are different from business to business. What worked for me may not be best suited for you. And that’s fine.

Right, enough jib-jab. Let’s get cracking. Insights gathering time.

Sentiment

While attending the DesignOps Summit in New York, I was incredibly inspired by The 8 Types of Measures for DesignOps, which was compiled by Abby Covert. There are many things that we Design Systems people can take inspiration from when it comes to DesignOps.

Understanding how people feel when using our design systems is a no-brainer. It is incredibly helpful to gather a rich body of data that will trickle down into our types of measures. And yes, it can totally be quantified through the wonders of a survey. But before we do that, ask yourself: do you have enough insights to create a robust survey? If not, then join me on a road of discovery through workshops and interviews. I find it’s better to do this anyway, as you can gather a ton of additional insights through conversation versus a survey. It’s also a great way to speak to people you haven’t before while building trust in the process.

Understanding your Product Designers

When I joined Deliveroo, Tim (my previous manager) and I formed a DesignOps function. Naturally, our focus was drawn to our designers first. For you, that may be different. Tim decided on running a Now: Next workshop where designers could collaborate on a set of topics.

Our topics were largely DesignOps focused, but they can easily be tweaked to work for design systems, such as:

  • Support and structure for creating great work
  • The function and value of design systems in the wider org
  • Essential tools and resources for great work to happen
  • Community and culture within design systems

Ensure that the topics you decide on reflect your potential areas of focus. They should be areas you want to get a pulse on.

3 people discussing a theme with each other
A heated debate on what to have for lunch

During the workshop, designers would rotate, gather, and discuss these topics together. We had a topic for each corner of the room, each with its own number, e.g. 3. We gave designers a random four-digit number, e.g. 4132. This ensured that they were speaking with different people at each corner. First, they shared their thoughts on the 'Now' of things. Once all corners were complete, the process began again with a focus on 'Next'. At the end, we gathered and shared our thoughts as an entire group.
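If you’d rather script those rotation codes than pull them out of a hat, here’s a rough sketch (Python, purely illustrative) that generates a random corner order per participant:

```
import random

# Each participant gets a random ordering of the four corners (topics),
# e.g. 4132 means: start at corner 4, then 1, then 3, then 2.
# Random orderings roughly spread people out so groups reshuffle each round.
def rotation_codes(num_participants, corners=4):
    codes = []
    for _ in range(num_participants):
        order = random.sample(range(1, corners + 1), corners)
        codes.append("".join(str(c) for c in order))
    return codes

print(rotation_codes(12))  # e.g. ['4132', '2314', ...]
```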

A person analysing a bunch of post-its, on the floor. Because why not.
Myself pondering in deep thought

Affinity mapping is a great way to cluster the post-its and bundle them into themes, which in turn can become your focus areas. Our focus areas became:

  • Tools and Design Systems
  • Culture and Community
  • Training and Mentoring
  • Sharing and Visibility
  • Ways of Working
A graphic representing the themes we had grouped together
A more detailed representation of our focus areas

Over time, our focus areas evolved as we became more design systems focused. You’ll see this later on.

Understanding your Engineers

At this point, I had been at Deliveroo for around 4 months. While our initial focus was on product designers, I was also having ad-hoc conversations with engineers. It became evident that there were a lot of opinions floating around. There was also a massive gap in my understanding of how engineers really felt about our design systems. The logical next step for us was to run some interviews. Interviews provide a greater depth of feedback, which would allow me to structure a cohesive survey to become our baseline. I was fortunate enough to be able to bounce ideas off a researcher who helped me piece together a script (thanks Audrey!). I ran the interviews by myself, and boy were they insightful.

Focus Areas

When you run your own, have a think about what questions you would like to ask. What would you like to learn? Are there any gaps in your understanding? Focus on those. For myself I wanted to learn more about:

  • Their usage/adoption of our design systems
  • Level of experience with using a design system
  • Collaborative process with designers
  • Technical and general feedback

Some of the questions which gave me the most interesting feedback were:

  • When was the last time you used something from our design system? What was it? Could you walk us through it? — I found that people were much more comfortable with showing rather than telling. Plus, you gain an insight into their workflow.
  • Have you collaborated with designers before? Was there ever a situation where it was outside of a design system? — The answers to both of these are more than likely ‘yes’, but it allows us to dive a bit deeper into their experience and process in this scenario.
  • Could you share with me a story of when you struggled or had an issue with using our design systems? How might you improve that problem? — I found that having them explain how they might improve the problem gives you an insight into their thought process.
  • Is there one thing you feel is missing that would help you work better?

If you have uncertainties about what to ask, Brad Frost covers a set of areas and questions that make a great starting point.

Pointers

If you’re unfamiliar with running and analyzing research interviews, here are a few things I picked up:

  • Ensure you have a strong mix of participants: different platforms, levels of experience, and tenure. This covers any potential insights that you might have missed. For myself, I interviewed several people within each platform/experience/tenure group. We want a strong sample of participants.
  • Having a few ice breaker questions allows people to warm up and eases them into the more nitty-gritty questions.
  • It’s ok to not stick to the script. Go with the flow. You will uncover insights you would have missed otherwise. The script should be more of a guide than anything else.
  • Run your questions by someone. Then do it again. We’re guilty of constructing sentences that do not make sense to others. I find collaborating with a researcher on this one is super helpful. The main thing is to ensure our questions are neutral and have no bias in them.
  • When analyzing the findings, pull in everything that you find interesting. Anything. Group them into themes. Then group them again so you’re left with a set of high-level findings. Doing it this way means you always have a rich set of insights to fall back on.
A graphic representing the themes we had grouped together
A breakdown of themes associated with our engineers

I’ve got the themes, themes, themes

Interestingly enough, after analyzing my transcripts I could see a ton of similarities between engineers and designers. This is good. It means that some of our ongoing and future initiatives can benefit both users of our design systems. Create a list of opportunities that came out of these interviews. Have a think about short-, medium-, and long-term actions for each opportunity, including the level of impact and effort. You may not take on all these opportunities, and that’s fine. The point I’m trying to make is that when you revisit these insights several weeks or months later, you have plenty to run with. Here are a couple that I tackled afterward:

  • A “Design Systems 101” onboarding session for anyone that joins our Tech org.
  • Figma onboarding session for anyone that joins Product Design.
  • An update to our structure around how we communicate updates.
  • Introduction of Auto Layout to some of our Figma components. It’s roughly 250% faster to build a screen with AL than without. At least from my own calculations.
  • An audit on the apparent inconsistencies between Figma and our code-based component libraries. Figma descriptions were updated to cover guidelines and platform availability.
  • A survey gathering insights into what engineers would like to learn about using Figma in a more efficient way.
“Several initiatives later” in Spongebob themed title card

Creating a baseline survey

The above two methods are great for getting the ball rolling. But we’ve got to think long term. How might we turn those qualitative insights into something quantifiable? Can we do this on a recurring basis? Enter our baseline survey. I decided on doing this for a few reasons:

  • To see if there is any similarity between this and the previous rounds of research in terms of feedback.
  • To track, over time, how these perceptions change, allowing us to better prioritize our focus areas.
  • To branch out and include as many designers and engineers as possible.
  • We already have insights. But now we are merging common themes between design and engineering into a recurring survey to capture what progress has been made.
  • To make these insights quantifiable, in a language that our stakeholders, or really anyone can understand.

Setting it up

A Google Form was sent out with multiple-choice questions. These were for product designers and frontend engineers across each platform. I like Google Forms because you can export the results into a spreadsheet, speeding up the processing of survey data.

The options were mostly numbers-based (1–7), from strongly disagree to strongly agree, although this varied when a question was better suited to a yes/no response. People also had the option to provide more feedback if they felt like it.

I focused on statements around their overall experience, covering quality, speed, confidence, contributing, completeness, and comms. I’ll be breaking these down in the next section, as I find that’s where most of us can get stuck; it’s also where those previous insights and research studies come in handy.

The ECTS grading scale (on page 6) was used as I believe it is the fairest when it comes to grading a design system. Memories of your teen years flash before your very eyes.

It’s all about the KPIs

Statements. Questions. Topics. KPIs. Whatever you want to call them, they are beyond important. Especially for your baseline. I split our survey into two sections: General and Experience. Let’s dive in.

1 — Role

“What is your role?”

Content/Product designer, Frontend engineer (platforms are separate options as they have different experiences), Other (for any scallywags).

2 — Tenure

“How long have you been at Deliveroo?”

Having different options for this will give us an insight into whether new hires have very different opinions compared to those who have been here longer. What are your assumptions within your own company? At the time of running this survey I had only just started initiatives around educating new hires across our Tech org, so I expected the results to vary.

3 — Quality

“Our design systems make it easier for me to build a high-quality UI.”

Ratings-based answer
Quality is a staple and widely-understood benefit of a design system, thus it should be included.

4 — Speed

“Our design systems help me implement or design features faster.”

Ratings-based answer
Much like quality, speed is a staple and widely-understood benefit of a design system, thus it should be included.

5 — Confidence

“I feel confident in using our design systems.”

Ratings-based answer
It is important that people feel they are armed with the right information and are getting the most out of our design systems. I feel the word confidence captures this well. We know from previous insights that there are a lot of uncertainties around many things so for us it was imperative to include this and monitor it going forward. Remember our vitals? I feel like this is an opportunity for a pun, but I can’t lose all credibility just yet.

6 — Contributing Back: Ease

“I can contribute back to our design systems with ease.”

Ratings-based answer
This is broken out from Speed based on insights from our own research; those insights signal that different platforms have trouble contributing back, which can be either Ease or Speed related, or even both.

7 — Contributing Back: Speed

“I can contribute back to our design systems with speed.”

Ratings-based answer
The same rationale as number 6.

8 — Completeness: Availability

“I find myself using something that is outside of our design systems.”

Frequency-based answer
One of our main insights is around the disparity between what is available in our design tool libraries vs. our code-based component libraries. A different percentage of components is available on each platform, and there will be some cases where:

  • What they require is in Figma but not code (yet)
  • There is something similar available but does not fit their needs
  • There is nothing similar available

So it’s something that our team should definitely include.

9 — Completeness: UI Coverage

“Our design systems cover at least 80% of my use cases when building a UI.”

Yes/No answer
Our design system will not cover 100% of use cases, which is totally fine. 80% is based on the Pareto principle (80/20 rule). I believe this is a realistic number for us to be aiming for in terms of adoption across each application. Those 100%-ers are living in a pipe dream. For now, doing this through a survey is a good place to start until we can properly quantify how much is actually covered in each application.

10 — Comms

“I know about the latest updates in our design systems.”

Ratings-based answer
I don’t need to say anything here, do I?

Pointers

Base your themes and questions on the problems that surfaced from previous studies. This helps you home in on the areas you should be focusing on.

If your question includes an “and”, separate it into two questions. This is because you are technically asking two questions, and a participant’s response may lean towards one of them. Separating them removes that ambiguity and leads to a more accurate response. I originally did this for my question on Contributing Back, so thank you, Audrey, for the tip.

I mentioned this before when covering interviews, but please, run your questions past several disciplines: researchers, content writers, designers, engineers. What makes sense to you might sound completely alien to others. We want to be certain that what we send out is understandable, reusable and, most importantly, not a waste of effort.

Think about what metric you wish to communicate with your audience. I felt that a grading system would be the most straightforward and easily understood within our business.

Leave optional space for extra feedback. Let’s face it, surveys can be time-consuming. Those who wish to finish quickly can do so by answering the MCQs. Those who feel invested and wish to contribute more have the option to do so, and we are no longer blocking the rest of our participants.

Analyze Designers and Engineers as separate streams. I decided on this because:

  • They’re using the systems in different ways, thus the feedback may be drastically different.
  • From an analysis point of view, our understanding of our designers is not as detailed as it is for our engineers. Water cooler chats excluded.
  • New feedback may change the next steps that have already been defined.

Analyzing for days

By taking the responses to each statement per discipline, I calculated the median for each one. In this scenario, the median makes more sense because this data is likely to have some extreme values (high or low); in such cases, the median gives a more realistic estimate of the central value. We can then turn that into a percentage, and from there we use our grading scale and end up with a very simple grade that anyone can understand (there’s a minimal sketch of this calculation after the list below). Create a report. Have a high-level summary at the top. But also go deep into your analysis. For each statement, include things such as:

  • Why you chose this statement
  • A breakdown that is separated into each discipline
  • A graph per discipline
  • Quotes directly related to the statement. I find including the rating they gave and their tenure helps give better context around their feedback.
  • What the feedback signals
  • What the opportunities are
  • The grade, followed by a brief summary.

I could go into more detail around analyzing and structuring reports, but let’s leave that for another blog post.

A breakdown of the grades. Quality= C+, Speed= B-, Confidence= C, Contributing= D-, Completeness= D and Comms= D.
Get a load of them grades

Report card

Yes, at this point it feels like that time when you’re going home with your report card to your parents and awaiting a shellacking. This is where the truth can hurt. But let’s not kid ourselves: creating, maintaining, and evolving design systems is hard. Look on the bright side: we have a much clearer sense of our strengths and weaknesses, plus where and how we can improve. Opportunities, opportunities everywhere. What to prioritize from here is entirely up to you. For myself, it was clear we needed to focus more on building people’s confidence in our systems, improving the quality of our work, and looking for areas to improve our operating efficiencies.

Adoption

Alright, now that’s out of the way, let’s get into some numbers. Given our team’s current skill set and size, for now we can only focus on UI coverage and Figma-related metrics, which should help validate some of our insights even further.

UI Coverage

This is the first place to start. It provides us with a high-level overview of the percentage adoption of our design systems across our platforms. Based on the Pareto principle (and Nathan Curtis’ wisdom), we should be aiming for around 80%, with the remaining 20% having a significantly lower ROI. The metrics below were mostly calculated manually.

Applications using our design systems

At a basic level, this metric gives us a pulse on which applications should have more of our attention. Depending on priorities, we should collaborate with those applications to migrate them to our design systems. This metric can be considered as a ‘vital’. Let’s take a closer look.

Adoption

Adoption rate is high. A large part of this is down to our create-deliveroo-app tool on Web. Every time an app is set up using the tool, the component library is bundled with it. It takes less than 5 minutes to get your application up and running.

Parity between our Figma UI libraries and our code-based component libraries

This will identify potential components to be contributed back to their respective component library.

Having component libraries that aren’t fleshed out results in further misuse and inconsistencies down the road. This is a big issue. We also see designers use something that they think exists because it’s in Figma; the engineer then realizes it is not available and uses something else or creates their own. In an ideal scenario, this would be flagged and encouraged to be contributed back. But as a team of one (now two), this can’t always be the case.

It will also validate what we’re seeing in our qualitative data on whether the ‘inconsistencies’ and ‘lack of components’ were accurate assessments. The process was done manually through an audit. Using these formulas I was able to calculate an accurate number:

value1 = [Figma components + code-based components added this quarter] − [code-based components that are unavailable or in progress]

parity % = [value1] ÷ [Figma components + code-based components added this quarter] × 100
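Spelled out as a quick sketch (the component counts here are invented):

```
# Hypothetical counts from a manual audit of one platform.
figma_components = 48              # components available in the Figma library
code_added_this_quarter = 4        # code-based components added this quarter
unavailable_or_in_progress = 21    # code-based components missing or still in progress

total = figma_components + code_added_this_quarter
value1 = total - unavailable_or_in_progress
parity_pct = value1 / total * 100

print(f"Parity: {parity_pct:.0f}%")  # e.g. "Parity: 60%"
```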

Let’s look at a high-level summary.

Adoption %’s. iOS is 40%. Android is 56%. Web in PDS is 63% and for Tools it is 85%.
Parity across platforms
  • These should really be at, or close to 100% of what is already available in Figma.
  • Our mobile platforms rate poorly for ROI in comparison to our Web platforms, especially iOS. We have had a considerable turnover of iOS engineers who were invested in design systems.
  • The numbers validate the concerns from engineering and design disciplines on there being a lack of components or inconsistencies.
  • The numbers further validate our need for headcount in these areas to improve engineering efficiencies.
  • A priority sheet has been devised for the missing components, but with our current team size and priorities, it can’t be looked at right now.

Using Figma’s library analytics

The introduction of Figma’s analytics dashboard can now give us insight into how we are performing on the tooling side, as well as guide us in making more confident decisions.

High-level overview of the analytics dashboard. There are graphs that allow you to compare libraries.
High-level overview of the insights available to us

Component inserts

At a glance, this metric gives us a snapshot of team inserts, but also trends and comparisons between other libraries. We can get a sense of when teams are designing the most and see if the percentage split of teams using our libraries adds up to our expectations.

My favorite thing about this metric is that it can be incredibly powerful in deciding what the default variation of a component should be. This is especially useful when you are reducing the number of instances of a component. Let’s look at how we fixed that.

I wanted to remove the ‘1 line’ and ‘2 line’ variations of our Row component and merge them into one. But how would I know what the correct default of that component should be? We want to remove friction in our libraries where possible.

A GIF showing the comparison between two components
A comparison between the 1 line and the 2 line

We can see here that the 1 line variation has 6.1k inserts, whereas the 2 line has only 796 inserts. So the 1 line is used roughly 7.7 times as often. That’s huge. The result was similar across the board too. Yay data!
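If you pull the insert counts out of the analytics dashboard (a quick copy-paste into a script is enough), the comparison is trivial. A small sketch using the numbers above:

```
# Weekly insert counts per variation, copied out of Figma's library analytics.
inserts = {"Row / 1 line": 6100, "Row / 2 line": 796}

# The most-inserted variation is the strongest candidate for the default.
default = max(inserts, key=inserts.get)

for name, count in inserts.items():
    if name == default:
        continue
    ratio = inserts[default] / count
    print(f"{default} is inserted {ratio:.1f}x as often as {name}")
# -> "Row / 1 line is inserted 7.7x as often as Row / 2 line"
```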

Component detaches

This is a very interesting metric because it highlights that a specific component may not be meeting the needs of our designers, causing them to detach from the library.

From watercooler chats, interviews, and our survey I was aware that components using multi-line text were becoming a growing pain for a number of reasons.

What’s that multi-line thing again?

Multi-line is when there are multiple text styles inside one text layer. So, updating was a no-brainer because:

  • Auto Layout is gradually making improvements
  • Engineers have difficulties inspecting styles
  • Different disciplines are detaching our components
  • It would also break when using our own plugins
A screenshot of what the detaches list looks like.
Screenshot was taken while updates were being made

Seven of our eight most detached components had multi-line text, further validating the problem. Quite refreshing knowing we could resolve something like this.

Future metrics

Again, without the right people, gathering these numbers is difficult. Here are a couple that I would like to focus on in the future.

Apps on outdated library versions

This will identify applications using outdated libraries, but more importantly, it will give us the opportunity to understand why they are using an outdated library. Some reasons could be: a specific component they need is no longer supported, no team currently owns the project, the process for using the latest version is cumbersome.

Coverage per Figma file

How much a Figma UI library is being used across our team spaces and individual projects. This will help us identify which team spaces have a low percentage. There are many variables, though, around a file having a low adoption percentage: older designs, deprecated components still to be updated, vision/future-thinking projects, concept/throwaway files, and the ease of detaching a component from our component library. In this case, I would look at it more as an opportunity to dive deeper rather than as a hard metric.

Coverage per application: could use vs. being used

Based on an article by Veriff:

“Since not all products are equally complex or have equivalent requirements, measuring the absolute usage of components wouldn’t quite work. That is why we chose to compare the components that the product could use versus the components that were being used.”
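As a rough sketch of that metric (component names invented), coverage is simply the overlap between the components an app could use and the ones it actually uses:

```
# Hypothetical component sets for one application.
could_use = {"Button", "Card", "Row", "Banner", "Tabs", "Modal", "Tooltip", "Badge"}
being_used = {"Button", "Card", "Row", "Badge"}

# Coverage: how many of the components this app could use are actually in use.
coverage = len(being_used & could_use) / len(could_use) * 100
print(f"Coverage: {coverage:.0f}% of the components this app could use")  # 50%
```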

Page/Screen level adoption, per app

Once we can get an overall adoption percentage per app, being able to identify an adoption percentage per page or screen will give us another layer to home in on what could be added to our design system.

As I’ve mentioned before, initially, we want to gather some base level numbers. You can get nitty-gritty with adoption metrics. How far you go depends on your team's maturity, size, and priorities.

Time and Cost

A big driver in using a design system is the ability to increase your product’s speed to market. This, in turn, returns engineers’ and designers’ maker time to the business, allowing them to solve the hard user problems more readily.

Why can this be difficult?

In the midst of creating a design system, tracking the time spent per component is often an oversight. There can be lots of backtracking involved. There is also a plethora of ways to capture time. Like I touched on before, what can drive change for us? What can increase buy-in across our execs and orgs?

Why is this important?

Time savings are universally understood within a business. We are able to speak the same language as our stakeholders and execs. Framing the overall success of our design systems in these terms is what our stakeholders care about the most, at least at Deliveroo. We talk about making teams more efficient and increasing their maker time. Well, we’re in the endgame now.

The ideal scenario vs. Reality

In an ideal world, you have a design system that has tracked the time spent on making each of its components. If so, I am incredibly jealous and this will make your lives 10x easier. And you can probably skip this entire section. But the reality for many is that there is zero tracking and limited consistency across teams in how they document and track work. So wake me up when it’s all over.

But there is a way. Enter, minimum time estimates.

This is the next best thing given our current situation. Time estimates can give us a pulse on how much time has already been spent. I know what you’re thinking. It’s a rough number, but we can make it more accurate. I decided on minimum time estimates because:

  • We could focus only on the minimum amount of time it would take to build that component from an engineering standpoint.
  • We could remove a ton of variables that are even harder to measure: testing, design QA, code reviews, accessibility checks, PRs, etc.
  • While the number will be smaller, it is a more reliable one, removing ambiguity and building trust with our stakeholders and others. It’s all about getting a ballpark number first, people.
  • Doing so would allow us to establish a set of yearly savings and projected savings.

Which components to estimate?

There’s a lot I could have included here, but for the sake of time (ha, get it?) I focused on establishing a core list of components. These are components that have a high chance of being used within any application.

Setting it up

Create a survey that includes your core list of components. Get your engineers involved. Get your team involved. Ask them to give their estimates in a unit that the business widely uses. For us at Deliveroo, it is in days.

For designers, we included design system analysis, high-fi work, the Figma component, specs, and basic documentation. It excluded time spent on design reviews, iterations, and research.

For engineers, it only included the minimum time it would take to build each component.

I also manually mapped the number of applications that were using each component through the wonders of GitHub. I used it to search for import strings and combed through the results, e.g. org:[your company] “import [location.of.component]”. This was definitely painstaking, but if you have a small set of applications, it’s totally doable with a little patience.
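If combing through the search UI gets tedious, you could sketch something similar against GitHub’s code search API. This is purely illustrative: the org name, import path, and token below are placeholders, and quoting behaviour and rate limits may need tweaking for your setup:

```
import requests

# Rough sketch: count which repos import a given component via GitHub's
# code search API. Requires a personal access token with repo read access.
TOKEN = "ghp_your_token_here"  # placeholder
QUERY = 'org:your-company "import ui-library/components/Button"'  # placeholder

resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": QUERY, "per_page": 100},
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()

# Each matching file belongs to a repository; count distinct repos.
repos = {item["repository"]["full_name"] for item in resp.json()["items"]}
print(f"{len(repos)} applications import this component:")
for name in sorted(repos):
    print(" -", name)
```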

An example of a spreadsheet I made. I will include the link to the template below.

What insights can we gather here?

The survey responses were imported into a spreadsheet. From there I could get a holistic view of each component. I made a template on Notion for anyone who would like to use it. Here we can gather a ton of information around the following (a minimal calculation sketch follows the list):

  • The total time spent so far across each design system and component library.
  • Design and engineering days saved across each design system and component library.
  • The total cost saved across each design system and component library.
  • Design systems where the estimates are higher than others.
  • The median time to build each component.
  • The total time and cost saved if we were to create a new application.

Let’s look at some examples.

Minimum ROI to date

Components that are used in more than one application are included here. We can see that for Tools we have made large savings, due to the number of applications supported (~40).

A table showing the minimum return on investment to date
Minimum ROI to date. The cost shown here is for demonstrative purposes.

Minimum ROI if we created a new app today

Based on using all of our core list of components that are currently available in each component library.

A table showing the minimum return on investment if we created a new app today
Minimum ROI if we created a new app today. The cost shown here is for demonstrative purposes.

Buttons in our Tools Design System

A table showing the minimum return on investment for buttons
Buttons

Cards across our Tools Design System

A table showing the minimum return on investment for cards
Cards

What does it all signal?

With such a high proportion of applications using our Tools design system it comes as no surprise to see the amount of time and cost saved so far.

The lower numbers on mobile further validate the lack of parity vs. Figma and the level of investment in these platforms. This is useful for us, as we now have several sources of data to prove it, and it can really help us push for more headcount.

Let’s not forget tooling

I largely focused on components here but let’s not forget about the effort we put into our tooling. Some very basic examples could be:

  • Drastically reducing the number of instances of a component, e.g. I reduced our system banner components from 24 to 1.
  • Introducing Auto Layout into our components and as full-screen templates. From what I’ve tried to measure, Auto Layout can help us speed up designing a screen by 250%.
  • Combine that with a data plugin and you’re off to the races. We’re working on a live data Figma plugin. Populating a screen, which would once take designers over an hour, can now be done in a handful of clicks. Dare I say… 1250% faster.
  • If your business cares about performance gains, you can also look at server costs, PRs (new lines of code shipped across diffs with vs. without design systems), and streamlined processes. Just remind yourselves about how you can align better with the business’ goals, and focus on those numbers.

Future metrics

The future of showing our ROI, I think, largely depends on us knowing how long projects take within our product teams, the number of disciplines in these teams, and the time they normally save by using our design systems. Easy peasy, right? Ha. Two wonderful people have covered this: Lily Dart and Bryan R.

Don’t you (forget about me)

As the curtain draws to a close, we are left satiated with insights and opportunities. We can feel more confident in driving a roadmap that is backed by data. We can start looking beyond these measures and dive deeper if needed. Let’s not put the cart before the horse.

Pro-tip, for each opportunity you take on be sure to include:

  • DS themes/KPIs from your survey, e.g. Confidence, Quality
  • Your focus area, e.g. Tooling, Accessibility
  • Company ambitions, e.g. Optimize operating efficiency
  • Principles, be they design or engineering, e.g. Automate the mundane
  • Rationale — an obvious one, but an opportunity to summarize the insights that have led to this decision, where everyone can see.

All of these will add weight to the work you are doing. More power to you I say.

Signing out,
@NotChristos

👋

Special thanks to Andrew, Audrey, Carrie, Lydia, Stuart, and Tim for the support during this journey.
