Reimagining a Token Taxonomy

Notes from design token projects, from audits to implementation

Nathan Curtis
EightShapes
10 min read · Nov 26, 2022

--

Design token dreams have turned to reality across most design systems. Yet some teams now see a years-old inventory as out of step with how naming design tokens has evolved. Others lack alignment across outputs and continue to struggle with libraries and adopters clinging to old variables.

These challenges can shackle a system. Without a strong token foundation, objectives like dark mode, a consistent API and multi-brand theming remain out of reach. A moment to reimagine a token taxonomy may be coming.

Over the past 18 months, I’ve driven efforts to redesign design token taxonomies for four separate design systems. This article summarizes an evolving process, from identifying problems and planning an approach to implementing taxonomy across design, code, and documentation. In each project, audits and analysis of as-is token use give way to collaboratively forming a draft, workshopping choices, and deciding a taxonomy together.

1 Plan

1.1 Recognize problems to solve and goals to reach

Most design systems have an established visual language applied across a catalog of UI components. There are well-defined color choices, groups, and relationships; typography, space and size have some semblance of tokenization too.

Yet, the team isn’t satisfied. Based on what’s there, the team shouldn’t be. Problems are evident:

  • Token gaps are widespread, with hard-coded values and generic options resting where rigor is needed. Opportunities for depth abound.
  • Tokens are too shallow and simple, inhibiting the pursuit of dark mode, themes, and more.
  • Tokens are too general, applied broadly for distinct purposes.
  • Tokens are applied incorrectly: …brand-red for errors, …text-muted for dimmed borders, …forms-success for a positive trend, and so on.
  • “Tokens” aren’t really tokens, but disparate sets of poorly organized and conflicting variable lists.

As you ponder a next generation of a visual foundation, set expectations for what, why, and how the current state is imperfect, imprecise, and incomplete. It’s time to change, grow, and scale for a reason.

And it’ll cost you. To refactor tokens is to recognize that you’ll touch an entire component catalog. Weaving a new language will break existing naming patterns, yet most changes are encapsulated within UI components that may evolve visually little if at all, patched as a non-breaking change.

So get the team together to agree on problem(s) you’ll solve, anticipate how language will change, and recognize the impact that will spread across the catalog by the time you are done.

1.2 Plan an approach

Grounded in problems to solve and goals to reach, it’s time to plan an approach. A small-to-medium project like this need not plan every step robustly. Yet, a discussion should resolve:

  1. Is your approach organic (find and tune as you work) or deliberate (audit “all the things,” propose a taxonomy, and implement change catalog-wide)? In recent work, we’ve always chosen the latter.
  2. What token types will you impact: generic and semantic colors only? Component-specific colors too? Or also typography, space, size, shape, elevation and more? Most projects are ≥80% color, and address other tokens as needed.
  3. Will component design remain unchanged, or will you also “enhance as you go” (e.g., add focus ring states) and/or “fix as you go” (e.g., correct for inconsistent divider line colors)? Most projects enhance and fix design as they go.
  4. When is your project done: when you have sufficient, token-infused specification or when integration is complete across Figma libraries, component code and documentation site?
  5. What UI components are in scope (usually, all of them), and which will you audit to inform early decisions (usually, many of them)?

Consider setting up a sheet to track which components will be audited and which are in scope overall. This becomes the Token Taxonomy spreadsheet. While Google Sheets and Excel are sufficient, I favor Airtable and use it for the remainder of this article.

2 Audit as-is “tokens”

Equipped with a plan, dig into design and code resources, find the tokens (and anything masquerading as such), and see how they are applied to UI components to identify concepts and form ideas of what to do.

2.1 Map token path(s)

Consider mapping token flow across design assets, documentation and especially into the depths of code repositories. A big picture can expose the breadth of impact, weaving from where tokens live into the files where partners import, integrate, transform and apply tokens with their work.

Zoomed out board of code file screenshots tracing a token’s flow from Style Dictionary through transformations and utilities to component files per platform

Map a token’s flow through screenshots of real code files. Remind collaborators of general concepts, and then zoom in to focus on practical examples. Be literal: show this web button.scss file, annotate that Android palette.xml file, thread a token path into Card.swift iOS file too.

Most surprising can be how code repositories import tokens only to map them to local variables and styling utilities. Design system teams are usually unaware of these additional layers of transformation. Study and reveal them, highlight how they disrupt and redirect token flow, and consider how token taxonomy updates will require work per platform and product as well.

With a map in hand, you can spend time zooming in, asking provocative questions you’ve prepared, and sensitizing stakeholders to the work to come.

2.2 Collect existing, as-is “tokens”

Once you’ve begun to inspect where “tokens” live, excavate tokens from:

  • Figma text and color styles in libraries managed by the design system, platform or local teams
  • Token code implementation(s) in tools like Style Dictionary
  • Variables, mixins and other utilities in component code repositories across web, iOS and Android implementations
  • Token and foundations pages in documentation websites

Here, tokens gets scare quotes because what may exist is a trove of inconsistent, malformed variable names and rogue Figma styles and sets that collaborators assumed were tokens but are unrelated to tokens published by the system. Find and log ’em all in your taxonomy sheet, including:

  • “Token” name levels in separate columns (Property = text, Variant = primary) concatenated into full names to filter and sort later.
  • “Token” locations, such that different values and concepts may show up in one or more places (Figma, code files, etc).
  • “Token” descriptions (“Use for…”) across tools to identify what’s described where and what to reconcile across outputs.
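The level-by-level logging above can be sketched in a few lines. This is an illustrative Python sketch, not a tool from the article; the namespace, categories, and levels are invented example names:

```python
# Illustrative sketch: concatenate per-level columns (as audited in a
# taxonomy sheet) into full token names for later sorting and filtering.
# The "esds" namespace and all level names are hypothetical examples.

def concat_token(namespace, category, levels, separator="-"):
    """Join non-empty name levels into a full token name."""
    parts = [namespace, category, *levels]
    return separator.join(p for p in parts if p)

rows = [
    {"category": "color", "levels": ["text", "primary"]},
    {"category": "color", "levels": ["background", "alert", "error"]},
]

full_names = [concat_token("esds", r["category"], r["levels"]) for r in rows]
print(full_names)
# ['esds-color-text-primary', 'esds-color-background-alert-error']
```

Keeping levels in separate columns and deriving the full name (rather than typing it by hand) makes later pivots by property, variant, or state trivial.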

2.3 Inspect how styles are applied in Figma

With the token inventory well understood, I’ll inspect how tokens are applied across components, focusing on frequency, intent and (lack of) precision. Figma’s Selection colors feature is a great help. Select all variants, and then trace through to each application one color style at a time, discovering how each is applied.

Inspecting Figma’s “Selection colors” to trace applied color styles

Look for specificity. For example, perhaps Badge applies $esds-color-palette-neutral-90 for its neutral background color, whereas Alert applies $esds-color-background-light for the same purpose. Neither token is wrong, per se. But intents are the same and more specific: a neutral tone amid alert colors. A new token is needed.

Look for precision and distinct intent. General tokens may be applied too broadly across many purposes. For example, $esds-color-interactive-primary and $esds-color-interactive-text may be used for selection, completion, focus, and clickability/tappability of text, border, backgrounds, and icon fills. These two tokens are too general; distinct tokens are needed.

Same / similar tokens applied to different properties, components and semantic intents

Look for errors. Usually, a token like $esds-color-text-disabled is applied correctly many times to text. That token’s imprecise use for icons can suggest that a parallel $esds-color-icon-disabled is justified. Yet, concern should be greater for the instances of $esds-color-icon-empty, $esds-color-interactive-empty and $esds-color-divider-section being applied for the same intent. Those need to be fixed.

Varying tokens often erroneously applied to other intents
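Tallying the audit data makes over-general tokens easy to spot. As a hedged sketch (the application records and the "more than two distinct intents" threshold are invented for illustration):

```python
# Hypothetical audit analysis: tally how each as-is token is applied
# across properties and intents. Tokens spanning many distinct
# (property, intent) pairs are candidates for splitting into more
# specific tokens. All token names and records are fabricated examples.
from collections import defaultdict

applications = [
    ("$esds-color-interactive-primary", "text", "clickability"),
    ("$esds-color-interactive-primary", "border", "focus"),
    ("$esds-color-interactive-primary", "background", "selection"),
    ("$esds-color-text-disabled", "text", "disabled"),
    ("$esds-color-text-disabled", "icon", "disabled"),
]

intents_by_token = defaultdict(set)
for token, prop, intent in applications:
    intents_by_token[token].add((prop, intent))

# Flag tokens stretched across more than two distinct uses (an
# arbitrary threshold chosen for this sketch).
too_general = {t for t, uses in intents_by_token.items() if len(uses) > 2}
print(too_general)  # {'$esds-color-interactive-primary'}
```

The same tally also surfaces the parallel-token case above: $esds-color-text-disabled applied to icons suggests an $esds-color-icon-disabled sibling rather than a fix.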

2.4 Inspect “tokens” applications in code, too

Depending on your comfort level, you or a teammate could inspect and log lines of code that apply tokenizable decisions in today’s libraries and products. Focusing on even a small set of components can lead you to styling constructs and utilities across web’s CSS, iOS and Android to observe how tokens, variables and hard-coded values are applied.

Sometimes, you may be encouraged by thoughtful application of existing semantic and component tokens, such as the esds-tabs CSS rules below.

Code applying existing tokens with rigor and precision

On the other hand, you may find hardcoded values and poorly named variables too, as exhibited in the esds-card CSS rules below. This is why we audit. These examples in real code highlight tangible opportunities for improvement and can be contrasted later with emerging to-be tokens.

Code with poorly named variables and hardcoded values
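A quick script can surface the hardcoded values worth logging. This sketch scans CSS text for hex colors; the esds-card rules inside it are fabricated stand-ins for the real audited code:

```python
# Sketch of a quick code audit: flag hard-coded hex colors in CSS that
# could be tokenized. The CSS rules below are invented examples.
import re

css = """
.esds-card { background: #ffffff; border: 1px solid $card-gray; }
.esds-card__title { color: #333; }
.esds-tabs__tab--active { color: $esds-color-interactive-primary; }
"""

hardcoded = re.findall(r"#[0-9a-fA-F]{3,8}\b", css)
print(hardcoded)  # ['#ffffff', '#333']
```

Poorly named local variables (like $card-gray above) won’t match a hex pattern, so pair a scan like this with a grep for variables that don’t carry your token namespace.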

2.5 Identify concepts as you explore

Once equipped, you can slice, pivot, sort and filter to analyze the data. As you synthesize what you’ve found, record the important ideas and concepts you’ll need to work through with your team. At its most formal, you could track those ideas in a table with columns for:

  • Proposed term to recommend, given evidence to date
  • Alternative terms already in use or discussion
  • Token level the term applies to, such as property, variant, or state
  • Related as-is tokens used today that could be impacted
  • Rationale and other notes to inform decision making

Table to capture ideas and concepts to inform a first draft and workshop(s)

3 Decide to-be tokens

Equipped with as-is token data and emerging themes, it’s time to draft an improved taxonomy, get feedback, and iterate towards decisions.

3.1 Propose a taxonomy

Having painstakingly audited and analyzed the landscape, get to work! Establish a separate sheet of proposed “to be” tokens that includes:

  • Token name, concatenated from additional columns per level
  • Token type (Generic, Semantic, Component) for sorting and filtering
  • Token value
  • Token aliases, via dynamic references to other rows
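Aliases (rows that reference other rows) resolve to concrete values by following the chain. A minimal sketch, with invented token names and a `{reference}` syntax assumed for this example:

```python
# Hedged sketch: resolving token aliases to raw values, as a to-be
# token sheet's dynamic references might. Names and the {reference}
# syntax are illustrative, not a specific tool's format.

tokens = {
    "esds-color-palette-red-50": "#d32f2f",
    "esds-color-feedback-error": "{esds-color-palette-red-50}",
    "esds-color-alert-error-background": "{esds-color-feedback-error}",
}

def resolve(name, tokens):
    """Follow alias references of the form {other-token} to a raw value."""
    value = tokens[name]
    while value.startswith("{") and value.endswith("}"):
        value = tokens[value[1:-1]]
    return value

print(resolve("esds-color-alert-error-background", tokens))  # #d32f2f
```

Layering component tokens over semantic tokens over generic palette values this way means a palette change ripples through every alias automatically.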

As you form a point of view, direct message collaborators to get input, short circuit disagreements, and generate alternatives to discuss together.

3.2 Workshop important choices

As proposed tokens take shape, convene a workshop. Always include essential design system teammates — designers and developers — and add other relevant stakeholders as needed. Workshops can serve two purposes: update the group on progress and make decisions together.

For the progress update, use a Figjam board with inspectable screenshots of spreadsheets to tell the story of your audit, analysis and proposal(s).

Figjam format, balancing presentation materials (above) with a series of workshop sections (below)

After the update, drill into specific topic areas with sections to focus discussion. For each topic, start with:

  • Visual examples of relevant components, variants and states
  • An example token indicating levels to discuss
  • Repetitive token structures ($esds-color-{concept}-{the gray one}-background, $esds-color-{concept}-{the red one}-background) to see how decisions play out

Set the stage with visual examples and sample token structures

Adjacent to the examples, introduce a column per choice (e.g., “What’s this {concept} called?”) with prepared alternatives (e.g., “Alert, Feedback, Messaging, …”) as stickies. A protocol per topic could be:

  1. Individually, silently inspect examples and example structures.
  2. Individually, dot vote on an alternative for each choice.
  3. As a group, discuss each choice, eliminating options (marked red) as you go.
  4. As a group, agree on an alternative (such as alert for {concept}, error for {the red one}, and so on) by marking it green.

Choices with options and voting for alert tokens

3.3 Iterate choices to completion

After a few workshops, don’t be surprised if energy and interest in discussing token taxonomy wane as decisions become increasingly niche. At this point, shift to direct messages, async updates, and the occasional Slack poll to iterate towards completion.

4 Implement the new taxonomy

With decisions made, it’s time to record tokens in a specification and transform code libraries, Figma assets and documentation.

4.1 Record as a specification

Most teams I work with record token taxonomies in a design specification rather than a spreadsheet (too hard for most to find) or Style Dictionary alone (a repository invisible to most token consumers).

The specs are a sustained source of truth not just for developers but also for designers building Figma libraries and writers publishing documentation. Specs typically group sets (like -alert, below) with a swatch, value, name, alias, Figma style, and use description per token.

Color token specification, including swatches, values, names, aliases, Figma styles and descriptions

4.2 Handoff and review(s)

Compared with the collaborative workshops, handoff sessions are typically short: I’ll introduce the specs, we’ll discuss how to review and comment, and we’ll resolve any high-priority open threads.

I’ll avoid a token-by-token walkthrough. Instead, I’ll circulate token specifications across relevant reviewers (at least one authoritative designer and one authoritative developer) for individual inspection and comments.

4.3 Implement across Figma, code and docs

With a new token taxonomy in hand, it’s time to implement across outputs:

  • For Figma, implement new color and text styles and then apply them to every component, one-by-one across the catalog.
  • For Style Dictionary, implement the new tokens in 1+ JSON files and publish the update.
  • For the documentation site, update tokens page(s) and other impacted docs (such as Color, Typography, and Space).
  • For component code per platform, refactor how tokens flow in, update variables and utilities, and migrate components one-by-one.
  • For your community, announce the change across communications channels like Slack and email.
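For the Style Dictionary step, the published JSON nests name levels and a build flattens them into full names. This sketch assumes a Style Dictionary-like nested structure with `value` leaves; the token names are invented, and the flattening is illustrative rather than Style Dictionary’s actual build:

```python
# Assumed sketch of a Style Dictionary-style nested token file,
# flattened into the dotted names a build step might publish.
# Structure and token names are illustrative examples.
import json

source = json.loads("""
{
  "color": {
    "alert": {
      "error": { "background": { "value": "#fdecea" } },
      "success": { "background": { "value": "#e8f5e9" } }
    }
  }
}
""")

def flatten(node, path=()):
    """Walk the nested dict, yielding (dotted-name, value) at each leaf."""
    if "value" in node:
        yield ".".join(path), node["value"]
    else:
        for key, child in node.items():
            yield from flatten(child, path + (key,))

print(dict(flatten(source)))
# {'color.alert.error.background': '#fdecea',
#  'color.alert.success.background': '#e8f5e9'}
```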

For code, I’ve had success in providing code migration samples for a few components. In each, I’ll propose code updates by line with before and afters, or even begin a code branch with the updates. Once they see the patterns, my work is done.

“Oh, so that’s all that’s needed? I can do the rest of this in code. That is, unless you don’t mind doing the rest yourself? 😉”

It all depends on your confidence working in their code, and their comfort with you suggesting, starting or even completing what must be done.

Migration sample, indicating as-is value and to-be token proposed per relevant CSS rule
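A migration sample can be as simple as a mapping of as-is names and values to to-be tokens, applied line by line. This sketch is a hypothetical helper, not a tool from the projects; the mapping, selectors, and token names are invented:

```python
# Hypothetical migration helper: rewrite as-is variables and hard-coded
# values in CSS to the to-be tokens agreed in the spec. The mapping and
# the CSS rule below are fabricated for illustration.

migration_map = {
    "$card-gray": "$esds-color-border-subtle",
    "#ffffff": "$esds-color-background-default",
}

def migrate(css, mapping):
    """Apply each old -> new substitution across the CSS text."""
    for old, new in mapping.items():
        css = css.replace(old, new)
    return css

before = ".esds-card { background: #ffffff; border: 1px solid $card-gray; }"
after = migrate(before, migration_map)
print(after)
```

Naive string replacement like this is only a starting point; in practice, each proposed change still deserves review in a branch so partial matches and intent mismatches get caught.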

As with every project, steps and formality vary based on culture and needs. Best to your team as you tackle your next token taxonomy challenge!


Nathan Curtis
EightShapes

Founded UX firm @eightshapes, contributing to the design systems field through consulting and workshops. VT & @uchicago grad.