Experience debt — what it is, how to measure it, and how to pay it down

Sam Irons
Designing Atlassian
13 min read · Sep 6, 2021

Here’s a story. Let’s say you start as a designer working at a software company. Your team moves fast. You get scrappy and learn by shipping features to production. You’re seeing what sticks, what the customers love. Soon, you need to revamp a previous idea. You look back at your early work and realize that it’s janky. It doesn’t fit with the rest of the product. It feels like an idea, like a beta, rather than a polished experience. You worry about its quality.

Sound familiar? What you’re experiencing is normal. Your team made some deposits on experiences to see what value they had. It’s natural. But, you’re in debt to your customers. And, it’s time to pay.

At Atlassian, we talk a lot about design quality and the right level of rigor we need to develop our features. We try to balance moving quickly with exploratory features and polishing our proven, foundational features.

Along the way, we collect technical and experience debt. Some of this debt comes from the challenges we face as we scale our #TEAM and our products. Some comes from learning new things or reacting to the changing demands of the industry around us.

Software products inevitably collect debt from natural industry and market entropy. Our development counterparts have been discussing technical debt for years, trying to formalize strategies for maintaining and refactoring their codebases. Why should design be any different? What can we learn from them to help our design practice?

Here’s how Atlassian thinks about its design debt. You might be surprised that we’re not fully against it.

How does software accrue experience debt?

Software development thought leaders have experience here. They’ve created frameworks for assessing different aspects of technical debt. Looking at their frameworks, it’s easy to see similarities in how software applications collect experience debt.

Steve McConnell, author of Code Complete, distinguished technical debt as either deliberate or inadvertent. Martin Fowler, who brought the world the concept of refactoring, expanded on this to include whether the debt was prudent or reckless.

Fowler plotted these aspects into a quadrant. I’ve adapted the quadrant here for discussing experience debt:

Prudent and deliberate. Fowler argues that prudent, intentional debt provides the business a short-term gain. The team makes a conscious decision to release earlier. They believe the payoff is greater than the cost of taking on the debt of a future iteration. You may recognize this as the minimum viable product (MVP). The team compromises on the vision of a feature so they can deliver faster. And, they plan future milestones to iterate towards their vision.

Reckless and deliberate. Reckless and deliberate debt is very common. The team goes “quick and dirty”, ignoring good design practice. Hackathons, innovation sprints, community development — these practices create little messes in the product. Fowler warns that these “result in crippling interest payments or a long period of paying down the principal.” These shopping sprees put the team further into debt without a real understanding of a feature’s value.

Reckless and inadvertent debt comes from ignorance of good design practice. The team takes on debt without realizing it or being aware of its consequences. Companies that scale rapidly feel this. Even with the best intentions, they may not be able to control how teams interpret what’s required to keep up good design practice. It shouldn’t happen, but the bigger the company, the more of a risk that it will happen.

You may think prudent and inadvertent debt doesn’t happen but Fowler argues that “it’s inevitable for teams that are excellent designers.” This is the debt that collects with natural product and market entropy. Teams have done a good job but, in hindsight, “they realize what the design ought to have been.” The market requirements change. Unfortunately, this is some of the hardest debt to pay back. It may involve redesigning significant pieces of proven experiences.
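Fowler's two axes can be captured in a few lines of code. Here's a minimal sketch of the adapted quadrant as a data structure — the `DebtItem` fields and example are illustrative, not an Atlassian tool:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    """One piece of experience debt, classified on Fowler's two axes."""
    name: str
    deliberate: bool  # did the team knowingly take on this debt?
    prudent: bool     # was the tradeoff a considered one?

def quadrant(item: DebtItem) -> str:
    """Return which Fowler quadrant a debt item falls into."""
    care = "prudent" if item.prudent else "reckless"
    intent = "deliberate" if item.deliberate else "inadvertent"
    return f"{care}/{intent}"

# A hypothetical MVP tradeoff: shipped knowingly, with a considered payoff.
mvp_cut = DebtItem("Shipped MVP without empty states", deliberate=True, prudent=True)
print(quadrant(mvp_cut))  # prudent/deliberate
```

Tagging debt items this way makes the later conversation — which quadrants to prevent, which to repay — concrete in a backlog.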

DesignOps: You design it, you run it

The DevOps loop

Atlassian is moving the industry to the forefront of modern software development practices. We believe in the DevOps method of “you build it, you run it.” Software teams create independent, deployable units of code. And, they are responsible for operating and maintaining the quality of that code. These microservices are interwoven and dependent on each other to form our software applications.

Operating code means maintaining it and tackling technical debt. Development teams plan for maintenance work and bake that time into their working outputs. For example, Atlassian engineering teams spend about 25% of their time on maintenance work.

Our designed experiences are similar. We, as experience designers, build discrete, deployed interactions and flows in our cloud products. And, our products weave these discrete experiences into a functioning end-to-end product experience.

The difference between our current design muscle and the industry’s engineering muscle is the “you run it” part of our philosophy. As an industry, we’re getting good at designing experiences. We need to get better at maintaining these designs after they ship. We need to get better at handling our debt and negotiating with our teams to pay it down.

At Atlassian, we broadly consider two areas of experience debt:

  • User experience (UX) debt, or the debt that’s discovered after we ship
  • Operational debt, where the absence of a process or expectation leads teams to take on inadvertent or reckless debt

Measuring UX debt

Alex Omeyer from DZone discusses measuring technical debt through bugs, code quality, and code cohesion. Drawing parallels, we can measure experience debt in a similar way to our engineering counterparts. Designers can look at fine-grained fixes like paper cuts and bugs. Then, evaluate their design flows and experience complexity. And, design departments can evaluate entire product suites and their experience cohesion.

Paper cuts and experience bugs

Paper cuts are usually the easiest to “sniff out.” They don’t prevent consumers from carrying out a task or journey. But, they can be visually jarring, and highlight inconsistencies between different parts of a flow. For example, incorrect padding, elements jumping around the screen when loading, typos, and other UI jank. Usually, you can identify paper cuts with automated testing.

Some metrics to track:

  • # of paper cuts
  • # of paper cuts resolved
  • # of paper cuts caught by automated testing

Bugs interfere with our users’ ability to complete tasks in our products. Usually, bugs are mismatches between the specified design and what actually shipped. Some bugs are gaps that engineers have filled using their best guess. Bugs can also occur when code regresses or deployments are rolled back. For example, unspecified error messaging and treatment, generic empty states, broken links, and more.

Metrics to track:

  • # of bugs
  • # of bugs resolved
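The paper-cut and bug counts above are simple tallies, so they're easy to pull from any issue tracker. Here's a minimal sketch — the issue records and field names are invented for illustration, not a real Jira export:

```python
# Hypothetical issue records, as you might export them from a tracker.
issues = [
    {"kind": "paper_cut", "resolved": True,  "caught_by_automation": True},
    {"kind": "paper_cut", "resolved": False, "caught_by_automation": False},
    {"kind": "bug",       "resolved": True,  "caught_by_automation": False},
]

def tally(issues, kind):
    """Count totals, resolutions, and automated catches for one issue kind."""
    subset = [i for i in issues if i["kind"] == kind]
    return {
        "total": len(subset),
        "resolved": sum(i["resolved"] for i in subset),
        "caught_by_automation": sum(i["caught_by_automation"] for i in subset),
    }

print(tally(issues, "paper_cut"))
# {'total': 2, 'resolved': 1, 'caught_by_automation': 1}
```

Reviewing these tallies each sprint shows whether the resolution rate keeps pace with the discovery rate.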

Mature design teams catch paper cuts and bugs before they make it to customers. A design blitz or quality assurance demonstration can help. In this step of delivery, the whole team reviews the experience before engineers roll it out. Or, you might try automated visual testing.

Even with automated testing, you may still release paper cuts and bugs. Luckily, they are the easiest to negotiate fixing via your team’s backlogs.

Software consumers now consider error-free software table stakes. Fixing paper cuts and bugs will eventually produce diminishing returns. To make measurable gains addressing your design debt, you have to dig a bit deeper.

Experience complexity

Experience complexity is the complexity of each flow as part of a customer journey. You can investigate experience complexity by looking at three aspects of your designs: journey complexity, experience coupling, and conceptual modeling.

You can think of journey complexity as the number of linear, independent paths customers can take to complete a task. And, more granularly, the number of steps or actions needed to be successful in each path.

Some metrics that might be helpful to track:

  • # of happy paths that can complete a journey
  • # of steps per happy path
  • # of unhappy paths that can complete a journey
  • # of steps per unhappy path

The more steps in a path, the more complex the journey. Complex journeys have a negative effect on customer satisfaction. Very complex journeys may hinder task completion.
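These journey metrics fall out directly once you write your paths down as step lists. A small sketch, with hypothetical email-client paths standing in for real journey data:

```python
# Illustrative journey: each path is the ordered list of steps a user takes.
journey = {
    "happy": [
        ["open composer", "write email", "send"],
        ["open draft", "edit", "send"],
    ],
    "unhappy": [
        ["open composer", "write email", "send", "fix invalid address", "send"],
    ],
}

def complexity_report(journey):
    """Summarize path counts and step counts per path type."""
    report = {}
    for mood, paths in journey.items():
        steps = [len(p) for p in paths]
        report[mood] = {
            "paths": len(paths),
            "max_steps": max(steps),
            "avg_steps": sum(steps) / len(steps),
        }
    return report
```

Tracking `max_steps` over releases is a quick way to notice a journey quietly growing more complex.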

Experience coupling relates to the number of nested journeys needed to complete a job in your product. Think of them as side quests. Do this before you can do that. For example, if you want your email client to automatically reply to certain types of emails, you may need to first set up a filter.

Some useful metrics to measure:

  • # of prerequisite journeys
  • # of required prerequisite journeys
  • # of optional prerequisite journeys
  • # of opportunities for drop-offs in a journey — opportunities in the UI where a user could be distracted by a side quest (for example, spotlight feature onboarding)

Coupled experiences increase the cognitive burden on your users. They invite users to get distracted, to get frustrated, and to drop off. They make tasks more difficult than your users expect.
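Coupling lends itself to a simple prerequisite graph. A sketch using the email-filter example above — the journey names and graph shape are made up for illustration:

```python
# Each journey maps to the side quests a user must (or may) complete first.
prereqs = {
    "auto-reply to emails": {
        "required": ["create a filter"],
        "optional": ["set up labels"],
    },
    "create a filter": {"required": [], "optional": []},
}

def coupling(journey, graph):
    """Count the prerequisite journeys nested under one journey."""
    p = graph[journey]
    return {
        "required_prereqs": len(p["required"]),
        "optional_prereqs": len(p["optional"]),
        "total_prereqs": len(p["required"]) + len(p["optional"]),
    }

print(coupling("auto-reply to emails", prereqs))
# {'required_prereqs': 1, 'optional_prereqs': 1, 'total_prereqs': 2}
```

A rising `total_prereqs` count is a signal that a job in your product is accumulating side quests.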

Conceptual modeling describes the absolute number of concepts present in an experience. These are the named concepts required to understand and find value in an experience. For example, Compass, our newest product, has relatively few concepts: components, dependencies, and scorecards. Our flagship product, Jira Software, has many: issues, projects, boards, filters, dashboards, backlogs, sprints…

Metrics worth considering:

  • # of named concepts in a product, journey, or flow
  • # of relationships between these named concepts
  • Depth of concept inheritance (for example, to understand an “issue type screen scheme” in Jira Software, you need to understand that: 1. “projects” contain “issues”; 2. that these issues have types — “issue types”; 3. that “screens” control each type’s layout; and, 4. that “schemes” map these concepts to a project)

Complex conceptual models negatively affect comprehension. Users need to map your product concepts to their real life and to the task at hand. If your conceptual models don’t fit their understanding, or they’re too difficult to comprehend, then users are less likely to find value in your product.
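Concept inheritance depth can be computed as the longest chain of "must understand first" relationships. The sketch below mirrors the Jira "issue type screen scheme" example; the dependency graph is a simplified illustration, not Jira's actual data model:

```python
# Each concept maps to the concepts a user must grasp before it makes sense.
depends_on = {
    "issue type screen scheme": ["scheme", "screen"],
    "scheme": ["project"],
    "screen": ["issue type"],
    "issue type": ["issue"],
    "issue": ["project"],
    "project": [],
}

def depth(concept, graph):
    """Longest prerequisite chain a user must grasp before this concept."""
    prereqs = graph[concept]
    if not prereqs:
        return 0
    return 1 + max(depth(p, graph) for p in prereqs)

print(depth("issue type screen scheme", depends_on))  # 4
```

A depth of 4 means a user must internalize four layers of concepts before this one pays off — a useful single number to watch as a product's model grows.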

Naming concepts while developing them is natural. Your team needs a way to discuss what they’re building. Mature design teams look for ways to abstract this from their end-users. They reduce, presenting only what’s needed for a user to be successful, which shows value quickly. I like the old adage: show all your work in maths (development); hide it in English (presentation).

Experience cohesion

Cohesion relates to the consistency of all the journeys that make up your product suite. Cohesive systems are built from familiar, reusable components, and they leverage standard design systems.

Some metrics that might be helpful to track:

  • % of journey composed of standard design system components
  • % of components that reflect the latest version of that system
  • A design system compliance score
  • A content standards compliance score
  • An accessibility score

High cohesion usually means that experiences are faster to design and deliver and easier to use. Your customers only need to learn an interaction once, and can apply that to many tasks.
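The first two cohesion percentages are straightforward to compute from a component inventory. A sketch — the screen inventory and component names are invented for illustration:

```python
# A hypothetical inventory of the components rendered in one journey.
screen_components = [
    {"name": "Button",      "from_design_system": True,  "latest_version": True},
    {"name": "DatePicker",  "from_design_system": True,  "latest_version": False},
    {"name": "CustomChart", "from_design_system": False, "latest_version": False},
]

def cohesion(components):
    """% of components from the design system, and % of those up to date."""
    standard = [c for c in components if c["from_design_system"]]
    return {
        "pct_standard": 100 * len(standard) / len(components),
        "pct_latest": 100 * sum(c["latest_version"] for c in standard) / len(standard),
    }
```

Here two of three components come from the design system (about 67%), and only half of those are on the latest version — both numbers a team can commit to moving each quarter.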

Experience complexity and cohesion can be difficult to see. Designers can get tunnel vision and focus on the current experience they’re working on. Design teams may only look at a particular section of a product or product suite. This is where design leadership needs to step up. Reviewing end-to-end journeys, implementing scorecards, and auditing content all benefit from a team-sport mentality. Design leaders need to captain the team, own the rituals, and be accountable for the complexity and cohesion of their products.

Preventing debt with operational best practice

Nielsen Norman Group’s definition of DesignOps

Operational improvements are meant to be a preventative measure. Ensuring that teams follow design thinking principles and good design practice helps teams catch issues before they ship. At Atlassian, we believe in autonomous teams. So, we focus our operational fixes on providing best practices, instead of top-down processes.

We encourage our teams to front-load their feature development with user research. We ask them to wonder about the problem before designing. We encourage them to define customer success before they start exploring solutions. We teach our designers to go broad and explore multiple solutions. We ask them to bring our customers into the design process. And, we challenge each other to measure outcomes, not outputs.

We’re also a very large company that’s scaling. Best practices are only good if teams use them. We have coaching teams to help us identify where our design operations may be lacking. They help us identify our operational debt. And, they help our teams course correct by providing services that improve their practice.

We also need to have some higher-level assessment of our products. We want to make sure moments in our experiences make sense across a customer’s journey. To do so, we host regular end-to-end demonstrations. We review our key journeys with UX scorecards. We implement quality assurance processes, where needed. And, we perform regular customer feedback review sessions.

Operational improvements are necessary things that all organizations face as they scale. These strategies help curb your team’s spending habits. They raise awareness about the current state of your experiences.

But, strict standardization and process may squash the innovative and agile spirit. They can hamper fast, iterative product development. Good software companies will always need to be scrappy and learn from exploratory releases out in the wild. The key is to put operational improvements in place where needed.

This idea forms the backbone of the agile manifesto. It’s baked into agile principles where the “highest priority” is “early and continuous delivery of valuable software”. And, the “preference [is] to the shorter timescale”. Where the mantra is “individuals and interactions over processes and tools.”

Simply put: when you produce successful software that’s delivered early and often, debt is inevitable.

How does codifying debt help repay it?

A collective understanding helps organizations identify and discuss design quality as a whole. Sharing a common language with your development teams can help you negotiate design improvements. And, interpreting engineering frameworks can help you understand where to put your focus.

  1. Prevent accruing debt that’s both reckless and deliberate. These shopping sprees make it impossible to ever pay down the principal of your prudent debt.
  2. Regularly pay down deliberate, prudent debt. Innovative, agile companies will feel the need to ship a lower quality experience in pursuit of a business gain. The more debt you gain here over time, the more your customer satisfaction will suffer. Left unattended, your products will gain complexity.
  3. Since debt is inevitable, try to keep your outstanding debts to only inadvertent, prudent debt. This is good debt. It comes from learning and adapting to natural entropy in your products and markets.

If design teams mature their muscle in identifying and tackling debt, they can make a mark on the business’s bottom line. They can prove the value of good design practice. Investments in good debt can provide big returns. Investments in other debt may only pay down interest.

This lens may help identify the kinds of experience debt that you should look to pay down outside of shipping net new features. It can help you understand which areas of UI/UX debt to put into active repayment.

These signals point to building better habits for circling back to deliberate tradeoffs made when shipping fast. It means giving some urgency to iterating on successful MVPs to make them consumer-grade.

Take control of your design debt

I’ve adapted some advice from thought leaders in the Scrum space to help you get a grip on your experience debt. Many writers have shared them before but they bear repeating. Hopefully, they can help guide your conversations about tackling experience debt.

Be transparent about experience debt. Make your debt known and regularly visit your experiences to understand how they accumulate debt. Try end-to-end demos, secret shopping exercises, external onboarding consultants, user testing playbacks. Regardless of the mechanism, bring your entire product team and stakeholders along with you. The product team as a whole needs to understand experience debt and recognize it before they will help invest in paying it down.

Use metrics to track experience debt. There are many ways you might track your experience debt. You could go deep and use everything suggested above. At the very least, count the number of experience issues that are present in your products and regularly review this metric as a product team.

Commit to regular repayment. Like technical debt, have your product teams commit to paying down experience debt every sprint. The engineering world recognizes that developers should spend 15–20% of development time paying down technical and code debt. As designers, we need to fight to increase the volume of experience debt fixes in our development teams’ commitments.

Track experience issues in a backlog. Like making debt transparent, one key to start tackling debt is to track it on the product team’s backlog. Then, they can estimate it and get fixes into sprint planning. Moving tickets into their backlogs helps dev teams understand the amount of experience debt you have and start paying it down in regular, measured payments.

Put experience debt into your definition of done. This is a bit tricky. When you recognize tradeoffs and track the debt associated with releasing an MVP, you should try to include a repayment plan as a measure in your definition of done. Don’t call a shipped experience done until the team has a plan on how to repay the experience debt associated with the release.

Standardize procedures for collecting and paying down debt. Successful teams make deliberate experience tradeoffs to ship fast and learn. They likely do this in informal discussions, decision meetings, or Slack messages. Unfortunately, these decisions go untracked. However your teams decide the best way to handle their experience debt, they need to do so consistently. Document the process for collecting and repaying the experience debt you accrue.

One team, one dream

Hopefully, this helps you better understand why your products accrue design debt, what that debt looks like, and how you can go about paying it down. We all want to make high-quality products, ship early and often, and learn from real-world use whether our designs are successful. Striking that right balance between moving fast and ensuring quality is a tall order.

I’ll add one more tip from our Atlassian customer experience teams:

Speak the same language as your team. Using the language of your developers and support teams helps everyone play the same game. Express design problems in terms that your engineering counterparts recognize and understand. This makes negotiating their commitment much easier.

Now, go pay off your debt. :)


Sam Irons
Designing Atlassian

I’m a designer. I focus on design strategy, design thinking, user research, operations, and delivery. I work as a content designer in the software industry.