What is technical debt? And how to talk about it?

This is a transcript of my eponymous presentation at Confoo 2021.

Title slide

So, What is technical debt? And how do we talk about it?

We’ve all used that term: “technical debt”, “tech debt”. But has it really been useful? We keep struggling with maintenance, development hurdles, and a lack of understanding of how our applications are even supposed to behave. Personally, I like my tools to be sharp, and, for me, a mere metaphor doesn’t cut it. So let’s get closer to the heart of the subject.

The metaphor

So first off, the term “technical debt” is a metaphor. It’s a figure of speech, a thing representing something abstract. So what is “technical debt” representing? Here’s what Wikipedia says:

“Technical Debt is a concept that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.”

So we have our first clue. Technical debt represents the cost of a tradeoff: trading time for a better, less limited solution. Let’s look at another definition.

From the Agile Alliance website: “When taking short cuts and delivering code that is not quite right, a development team incurs Technical Debt. This debt decreases productivity. This loss of productivity is the interest of the Technical Debt.”

So here, we don’t really see a tradeoff, but the definition builds on the metaphor by talking about interest. For a technical debt, the productivity loss is the interest. And we’re left to guess that the “not quite right” code is the principal. And we would be taking on a debt because we’re taking shortcuts.

So… is that it? Is it just that simple?

While it is true that people under pressure take shortcuts, in my opinion, framing the problem as a choice between “being fast and reckless” and “taking the high road” with no technical debt is wrong, and we’ll get back to that, but it’s also misleading.

Personally, I have had many opportunities to write my best code. And every time, without fail, I had the same comment to make when I came back to it later: “Seriously? Who did that?”. And I know I’m not the only one in that situation. So if I invested more time to take the harder route, the one made of cleaner code and design, how come I can’t even resume developing on my own code without needing to rework it first?

I would agree there’s a correlation between clean code and an absence of technical debt, but, as Robert Martin said: “A Mess is not a Technical Debt”. So maybe we have our causality wrong.

Here’s what Uncle Bob has to say:

Technical debt decisions are made based on real project constraints. They are risky, but they can be beneficial. The decision to make a mess is never rational, is always based on laziness and unprofessionalism, and has no chance of paying off in the future. A mess is always a loss.

So it’s not just about taking shortcuts and being reckless. There is something more here than just knowingly making a mess.

What about unknowingly making a mess? Mary Poppendieck says that:

Given a strong motivation, developers have three choices: redesign the work, distort the system (for instance, by ignoring defects), or cheat and game the system. And so if developers do not have the know-how to redesign the system, they are left with two options: distort or game the system.

So, yes, knowingly or unknowingly making a mess will create a technical debt. “Anything that makes code difficult to change is technical debt”. But I would argue the mess here is the least of our problems, and that mess might even be a symptom of a more subtle problem.

So let’s go back to our Wikipedia definition. I said that framing the problem as “less work” for faster delivery, versus “more work” for a better solution was misleading.

And I said that because, 20 years ago, in the Agile Manifesto, thought leaders expressed that:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

They stressed that:

Simplicity–the art of maximizing the amount of work not done–is essential.

And they recognized that:

The best architectures, requirements, and designs emerge from self-organizing teams.

Hopefully, by now, we’re all on board with that. And agility is not limited to the software industry. For instance, no one expects the first draft of a book to be perfect, right? That’s why some books now get written first as blog posts, for early public exposure.

The origin of the Metaphor

Now, let’s see what the person who coined the term “technical debt” has to say about “technical debt” and “agility”. That person is Ward Cunningham. And interestingly enough, Mr Cunningham co-authored the Agile Manifesto. He says:

[A] serious pitfall is the failure to consolidate.

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation[…].

So here, it’s less clear there is a tradeoff. The metaphor focuses on the cost of the “failure to consolidate”, the cost of an “unconsolidated implementation”. What does that mean? Luckily for us, he later explained that:

[to rush] software out the door to get some experience [is] a good idea, but […] of course, as you learn things about that software, you would eventually go back and repay that loan by refactoring the program to reflect [your accumulated learnings].

So this is totally congruent with agile ways of working: it’s a “build to learn”, “learn sooner” approach. And it seems it comes with strings attached:

[Some] people would rush software out the door and learn things, but never put that learning back into the program, and that by analogy, were borrowing money thinking that they never had to pay it back. Of course, if you do that with your credit card, eventually all your income goes to interest and your purchasing power goes to zero.

So, if I go back to my own experience, this explains why, even when I did my best not to make a mess in my code, it still ended up creating technical debt. It did because building and releasing it allowed me to learn more about my problem space, and in so doing, made the implementation “not-quite-right”, as seen from my new, more educated vantage point.

We ❤️ agile ways of working

So what we learned here is that we want to deliver code sooner. We want to be agile. We also learned that agility comes with an explicit cost. As long as we build to learn, actually learn, and continuously consolidate that learning in our software, we fulfill the promise of the Agile Manifesto. If we only do the building, without the learning or the consolidation, what we get is “technical debt”, and this technical debt will make it harder and harder to build. Which is not what the authors of the Agile Manifesto had in mind: “Agile processes promote sustainable development. The [whole team] should be able to maintain a constant pace indefinitely.”

And so, over the past 20 years, our industry transitioned from command-and-control, big-planning-upfront, waterfall ways of working to agile ways of working. Here’s another quote that sums it up:

We are designing an entire system of delivery, from idea to end user, around the truth that every requirement is wrong, we’ve misunderstood it, and/or it will have changed by the time we can deliver a solution for it. The longer the delay between request and delivery, the more expensive that problem becomes.
–Brian Finster

Agile ways of working address the most important thing for a business: market fitness.

The biggest cause of failure in software-intensive systems is not technical failure; it’s building the wrong thing.
–Mary Poppendieck

Move fast, but…

So, we, as an industry, chose to deliver sooner to learn sooner, over delivering later for… for what? Increased confidence? Confidence about what? Safety? When we say “going faster”, there’s the thrill, right? But there is also the fear. We have the intuition that there is a change of balance in how we assess the risks we take. Like the risk of breaking things.

Mark Zuckerberg, standing in front of a screen saying “Move fast and break things”.

And while breaking things can be cathartic, it also triggers emotions like fear, and experiences like pain. And also, nobody enjoys broken products or services. The good news is that it’s not a choice we have to make. Remember when I said framing the problem as a tradeoff was wrong? That’s what I meant.

Move fast, and… The DevOps answer

For 6 years, with data from over 31,000 professionals worldwide, the State of DevOps research program, the longest running investigation of its kind, identified the most effective and efficient ways to develop and deliver software. And here’s what they found:

“We developed and validated metrics that provide a high-level systems view of software delivery and performance and predict an organization’s ability to achieve its goals. These metrics can be summarized in terms of throughput and stability. Many professionals approach these metrics as representing a set of trade-offs, believing that increasing throughput will negatively impact the reliability of the software delivery process and the availability of services.”

For six years in a row, however, our research has consistently shown that speed and stability are outcomes that enable each other.

So there is no tradeoff. Speed, measured as throughput, and stability go hand in hand; they build on each other. And we can see that manifest itself in reality.

Mark Zuckerberg, in 2014, standing in front of a screen saying “Move fast with stable infra”. Credits: Mike Isaac

A better definition of Technical Debt

Technical debt, beyond a mess, is the byproduct of delivering sooner to learn sooner, if we don’t consolidate these learnings back into what we build.

So now we have a better understanding of what the technical debt metaphor actually represents. It’s a lack of consolidation, a failure to rework our systems after having learned, from real use, that our systems are “not-quite-right”. This lack of consolidation is the principal of the debt. The debt generates and accrues interest, in the form of future time spent dealing with it, leading to a reduction of productivity that can be measured as a loss of software delivery throughput and stability.

The metaphor also hints at the non-linear relation between accrued interest and time, and the savvy investors in the room will be familiar with the concept of compounding interest, which can be shown like so:

A graph showing value over time. Simple interest grows value linearly over time. Compounded interest grows value exponentially over time.
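For the savvy investors in the room, the difference is easy to make concrete in a few lines of code. This is purely illustrative: the principal, rate, and periods below are made-up numbers, not a claim about how real technical debt accrues.

```python
# Compare simple (linear) vs. compound (exponential) interest growth.
# All figures are illustrative assumptions.
principal = 100.0
rate = 0.10  # 10% interest per period

def simple_interest(periods: int) -> float:
    """Value with linear (simple) interest."""
    return principal * (1 + rate * periods)

def compound_interest(periods: int) -> float:
    """Value with exponential (compound) interest."""
    return principal * (1 + rate) ** periods

for t in (0, 5, 10, 20):
    print(f"t={t:2d}  simple={simple_interest(t):8.2f}  compound={compound_interest(t):8.2f}")
```

The gap between the two curves widens with every period: compounding is what makes an unpaid debt eventually dominate everything else.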

It’s interesting to know that Ward Cunningham used the “technical debt” metaphor to translate his concerns to his stakeholders when he was working for a financial firm. So the metaphor, in this context, might have been highly apt to convey the risks of not dealing with the problem.

…but why are we taking on debt?

So now we have a better understanding of what the technical debt metaphor represents. But we don’t know why, despite our best efforts, we are involuntarily taking on debt.

And the subtlety here, is that it’s not despite our best efforts, it is because of our best efforts. And to understand that, we need to invoke the ways of DevOps.

In his seminal book “The Phoenix Project”, Gene Kim shares the three ways of DevOps. They are: system thinking; amplifying feedback loops; and a culture of continual experimentation and learning. We already talked about that third way when we talked about agile ways of working, about the need to build to learn, and the need to consolidate these learnings. Let’s explore what the other ways can teach us about the “why”.

And the best explanation I found so far to answer why we’re unwillingly taking on debt has been clearly expressed in this research paper by Repenning and Sterman: “Nobody Ever Gets Credit for Fixing Problems that Never Happened: Creating and Sustaining Process Improvement”.

So hold on to your pants, and let’s dive in.

Process Performance, a system thinking view

The actual performance of any process depends on two factors: the amount of time spent working, and the capability of the process used to do that work.

The performance of any process can be increased by dedicating additional effort to either work or improvement. However, the two activities do not produce the same outcomes. Time spent on improving the capability of a process usually yields more enduring change.

For example, crunch time increases team productivity, but only for the duration of the crunch period. Gains in process capability, however, boost performance for every subsequent effort. We know that putting more effort into fixing defects creates better software quality, but only as long as we maintain that effort. But solving the root cause of those defects removes the need to fix them in the first place.

In system thinking, we represent that “persistence effect” as a stock. A stock is like an inventory, with inputs and outputs.

Time spent on improvement augments the process capability. Time spent on improvement does not immediately improve performance, though. It takes time to uncover the root cause of a problem, and then test and implement the solution. And no improvement lasts forever. Entropy works its magic, and without regular attention and care, our capability erodes.

Besides these factors, let’s also represent the “desired performance”, like management expectations.

That expectation compared to the “actual performance” is the “performance gap”. This gap is always more or less present, there is always more to do, the backlog is rarely empty.
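A minimal sketch of that stock, in code, might look like the following. The structure (an inflow from improvement work, an outflow from entropy) comes from the description above; the rates and constants are invented for illustration.

```python
# A minimal sketch of the capability "stock" described above.
# All rates and constants are invented assumptions.

def step(capability: float, time_on_improvement: float,
         improvement_rate: float = 0.5, erosion_rate: float = 0.1) -> float:
    """One time step: improvement work fills the stock, entropy drains it."""
    inflow = improvement_rate * time_on_improvement
    outflow = erosion_rate * capability  # no improvement lasts forever
    return capability + inflow - outflow

def performance(capability: float, time_working: float) -> float:
    """Actual performance depends on time spent working and on capability."""
    return capability * time_working

# The performance gap is how far actual performance falls short of desired.
desired = 50.0
cap = step(capability=10.0, time_on_improvement=2.0)
gap = desired - performance(cap, time_working=4.0)
```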

To close our “performance gap”, short of getting more capacity, we’re left with two basic options.

Working Harder

The performance gap puts everyone under pressure to perform. That pressure can be explicit, like KPIs or velocity targets, or implicit, like increased micro-management or a slight shift in culture. That pressure incentivizes teams to spend more time and energy doing work. And an increase in effort also increases the performance of the process, and closes the performance gap.

This forms a “balancing feedback loop”. This loop constantly balances desired and actual performance.

The second option we have to close the “performance gap” is to improve the capability of the process. So as opposed to “work harder”, we can “work smarter”.

Working Smarter

Here we respond to a performance shortfall by increasing the pressure on people to improve the capability itself. We might kick off new improvement projects, or increase training. If successful, this investment will, with time, yield improvements in process capability, increase throughput, and close the performance gap.

Limitations

From this vantage point, we can all see that it’s better to “work smarter” than to “work harder”: an hour spent working produces an extra hour’s worth of output, while an hour spent on improvement may improve the productivity of every subsequent hour dedicated to production. Yet despite its obvious benefits, working smarter does have limitations.

First, there is often a substantial delay between investing in improvement and enjoying the result of that improvement. And the greater the complexity of the process, the longer it takes to improve.

Second, investments in capability can be risky. Improvement efforts don’t always find the root cause of defects, new tools sometimes don’t produce the desired gains, and experiments often fail. While investments might eventually yield a large positive impact on productivity, they do little to solve the problems we face right now.

So it’s not surprising that we frequently use the “work harder” loop to both accommodate variations in daily workload and react to urgent needs created by unplanned work, like defects. When a production incident happens, we’re unlikely to react by sending our team to a training on “reliability improvement”. Instead, we do what it takes to fix the problem and get back to baseline. And of course, when everything is back to normal, we should return our attention to improving our process in order to prevent further incidents, hoping to make up for the time lost dealing with the incident. But that doesn’t usually happen. Instead, what the authors observed, and what is genuinely difficult to understand, are teams in which working harder is not just a means to deal with incidents, but has instead become their standard way of working.

Rather than using the “work harder” loop to occasionally offset daily variations in work, teams come to rely constantly on working harder to match productivity expectations and, consequently, never find the time to invest in improvement activities. What starts as a temporary focus on working harder quickly becomes the norm.

And to understand why, it’s helpful to consider how “working smarter” and “working harder” are connected.

Reinvestment Loop

That connection exists because teams rarely have excess capacity. Increasing the pressure to do work leads people to spend less time on non-work-related activities; they use the “work harder” loop. There are, however, obvious limits to this. After a while, one cannot continue to work harder. If the performance gap continues to widen, teams have no choice but to reduce the time they spend on improvement as they strive to meet their expectations. This connection between “pressure to do work” and “time spent on improvement” creates the “reinvestment loop”.

Unlike the two other loops, “working harder” and “working smarter”, the reinvestment loop is a positive feedback loop that reinforces whichever behaviour currently dominates.

So, on one hand, a team that successfully improved its capability will experience increased performance. And, as the performance increases, the gap is reduced, and the team has more time to devote to further improvement. This creates a virtuous cycle, a self-reinforcing loop.

On the other hand, if the team responded to the “performance gap” by increasing the work pressure, it will also increase its “time spent working”, and so cut the “time spent on improvement”. Their productivity starts to decay, the performance gap widens, thus forcing a further shift toward “time spent working” at the expense of “time spent on improvement”. Here the reinvestment loop acts as a vicious cycle, driving the team to ever higher pressure and minimal capability.

The “reinvestment loop” means that a temporary focus on one option at the expense of the other is likely to be reinforced and eventually becomes permanent.

Understanding why the reinvestment loop typically works in the downward, vicious direction rather than the upward, virtuous direction requires that we add a final connection in this model.

Shortcuts Loop

As we saw already, cutting investments in maintenance and improvement in favour of “working harder” erodes our capability, and hurts performance. But capability doesn’t drop right away. It takes time for our capability to decay. In the meantime, the decision to skimp on improvements boosts the time available to get work done right now.

When the performance gap rises, and we resort to increased work pressure, the team eventually starts to cut back on improvement activities to free up more time to work harder. This is the “shortcuts loop”. Increasing performance by cutting corners and taking shortcuts comes at the cost of reducing the time spent on learning and improving.

Shortcuts are tempting because there is often a substantial delay between cutting corners and the consequent decline in productivity.

A developer who forgoes tests or proper documentation in favour of meeting a deadline incurs few immediate costs. Only later, when they return to fix bugs discovered by the QA team, or worse, by the users, do they feel the full impact of a decision made weeks earlier.

Thus, the shortcuts loop is effective in closing the “performance gap” only because capability doesn’t change immediately.

System response

To illustrate these dynamics, let’s look at two different use cases and see how the process reacts to “working harder” versus “working smarter”. Both use cases begin in the same equilibrium state. Now, let’s increase our expectations!

The first use case shows the response to an increased focus on “working harder”. As more effort is dedicated to work, net performance immediately increases. Time spent improving falls immediately, but capability doesn’t. Performance therefore increases. The benefits of working harder, though, are short-lived. With less time devoted to improvement, capability slowly declines, eventually more than offsetting the increased time spent working. Working harder creates a “Better-before-worse” situation.

On the other hand, by “working smarter”, we see that an increase in time spent on improvement reduces performance in the short term. Eventually, though, capability increases more than enough to offset the drop in work effort, and performance is sustainably higher. This is a “Worse-before-better” dynamic.
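These two trajectories are easy to reproduce with a toy model. Everything below is a hypothetical sketch of the dynamic just described, not the paper’s actual model: the effort allocations, rates, and starting capability are invented numbers.

```python
# A toy reproduction of "better-before-worse" vs. "worse-before-better".
# Every number here is an invented assumption.

def simulate(time_working: float, time_improving: float, steps: int = 30,
             capability: float = 10.0,
             improvement_rate: float = 0.5, erosion_rate: float = 0.1):
    """Performance trajectory for a fixed allocation of effort per period."""
    trajectory = []
    for _ in range(steps):
        # Improvement feeds the capability stock; entropy drains it.
        capability += improvement_rate * time_improving - erosion_rate * capability
        trajectory.append(capability * time_working)
    return trajectory

# Same total effort (8 units per period), allocated differently.
harder = simulate(time_working=7.0, time_improving=1.0)   # shortcut-heavy
smarter = simulate(time_working=5.0, time_improving=3.0)  # improvement-heavy

better_before_worse = harder[0] > smarter[0]    # working harder wins early
worse_before_better = harder[-1] < smarter[-1]  # working smarter wins later
```

With these assumed numbers, the shortcut-heavy allocation outperforms at first and then decays below the improvement-heavy one, which is exactly the crossover described above.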

The Capability Trap

The interaction between the “shortcuts loop” and the “reinvestment loop” creates the “Capability Trap”.

The “Capability Trap” helps explain why teams often find themselves stuck in a vicious cycle of declining capability. A team in need of an immediate performance boost can get it by skimping on improvement and maintenance. However, capability eventually declines, causing the reinvestment loop to work as a vicious cycle. It’s the “Better-before-worse” dynamic.

And because working harder and taking shortcuts produce more immediate gains, and help solve today’s problems, teams unaware of this dynamic and its trade-off are likely to choose “working harder” over “working smarter”.

And unfortunately, the short-terms gains come at the expense of the long-term health of the system.

The research also suggests that teams caught in the capability trap are unlikely to realize the true source of their problem. We often don’t realize how deeply we are trapped. Instead, when caught, instincts and lessons learned often lead to actions that make the situation worse.

The Fundamental Attribution Error

We generally assume that cause and effect are closely related in time and space: to explain a surprising event we look for another recent and nearby event that might have triggered it.

So when we measure IT performance by velocity, or by number of defects, or number of incidents, we set ourselves up to make wrong assumptions. When velocity goes down, when defects keep popping up, when production is regularly on fire, we’re likely to look at the people in charge of, respectively, developing, QAing and operating our systems, because these people are close, in space and time, to the problem.

But the true cause of that problem might be distant, in space and time. And because the delay between the true cause and the problem is long, variable, and often, unobservable, a manager is likely to conclude that the cause of low productivity is inadequate individual effort and discipline, rather than concluding that the cause is actually a feature of the process.

This faulty attribution of a problem to individuals in a system, rather than to the system itself, is so pervasive that psychologists call it the “fundamental attribution error”. And this bias means managers and leaders are prone to push their teams in the “capability trap”.

Self-Confirming Attribution Error

Managers cannot observe all the activities of their teams, so they cannot easily determine how much of an increase in performance is due to “working harder” versus taking shortcuts. As a result, managers might overestimate their impact when increasing the “desired performance”, and are not aware of the trade-off they’re incentivizing their teams to make.

When we mix this lack of visibility with the fundamental attribution error, and when the team resorts to shortcuts to increase its performance and reduce the “performance gap”, managers are provided with evidence confirming their suspicions that the team wasn’t giving its full effort. This syndrome is called the “Self-Confirming Attribution Error”, and it fuels the “work harder” vicious circle.

So what?

First, the most important implication of this research is that our experience often teaches us exactly the wrong lessons about how to maintain and improve the long-term health of our systems. This means that successfully reversing negative dynamics involves a significant mindset shift for both leaders and teams.

“What got us here won’t get us there”

The good news is that addressing the system works. Dr. W. Edwards Deming, the famous engineer and management consultant, estimated that more than 90% of problems find their cause in the system and not the individuals. Meaning the most effective way to improve is to improve the system.

The “not so good” news is that changing a system is hard. Do not underestimate the amount of energy required, and the amount of distress and pain it will generate. It’s the “Worse-before-better” dynamic.

Second. As we saw, the manager’s “self-confirming attribution error” is mostly due to a lack of visibility. As a developer turned manager, I can personally relate to that. It is really hard to get a sense of the reality when you don’t have your boots on the ground. That makes me empathize with my manager, and my manager’s manager. I also empathize with testers in the QA team who literally spend their days “black box” testing.

So visibility is our way forward to cut through the haziness of the technical debt metaphor.

Visibility

We’ve seen that, due to a lack of visibility, managers can push their teams into the capability trap, by not realizing their teams started to skimp on improvement work under the high pressure to work harder. And because “you can’t have your cake and eat it too”, that’s a problem managers and leadership need to deal with, or it will become someone else’s problem.

So let’s put on our manager hat.

As Mark Schwartz writes, “Sometimes we technologists can be a bit too clever for our own good.” He argues that:

Technical Debt has come to mean just about anything that technologists believe requires an investment whose value is not obvious to the rest of the business. [It’s like saying] to the CFO “these are the things you’ll never understand but have to give us money or allocate time for.”

Here we see again that an abstract metaphor is not really helping anyone. For non-IT people, the term “technical debt” can even be seen as a lack of transparency. So let’s be transparent about our work. Let’s be complete and exhaustive.

Part of our work is already visible to us and our teams. We’ve been designing and building systems for a while after all. And we do that with roadmaps, backlogs, and ticketing systems. We are neatly organized around the needs of our customers. We have things like epics and user stories for new features. We also have an issue tracker for bug fixes, and we keep a “nice-to-have” backlog.

But what about everything else? What about the “working smarter” work? Some of it is in the backlog, mixed with everything else. Some of it might live in documents, or in people’s heads. And there’s probably the stuff that we didn’t think about, or that we just forgot.

To achieve visibility, our work needs to be observable: being in someone’s head is not enough. What is observable needs to be exhaustive as well: so that we can get early warnings when we start to skimp on some of it. And all of it needs to be categorized: we don’t manage new features the way we manage a typo fix, or a disclosed vulnerability.

Flow Framework

For our needs, the Flow Framework has a solution. Here’s how Gene Kim describes it:

I’ve found the Flow Framework to be incredibly useful for describing the finite ways that engineers spend time and the consequences of not addressing tech debt.

The Flow Framework focuses on Value Stream Management via so-called flow metrics. What interests us here are the units used for these metrics: the flow items.

Flow Items

Each of these 4 items is a unit of business value, pulled by a stakeholder through the software delivery process. Here’s the list.

  • Features are new value added to drive a business result. This value is visible to the customer. The work delivers new business value. It is pulled by customers, and looks like user stories and requirements.
  • Defects are quality problems that affect the customer experience. The work delivers external quality. It is pulled by customers, and looks like bugs, problems, and incidents.
  • Risks are security, regulation, and compliance exposures. Risks are not visible to customers. That is, not until it’s too late. The work delivers security, governance, and compliance. It is pulled by a Risk Management Officer, and looks like vulnerabilities, regulatory and contracting requirements, and internal compliance.
  • Debts are anything that reduces the ability to modify or maintain our software in the future. The work delivers the removal of impediments to future delivery. It is pulled by architects, and looks like API additions, refactoring, process change and automation, or changes of architecture.

As a whole, these 4 categories are mutually exclusive, meaning each unit of work can only be present in one of these categories. This provides clarity, and allows us to manage each group with its own specific processes. We don’t manage risks the way we manage features.

These 4 categories are also collectively exhaustive, meaning all the work we do fits within these categories. And this is what provides visibility.

So if you only work on features and defects, odds are you’re not taking care of your risks and debts. This could be a strong signal telling you that you’re relying on the “shortcuts loop”. Which means, you’re in the “capability trap”.

On the other hand, if you actively manage your risks and debts, you’ll be addressing some of your defects’ root causes, which will reduce your defect workload. You’ll be addressing your “failure demand”. And all of that will free up capacity to work on adding business value. You’ll be “Working Smarter”.

The content of these categories is pulled by a stakeholder. And from this pull-based flow, the Flow Framework derives process metrics like load, throughput, and efficiency. Within this framework, the only technical debt work that should be prioritized is the work that increases future productivity. Indeed, technical debt should never be worked on for its own sake alone. By focusing on flow, paying off your technical debt can be planned in relation to its business relevance.
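As a sketch of how these flow items might be tracked in practice, here is a hypothetical model. The four category names come from the Flow Framework, but the data model, the numbers, and the warning heuristic are my own assumptions, not part of the framework itself.

```python
# A sketch of the four flow-item categories and a simple throughput
# breakdown. Category names follow the Flow Framework; everything else
# is a hypothetical illustration.
from collections import Counter
from enum import Enum

class FlowItem(Enum):
    FEATURE = "feature"  # new business value, pulled by customers
    DEFECT = "defect"    # external quality, pulled by customers
    RISK = "risk"        # security/compliance, pulled by risk management
    DEBT = "debt"        # removal of impediments, pulled by architects

# Work completed this period: each unit belongs to exactly one category.
completed = [FlowItem.FEATURE] * 12 + [FlowItem.DEFECT] * 6 + [FlowItem.DEBT] * 2

def flow_distribution(items):
    """Share of throughput per flow item, as fractions of the total."""
    counts = Counter(items)
    total = len(items)
    return {kind: counts.get(kind, 0) / total for kind in FlowItem}

dist = flow_distribution(completed)
# Zero risk and debt work is a possible "shortcuts loop" warning sign.
if dist[FlowItem.RISK] == 0 and dist[FlowItem.DEBT] == 0:
    print("Warning: no risk or debt work completed this period")
```

Because the categories are mutually exclusive and collectively exhaustive, the distribution always sums to one, and a persistent zero in the risk or debt buckets is visible at a glance.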

That relevance is contextual. Because, as opposed to a loan, you don’t have to repay your technical debt. The debt has to be paid only when you change your software. And in that way, it behaves much more like a goods and services tax. And like a goods and services tax, if you don’t buy the good, you don’t pay the tax.

A better metaphor to communicate: “Technical Delta”

Thinking in terms of a technical gap, or technical delta, can help us make that point. A technical gap, like the performance gap, is created by a discrepancy between what we have and what we want to have, between our current technical capabilities and the desired technical capabilities required to enact the business strategy.

By connecting technology concerns directly to business concerns, we make the so called “technical debt” visible and readable to the whole organization.

An even more useful metaphor could be “health”, as in a system’s health. This brings to mind “illness” and “care”. We can start to talk about life cycles, investment, maintenance, and eventual retirement. This seems much less opaque, and much more inclusive, than talking about technical debt.

Summary

So, let’s summarize:

  • Technical debt, beyond making a mess, is the natural consequence of agile ways of working done recklessly.
  • It stems from the lack of visibility and awareness that managers have on the actual work being done (and not done). And using that metaphor doesn’t really help to make that work more transparent. So managers, stop blaming the developers.
  • We’ve also learned that the “fundamental attribution error” is pervasive, and that the best way to improve, is to improve the system. So developers, stop blaming the managers.

Make all work to be done visible.

  • Developers: itemize and capture your productivity impediments, so that managers can see them.
  • Managers: account for risks and debts. If you don’t talk about them, assume that no one does, and that nothing is getting done about them. And that should be a problem.

Let’s face it, we’ve all jumped on the agile bandwagon, realizing we could be more productive.

But agility is not about being productive, agility is about learning fast that you’ve been building the wrong thing.

From now on, let’s assume we’re not done after a code release. With our team, we then need to review our work, observe our users, learn, and consolidate any learnings back into our software.

And if we still need to take a shortcut, here is some advice:

Cutting scope and meeting the deadline is almost always the best approach.
–Mary Poppendieck

So if you really need to cut something, be agile, cut your scope, and learn sooner.

Special thanks to Tiani Jones for the support and the inspiration

