It’s a trap! Systems traps in software development

Stuart Rimell
May 24, 2015


Have you ever attempted to improve a situation only to find that you've made things worse? Have you ever followed conventional ‘best practice’ only to find that it’s just not working as you’d hoped? Does it ever feel like the more you try, the worse things get?

In her seminal work on systems thinking, Thinking in Systems: A Primer, Dana Meadows describes a number of systems structures that tend to result in “problematic behaviour”. She calls such archetypes ‘traps’, as a failure to recognise them can result in unexpected problems. Such traps are extremely common in all areas of life, and are no less prevalent in software development.

This post describes the eight traps introduced by Meadows, giving examples of how each manifests, drawn from my own experience in software development.

How many of these traps have you fallen into? How are your scars healing?

Policy Resistance

Policy resistance might be better described as fixes that fail. This archetype can manifest when there is poor alignment between the needs of different parties within a system.

Take the supply of illicit drugs for example. Users want a prolific supply, law enforcement wants a low supply and dealers want the supply to be high enough to be able to maintain availability but low enough to maintain high prices. Any intervention in this system tends to result in the other parties doubling their efforts to better meet their own needs. Therefore little changes.

This is an example where balancing feedback loops work in opposition to each other, preventing meaningful change.

Meanwhile, back in the workplace (unless you’re actually a drug dealer, in which case my work here is done), let’s consider technical debt and the goals of the various parties with respect to it. Developers wish to keep technical debt low, for reasons that are obvious if you’re a developer. Managers also wish to reduce technical debt but are frequently under pressure to deliver quickly for stakeholders, which may mean trading off debt reduction against meeting an urgent stakeholder need. Stakeholders may care little about technical debt (assuming that quality is a given), wishing only to see their needs met as quickly as possible.

Attempts to reduce technical debt are met with calls to increase throughput, and attempts to increase throughput are met with calls to reduce technical debt. Therefore little changes. This trap can lead to learned helplessness on the part of developers and stakeholders, and before you know it you can find yourself in an organisation where the pervading mindset is “why bother?”

This particular example should be reasonably easy to fix if the interested parties are aware of the delay in the feedback loop associated with addressing technical debt. Attending to technical debt should actually result in higher throughput, but this benefit is delayed until the debt is sufficiently reduced. If your organisation perpetually runs in urgent-and-important mode, this delay is unlikely to be palatable.
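That delayed benefit can be made concrete with a toy simulation. All the numbers here (how hard debt drags on delivery speed, how fast paydown retires it) are invented for illustration, not measured from any real team:

```python
def throughput(pay_down, sprints=30, debt=50.0):
    """Toy model: cumulative delivery when a fraction of each
    sprint (`pay_down`) is spent retiring technical debt."""
    delivered = 0.0
    for _ in range(sprints):
        speed = 10.0 / (1.0 + debt / 25.0)   # debt drags on delivery speed
        feature_effort = 1.0 - pay_down
        delivered += speed * feature_effort  # only feature work counts as output
        # feature work accrues a little new debt; paydown retires it faster
        debt = max(0.0, debt + 2.0 * feature_effort - 10.0 * pay_down)
    return delivered
```

In this sketch, ignoring the debt wins for roughly the first fifteen sprints; after that the paydown strategy pulls ahead and keeps pulling ahead. That early deficit is precisely the delay that makes the fix so hard to sell.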

Policy resistance is simply the result of conflicting needs between multiple parties in a system. May I propose that the ultimate solution might simply be (though it is oh so rarely seen) to attend to folks’ needs.

Tragedy of the commons

Tragedy of the commons is a systems archetype that describes an escalation in the usage of a common resource, eventually resulting in its depletion and ultimate destruction. Importantly, the individual consumers of the resource are incentivised to increase their consumption even though doing so will contribute to its downfall.

A commonly described tragedy of the commons is over-fishing. Each fisherman is incentivised to catch more fish even though doing so will eventually result in the depletion of the fish population and the destruction of the fisherman’s livelihood. A tragedy of the commons is characterised by the short term needs of the individual being met without sufficient regard to the needs of others or to the long term needs of a shared resource. A classic sub-optimisation.

In software, collective code ownership can be viewed as a ‘commons’ and is as susceptible to depletion through inattention to long term needs as fish stocks or grazing land. This can be especially prevalent if developers are incentivised to deliver ‘value’ quickly at the cost of maintainability or comprehensibility. If this is the case, each code change may deplete the codebase’s overall quality, eventually resulting in unmaintainable software.
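A toy simulation of this dynamic, with invented numbers: each shipped feature earns value in proportion to current code quality, degrades quality a little, and quality slowly recovers when the load on the codebase is low:

```python
def commons(teams, features_each, sprints=20, damage=0.01, regen=0.1):
    """Shared codebase as a commons: quality is depleted by every
    change and slowly regenerates when the change load is low."""
    quality, value = 1.0, 0.0
    for _ in range(sprints):
        shipped = teams * features_each
        value += shipped * quality  # value per feature scales with quality
        quality = max(0.0, min(1.0,
            quality + regen * (1.0 - quality) - shipped * damage))
    return quality, value

greedy = commons(teams=5, features_each=4)      # everyone maximises own output
restrained = commons(teams=5, features_each=2)  # everyone ships less
```

In this sketch the greedy teams destroy the commons within a handful of sprints and end up delivering less total value than the restrained ones, even though each individual team was ‘more productive’ at every single step.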

Other tragedies of the commons in software development:-

  • Too much work in progress leading to the depletion in quality and throughput of a team’s entire output.
  • Team sub-optimisation in a multi team ‘project’ (careful about those projects now….). If incentives are strongest at the team level, don’t be too surprised if each team looks after its own rather than attending to the needs of the whole.
  • Individual sub-optimisation where developers are measured by their own performance rather than by the performance of the team/product/organisation.

This sounds like selfishness, doesn't it? Well, it’s not. Incentives, whether intrinsic or extrinsic, designed or accidental, are powerful. Each player in a tragedy of the commons may be completely oblivious to their effect on the ‘commons’, often because this effect is completely invisible until it’s too late.

What are you incentivising?

No, think again.

What are you actually incentivising?

Drift to low performance

Drift to low performance describes a trap where a system not only resists positive change, but continually drifts towards poorer performance.

This trap leverages the human mind’s unfortunate propensity to believe bad news more than good news. When the perceived state of a system is poorer than its actual state, a drift to low performance can occur. Goals can erode over time leading to a downward spiral.

This trap often befalls legacy code, where quality can drift lower and lower over time despite attempts to maintain it. The perceived state of legacy code (in the minds of the developers working on it) tends to be worse than its actual state. Therefore one’s goals for quality maintenance continually track lower than objective reality (if there were such an objective measure of quality). When one’s goals are always lower than reality, erosion of those goals is the likely outcome, resulting in this case in poorer and poorer code.

Why is the perceived state of legacy code often worse than objective reality? Well, just starting out with the name ‘legacy code’ already puts one in a negative frame. The goals for maintaining quality in legacy code tend to hover around “just good enough” and as time goes on, legacy just becomes more legacy and the perception of its state slips further, as does the perception of “just good enough” and the associated maintenance efforts.

This trap occurs when we base our future projections on the system’s current state rather than its best state or an objective, pragmatic ideal.

Where else have you seen a drift to low performance in software engineering?

Sprint forecasts in Scrum can fall victim to eroding goals. Fail to meet your sprint objectives? Forecast a little less next time. Still fail to meet your objectives? Why not forecast a little less? Sound familiar?

This trend can be especially prevalent if teams are guilt tripped into meeting their commitments or if punitive action is taken when deadlines slip. Simply working harder is very rarely the solution and if you wish to get your needs met by threat, well good luck with that.

Escalation

The arms race of systems traps. If you punch me, I’ll punch you harder, and I’d better brace myself for an even harder reply. This trap is about keeping slightly ahead of the Joneses.

One might consider this trap to be the opposite of ‘drift to low performance’, in that one continues to escalate a system state in response to the perceived state of a competitive system. This can be either positive or negative, depending again on incentives, or the perceived system goal.

Let’s imagine that the system goal is to attend to the needs of the customer more effectively than the competition. The sort of arms race produced in this case could be healthy. However, if ‘attending to the needs of customers’ means reducing cost below that of the competition, there’s only so far this reinforcing feedback loop can run before other needs come into play (such as quality, profitability, etc.). In general, pursuing one-upmanship quickly leads to negative consequences, as exponential change cannot go on forever.
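Because each side’s target is a multiple of the other’s last position, the loop is exponential by construction. A two-variable sketch makes this obvious (the 20% ‘edge’ is an arbitrary illustrative figure):

```python
def escalate(a=10.0, b=10.0, edge=1.2, rounds=10):
    """Each side responds to the other's last move by topping it."""
    history = []
    for _ in range(rounds):
        a = b * edge   # A aims to stay just ahead of B
        b = a * edge   # B replies in kind
        history.append(b)
    return history
```

Ten rounds of one-upmanship inflate the starting position almost fortyfold. No real budget, team, or metric sustains that, which is why escalation always ends by slamming into some other limit.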

This trap can be seen frequently in software engineering, particularly when competition is used misguidedly within organisations as a motivator.

Have you ever been measured against other individuals by the number of unit tests you've written? The number of bugs fixed, stories achieved, code reviews completed or, god forbid, the number of lines of code you've written? Has your team ever been compared with another using velocity metrics? Have those ‘KPIs’ ever been published widely in order to ‘motivate continuous improvement’ or (as is more likely) to name and shame? What happens when we’re all measured in such a way?

The measurements will improve, that’s what!

They’ll improve and improve until you've got loads of unit tests, loads of bugs fixed, loads of code reviews, and more lines of code than you can shake a stick at. Congratulations.

Now when you've finished inspecting these measurements, look up for a minute to see whether you've still got a business. Remember to turn the light out on your way out.

As my somewhat exaggerated example shows, escalation can be most toxic when combined with another system trap, ‘seeking the wrong goal’. Though even when the goals are entirely worthy, escalation can knock your system out of whack pretty quickly.

Success to the successful

The rich get richer while the poor stay poor. This in a nutshell is success to the successful: a systems archetype whereby opportunity is presented only to those who have been successful in the past. This is an example of a reinforcing feedback loop where success breeds success.

This is a trap you say? Surely this is the way of the world? Isn't this synonymous with Darwinian evolution? Isn't this meritocracy? Isn't meritocracy a good thing?

Well no, it’s not as simple as that. No.

Does your organisation proudly espouse meritocracy as one of its cultural values? This is taken to mean that success has nothing to do with privilege or wealth and has everything to do with ‘demonstrated merit’. How is this working out for you?

What might be the implications of incentivising employees to be, or more importantly to appear to be successful? If the most important factor in your ongoing progression is to appear successful, how might this affect your willingness to fail? How might this affect your ability (as an individual or an organisation) to innovate?

In her wonderful book Mindset, Stanford psychology professor Carol Dweck describes the differences between what she calls the growth mindset and the fixed mindset. Individuals with a fixed mindset tend towards believing that basic qualities such as intelligence or talent are fixed traits. Therefore they tend to spend their time trying to appear intelligent or talented rather than actively developing these traits. Individuals with the growth mindset, on the other hand, tend towards believing that brains and talent are just the starting point and can be developed substantially through dedication and hard work. The growth mindset creates a love of learning and a resilience to failure that is essential for great accomplishment.

What mindset does your brand of meritocracy nurture?

Linda Rising equates the growth mindset with the ‘agile mindset’, arguing that a willingness to fail, to adapt and to embrace continual improvement is incompatible with the fixed mindset. Yet our current cultural and organisational conventions tend to value the appearance of success above all else rather than the dirty, messy, failure prone path that gets you there.

The route to agility lies in embracing failure, so be careful with your implementation of ‘success to the successful’.

It’s not all about agility of course. Success to the successful can lead to the ‘Peter Principle’, whereby people are promoted to their level of incompetence on the basis of their performance in a former role. And please, if someone equates meritocracy with Darwinian evolution, remind them that there’s a name for that. It’s called ‘Social Darwinism’ and it was used to justify all kinds of unpleasantness by a certain Adolf Hitler during the first half of the 20th century.

Shifting the burden to the intervenor

Shifting the burden to the intervenor has a lot in common with addiction. The intervenor in this case is a solution to a problem, and this trap is sprung if the solution undermines the capacity of the system to maintain itself. Take alcohol addiction for example. The one thing that really solves the pain of withdrawal is a drink. This works until the next morning when the pain hits harder, only to be alleviated by more drink. Alcohol in this case is the solution. A really poor one.

Recall bias leads us to think of alcohol or drugs when contemplating addiction, but the same systemic patterns are all around us, including in organisations focussed on software development. And no, I’m not implying that work drives you to drink.

Consider bug fixing, for example. You often have a choice: patch over the symptom, or address the underlying cause of the bug. Rapid symptomatic relief is often achieved by choosing the first option, leaving the underlying cause unaddressed. Simply patching over a bug can result in code that is more difficult to maintain, resulting in more bugs, which in turn are fixed with more patching. This downward spiral ultimately leads to the hangover of unmaintainable software: software that has fallen victim to the unwise solutions to its own problems.

Once you notice this pattern, you start to see it popping up all over the place. Here are a few more examples of shifting the burden to the intervenor in software driven organisations:

  • Suffering from poor quality in production? Hire more manual QAs to ‘focus on quality’ instead of investing in automated testing and continuous delivery techniques. More manual testing means less reliance on automation resulting in the requirement for yet more QAs as the codebase grows.
  • Suffering from poor flow efficiency? Hire more developers rather than working out how to increase flow efficiency. More developers means more communication and management overhead, resulting in decreased flow efficiency, which necessitates more developers.
  • Teams not meeting deadlines? Insist on overtime or ‘greater commitment’ rather than addressing scope or investigating ways to work smarter. Greater commitment (or my favourite war cry: “more passion”) leads to short term gains, followed by long term fatigue, quality erosion, performance decline and inevitable further calls for greater commitment.

What are you addicted to? Remember the first step is admitting you have a problem.

Rule beating

Do speed traps really reduce speeding or do they incentivise speeding followed by rapid deceleration before cameras? Do departmental budgets really act as an upper limit to spending or do they serve as a target for over spending?

“Rule beating” describes the phenomenon whereby rules cause a system to behave in a distorted way, a way that would make no sense at all in the absence of rules. This is commonly referred to as following the letter rather than the spirit of a law. A rule beater makes it look as if he is adhering to a rule, while actually contravening it.

Let’s consider an organisational rule that requires a lengthy sign-off process for any work estimated as taking longer than x days to complete. Will this result in all large efforts being subjected to the sign-off process? Or will this result in a glut of work estimated at x-1 days?
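The distortion is easy to picture with a throwaway simulation. Assume (invented for illustration) a 10-day sign-off threshold and teams willing to shave estimates that land just over it:

```python
import random

def reported(true_days, threshold=10):
    """Estimate a team reports when anything over `threshold`
    days triggers a lengthy sign-off process."""
    if threshold < true_days <= threshold + 3:
        return threshold - 1  # shave the estimate and dodge the sign-off
    return true_days

random.seed(1)
true_estimates = [random.randint(1, 20) for _ in range(1000)]
reports = [reported(d) for d in true_estimates]
```

The reported distribution develops a suspicious spike at nine days and a hole at eleven to thirteen, a shape the rule alone created, and one worth looking for in your own estimation data.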

How about a scenario where development teams are given a performance target to complete their sprint commitments for 80% of sprints during the year. Will this result in better focus and increased attention to results? Or will teams intentionally (or subconsciously) overestimate their work to meet the target?

What if the ‘agile inquisition’ dictates that all user stories must be written in an “as a/I want/so that” format? Will this really get people focussing on user needs? Or will you end up with a bunch of user stories that start with “As an LDAP server”?

What rules are you beating right now? Which of your rules are being beaten?

Seeking the wrong goal

You have a problem with rule beating when you find something stupid happening because it’s a way around the rule. You have a problem of wrong goals when you find something stupid happening “because it’s the rule”. ~Dana Meadows

King Midas sought the wrong goal. He wanted to be exquisitely rich but defined his goal poorly and ended up dead. Golden, but dead.

The trap of ‘seeking the wrong goal’ often occurs when one’s progress towards a goal is evaluated via a proxy measure rather than the goal itself. In poor old Midas’s case, his pursuit of wealth was defined in terms of his ability to create gold. As he found out, turning everything you touch into gold doesn’t necessarily make you rich. The pursuit of happiness is another goal that is susceptible to this trap. One may, for example, chase happiness via the proxy measure of wealth, and while chasing wealth may well make you rich, it won’t always make you happy.

This trap is sprung when the proxy measure becomes the goal itself. We see this frequently in software development, where ‘value’ is tricky to measure objectively so we resort to proxy goals. In the absence of an objective measure of value, we may for example choose to measure velocity or ‘story points delivered’. This is horribly unwise as velocity is completely unrelated to value and it is easy to game, especially if teams are held accountable for its increase. Incentivising velocity increase may actually result in a decrease in value as perceived by clients as teams focus all their efforts on appearing to create value rather than actually creating value.

Other examples of seeking the wrong goal in software development:-

  • Stretch goals or unrealistic deadlines. Some managers believe that setting unrealistic deadlines will encourage focus and commitment. Unfortunately when the deadline becomes the primary goal, meeting user needs is relegated to being a secondary concern. Such managers shouldn’t be surprised when the deadline is met but the user is left wanting.
  • Seeking to be “more Agile” (capital ‘A’ intentional). The agile mindset is (in my opinion) super valuable in helping teams to be more effective and in helping to nurture a more collaborative, humane and value focused organisation. However “doing Agile” can easily become the goal rather than the path to achieving the goal. There is nothing more destructive to agility than confusing ‘Agile’ with the goal.

Systems, like the three wishes in the traditional fairy tale, have a terrible tendency to produce exactly and only what you ask them to produce. Be careful what you ask them to produce.

Summary

Writing this rather lengthy essay has really helped me to reconcile some negative systemic archetypes with my own experience as a software engineer. I’d be really interested to hear about your experiences in facing these traps and how you overcame them. Would you be willing to share your experiences?

Part II follows this post with some answers.

Further reading

Follow me on twitter @smrimell
