When used for good, increments and work-in-progress constraints serve as forcing functions to trigger and guide continuous improvement efforts. There are many definitions for forcing function out there, but this definition from the Interaction Design Foundation stood out:
an aspect of a design that prevents the user from taking an action without consciously considering information relevant to that action. It forces conscious attention upon something (“bringing to consciousness”) and thus deliberately disrupts the efficient or automatized performance of a task
Though the definition is rooted in interaction design, its focus on conscious attention and deliberate disruption is valuable for how we approach continuous improvement in software product development. Many questions emerge. Do our forcing functions cause us to stop and think deeply about the situation? Do we choose to experiment with a new path? Or do we check out, complain a bit, and go back to business as usual?
One of our first stumbling blocks is how we perceive the idea of continuous improvement. Why must we improve? What’s wrong?
For individual contributors, there’s a sense of deficiency-in-need-of-fixing…a kind of Sisyphean slog with ever diminishing returns. Improvement implies — to many people — that something is flawed (“well Team X is kicking ass, but Team Y_____”). In addition to the “actual work” is a whole other gig: persuading management that you’re trying, and that you want to “get better.” It’s all front-line team focused, because front-line teams are easier to control and run experiments on. Between the 360 degree reviews, employee surveys, 1:1s, and retrospectives…nothing ever seems to change.
When I discuss continuous improvement with individual contributors, I typically hear some version of…
You want us to be more efficient, ship faster, save money, and have more “predictability,” whatever that means! But we’re so busy and under-resourced as it is. It feels like the focus is just on us…go faster, screw up less, and more hands-on-keyboard. The reality is that there are many external factors here, many of which are beyond our control. Shipping crap ideas fast is still shipping crap. And how do you measure this? How does that relate to performance management? Who sees the data?
You get a sense here about what we are up against: trust, local vs. global maximums, incremental vs. step changes, the locus of control, career identity, and “fairness”.
In Continuous Quality Improvement in Higher Education, John R. Dew and Molly McGowan Nearing describe continuous improvement as:
a focus on “learning how a system functions and on improving the performance of the system.”
This wonderfully simple definition is free from the shackles of myopically reducing variance, conformity to requirements, maintaining control, and meeting standards. Continuous improvement may involve quality assurance, but it is not limited to quality assurance. The idea of shining a light on the system and learning about its inner-workings is very much in keeping with the notion of “bringing to consciousness”.
By “learning how a system functions” and delving into how that contributes to performance, we are also in a position to participate in double-loop learning. In the pioneering paper Teaching Smart People How to Learn, Chris Argyris writes:
First, most people define learning too narrowly as mere ‘‘problem solving,’’ so they focus on identifying and correcting errors in the external environment. Solving problems is important. But if learning is to persist, managers and employees must also look inward. They need to reflect critically on their own behavior, identify the ways they often inadvertently contribute to the organization’s problems, and then change how they act. In particular, they must learn how the very way they go about defining and solving problems can be a source of problems in its own right
Single-loop learning involves “the repeated attempt at the same problem, with no variation of method and without ever questioning the goal” (Wikipedia). Double-loop learning “includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account the changes in the surroundings and the need for expressing changes in mental models.”
Almost by definition, continuous improvement efforts must extend across the global system. And not just through 360-degree reviews, surveys, and the occasional AMA (“ask me anything”).
“Performance of the System”
This type of thinking comes in handy when we attempt to “improve the performance of the system.” What exactly is performance? And what contributes to and supports performance? Some see this task as daunting: just stick to what you can control. But how can you improve something you don’t understand?
Teams typically define performance according to prescriptive, low-level, output or basic quality driven objectives. We review our “stories” from the last Sprint, and talk about what went right and wrong. Are we on track to meet our quarterly goals? It looks like velocity is up! The sprint commitment was met! Standup isn’t working!
But…as an organization are we improving the performance of the system? Are we moving a metric that is proven to predict system performance? What is required to drive sustained performance?
Consider an athlete. An athlete must fine-tune their diet, manage risk, mentally rehearse, take recovery seriously, and keep their equipment in order…all to support breakout performances. “Health” is a prerequisite both for performance and for the ability to adapt to new challenges. Of the two forms of resilience below, which better resembles knowledge work in complex, rapidly changing socio-technical systems?
- Engineering resilience: Focuses on control and predictability, and returning the system to one pre-defined state (resuming equilibrium). Or …
- Ecological resilience: Focuses on variability and unpredictability, allowing for many possible states that match the altered environment. A resilient organization will return to an adjusted state that matches the changed environment.
In software product development, the goal is to adapt…not return to an outdated “business as usual.” So my vote is on the latter.
All this is to present the challenge: how can teams (and the organization) focus continuous improvement activities such that they 1) help us learn about the system (which involves refining our understanding of performance), and 2) improve the performance of the system, either directly or through supporting capabilities? And then periodically challenge those assumptions.
Which brings us back to increments and WIP constraints. When I hear about “Scrum not working” or “the teams can’t use WIP constraints”, I immediately ask about the culture of continuous improvement. Are forcing functions used for good or evil? Are they effective? For example…
- Do teams and the organization take the constraints seriously?
- Are teams learning about the system through the use of the forcing function? For example, a team might come to understand the impact of upstream variables or learn about the challenges of high WIP.
- Are people actively challenging the definition of “performance”? Are teams beginning to converge on definitions that they understand and support?
- What is happening on the global level to address team impediments? How much are teams actually in control of their destiny?
- Is the team punished for “botching” a sprint?
- Is the team attempting to deliver an increment to production by the end of the Sprint? Is the team respecting the WIP constraints?
- If a sprint is unsuccessful, does the team shorten the sprint and/or reduce the amount of work attempted?
- Can the team focus on removing the actual blockers to performance (e.g., help another team remove an impediment)?
- Is the team mapping a performance metric within their control to a performance metric that matters for the business?
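One way a team can “learn about the system” through a WIP constraint is to reason about it quantitatively. A minimal Python sketch, using Little’s Law (average cycle time = average WIP ÷ average throughput) with illustrative numbers of my own choosing, not figures from any real team:

```python
# Little's Law for a stable system: cycle time = WIP / throughput.
# With throughput held fixed, every additional in-progress item
# lengthens the time each item spends in the system.

def avg_cycle_time(avg_wip: float, throughput_per_week: float) -> float:
    """Average cycle time in weeks, given average WIP and weekly throughput."""
    return avg_wip / throughput_per_week

# A hypothetical team that finishes 5 items per week:
for wip in (5, 10, 20):
    weeks = avg_cycle_time(wip, 5)
    print(f"WIP = {wip:2d}  ->  average cycle time = {weeks:.1f} weeks")
```

Running the sketch shows cycle time doubling from 1 to 2 weeks and again to 4 weeks as WIP climbs from 5 to 20 — a concrete version of “learning about the challenges of high WIP” rather than treating the constraint as an arbitrary rule.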
In closing, keep this definition in mind:
[Forcing functions] forces conscious attention upon something (“bringing to consciousness”) and thus deliberately disrupts the efficient or automatized performance of a task