The Five Focusing Steps (for when the constraints are people)
I don’t normally read textbooks just for fun.
Ok, that’s not true. I wouldn’t normally read this textbook just for fun. It is, however, excellent reference material and worth a skim every once in a while.
The textbook is Bill Dettmer’s The Logical Thinking Process, which aims to articulate and standardize the practices described by Eliyahu Goldratt in the various forms of literature on the Theory of Constraints (TOC) he wrote over the years. A friend expressed interest in having a small study group on the topic, so we’ve forged ahead, textbook and all.
It’s going well. And by well, I mean to announce victoriously that I’ve managed to read the things I said I would, which is more than I can say for most things I try to read… as evidenced by the growing pile of books on my desk, on my filing cabinet, on my workbench, etc…
This reading overlaps with many of the ideas I must study and apply in my work with various venues, and I’ve begun to notice a recurring struggle with the factory/machine metaphor embedded in TOC. In particular, I keep wrestling with how we apply TOC to human systems and, more specifically, with what to do when a constraint is human.
I in no way think I have the answer, and I’m sure what I’m about to say is premature convergence. However, in the interest of getting something out there, I’m going to stake a weak belief (strongly held) into the ground.
> By “exploit,” Goldratt means we should wring every bit of capability out of the constraining component as it currently exists.
>
> — H. William Dettmer, The Logical Thinking Process
In the Theory of Constraints as described by Dettmer, the Five Focusing Steps are:
- Identify the System Constraint
- Decide How to Exploit the Constraint
- Subordinate Everything Else
- Elevate the Constraint
- Go Back to Step 1, but Beware of “Inertia”
I am concerned about the application of these steps to human systems — when the constraint is a human or humans. I find value in contradiction, so I’ll also state that I am absolutely sure these steps are technically correct while pointing at the language (and maybe the practical instruction) as being broken.
Dettmer defines a constraint as “any element of a system or its environment that limits the output of the system.” We often hear the “weak link” metaphor in TOC, but I’ll try to change the language here and state that being a human constraint isn’t about being a weak link… It’s about being so central that everything slows down if YOU slow down. (To me, human systems automatically imply complexity, so best of luck to anyone trying to apply metaphors that imply linear causality anyway.)
In TOC, the factory and machine context is inherent. But humans are different from machines. They think and they feel. They get tired. They get sick. And when they break down, they don’t always recover to the same state, but perhaps a new one.
More language changes inbound… People aren’t to be exploited or maximally utilized but cared for and enabled. What is human potential if not something that completely escapes the metaphor of machinery? And unlike machines, the attributes of human success are things like health, well-being, enlivenment, and performance of best work, not utilization or even throughput. The objective must be for the human constraint to operate at a sustainable pace, with respect to these attributes. Throughput be damned in the knowledge-creating organization.
I think there are also important implications to explore with respect to what it means to subordinate a human system to a human constraint. The system is composed of a network of relationships, and it is constantly in a state of potential, ready to be activated by the right stimulus. The network around the human constraint can do wonders to enable sustainable pacing and performance of best work. The trick is to sensitize all the other humans in the network to the signs of success and failure so they can make decisions accordingly.
I’m reminded of an article (the name or source of which I unfortunately can’t recall) that suggested openly sharing your career goals with others at work in order to enable them to better support you in pursuing those goals. For example, a colleague might be able to better remember your name and interest when something related to a goal comes up. That’s what I mean by network sensitization and activation.
The tangible network behavior in this respect probably looks like routing information differently, redefining the absolutes, being willing to adjust what success looks like, changing the game to match the new constraints, challenging old rules and policies, etc. And of course, if the network becomes less efficient overall but the human constraint is finding the right pace and doing their best work, then IT’S WORKING.
To further improve performance (to elevate), we need to teach and spread responsibility. Hire people to fill gaps. Maybe even reorganize structures according to the new way of working, all in pursuit of enabling the human constraint.
Finally, we need to enable continuous adjustment: the network must stay aware and activated, ready to act when needed. We need to watch for when the constraint moves to another human. And, unlike machines, humans are likely to slip into old ways unpredictably, so previous constraints might need ongoing help from the network to remain in a place of sustainability.
Here’s what I propose as the Five Focusing Steps, for when the constraints are people:
- Identify the human who, if they worked just a little less, would cause the entire system to slow down.
- Help them find a sustainable pace and figure out how to work only on the right stuff.
- Activate the network around the human constraint so it is sensitized to their needs and makes decisions that enable their new, sustainable pace.
- Now, elevate. Make substantial changes to the system. Teach, delegate, expand the capability of the network, reorganize, hire.
- Repeat. Who else now needs this treatment?