Stop measuring effort

Zac Beckman · Published in wcs-na · Sep 2, 2023 · 5 min read


Somewhere, someone made a terrible mistake.

It didn’t start out as a mistake. As with most innovations, it started with an attempt to make something better. A grand experiment, with the intended outcome of radically improving software development.

And, as with any experiment, it seemed like a good idea at the time. Experimenting is how we improve and move forward.

This experiment had roots going back to the Industrial Era and its manufacturing processes. As Felix Lorenzo Torres and Herbert Benington theorized¹ in the 1950s, we could apply those well-understood manufacturing processes to software development and achieve gains in reliability. The basic idea seemed to make sense, and it looked pretty elegant:

Waterfall Process (circa 1956)

The idea of applying this simple, cascading process to software seemed brilliant. It worked great in other industries — aircraft and automobile manufacturing, publishing, food production. Why not software?

Naturally, the devil is in the details. And there are so many details. Those details began to surface problems as the “waterfall process” was adopted across the industry. For example, what happens when you put your software into production and discover a bug in the system? Well, naturally, you need to add a feedback loop: you’ve got to send that feedback back into the process and build a new version of the software.

Winston W. Royce, a director at the Lockheed Software Technology Center, wrote extensively about these problems in 1970,² ultimately concluding that waterfall had a major flaw: testing only happened at the end of the process, an approach that in his words “is risky and invites failure.”³

In describing the problem, he modeled what waterfall really looked like in a software context. It was something like this:

Waterfall in all its simplicity

And, as he pointed out, it was risky and invited failure. By the time you add all the necessary feedback loops, the process carries so much overhead that, by many measures, it is unworkable.

Ah, if only we had listened to Winston.

Flash forward to today. We still struggle with an industry torn between waterfall and the new kid on the block, “Agile,” in all its varied forms. We are, in short, still experimenting wildly. The software industry has by no means standardized on a uniform, proven, repeatable, reliable way to build software.

Ron Jeffries’ relatively recent article on abandoning Agile⁴ makes some excellent observations about how far off the mark so many companies are.

More recently, we continue to flirt with industrial-era thinking. Take McKinsey’s overly complicated method⁵ for distilling developer productivity into measurable metrics. While there are some nuggets of gold in McKinsey’s process (such as relying on DORA-4⁶ to measure impact, reliability, and repeatability), it is, by and large, a mishmash of sometimes useful and generally harmful practices.

For example, optimizing for “contribution analysis” and minimizing “interruptions” will surely drive management to eliminate some of our most effective mentors and thought leaders,⁷ ultimately hurting the team, not helping it. The lesson here is that we need to stop trying to distill individual developer productivity to a number. The outcome is inevitably bad. Instead, we need to measure impact. In their response to McKinsey’s nonsense,⁸ Kent Beck and Gergely Orosz succinctly describe why measuring impact, not effort, is the right approach (Ron Jeffries also has some good thoughts on it⁹).

Courtesy of Kent Beck / Software Design: Tidy First?
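To make “measure the impact” concrete, here is a minimal sketch, in Python, of what the DORA-4 “four keys” look like when computed as team-level outcome metrics rather than individual effort counts. The record shapes and field names are hypothetical; real tooling, such as Google’s Four Keys project,⁶ derives this data from CI/CD pipelines and incident-management systems.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import List

# Hypothetical record shapes; real four-keys tooling derives these
# from CI/CD pipelines and incident-management systems.
@dataclass
class Deployment:
    committed_at: datetime   # when the change was first committed
    deployed_at: datetime    # when it reached production
    caused_failure: bool     # did this deploy trigger an incident?

@dataclass
class Incident:
    started_at: datetime
    restored_at: datetime

def four_keys(deploys: List[Deployment],
              incidents: List[Incident],
              window_days: int) -> dict:
    """The DORA four keys over a reporting window (assumes non-empty
    inputs). Every metric describes the team's outcomes in production,
    not any individual's output."""
    lead_times = [d.deployed_at - d.committed_at for d in deploys]
    restore_times = [i.restored_at - i.started_at for i in incidents]
    return {
        # Throughput: how often working software reaches users.
        "deployment_frequency_per_day": len(deploys) / window_days,
        # Speed: typical time from commit to running in production.
        "median_lead_time": median(lead_times),
        # Stability: share of deploys that cause a failure.
        "change_failure_rate": sum(d.caused_failure for d in deploys) / len(deploys),
        # Resilience: typical time to restore service after a failure.
        "median_time_to_restore": median(restore_times),
    }
```

The point of the sketch is the shape of the inputs: nothing in it knows who wrote which commit or how many pull requests anyone filed. It only knows whether the team shipped value, and how reliably.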

Think about it. How do we measure the relative success of most research and development initiatives?

In medicine, we perform relatively small experiments and observe the outcomes. If an experiment yields a quality-of-life improvement (in other words, meaningful impact), we advance to the next step. In developing new energy sources, we don’t try to solve the big picture on the first go; we experiment, creating incrementally better sources of energy along the way. This applies to old energy too: oil companies don’t know whether a drilling operation will be profitable, so they fund several small exploratory drills (experiments), knowing that some of these will eventually yield promising results.

This is the nature of research and development. Experiment, determine the impact, and then iterate.

And this is where someone, somewhere, made a terrible mistake.

Software development is not a manufacturing process; it’s a research process. It is far more like inventing a new medicine than assembling a car on an assembly line.

We don’t measure scientists by the number of test tubes they use in a day. Let’s not try to measure developers by how many pull requests they make or how many “interruptions” they have. Instead, let’s focus on the impact of what the team is doing. That means focusing on the goal, the value proposition right from the start — and measuring whether we actually delivered that value in the form of impactful software.

And while we’re at it, if it turns out the value and impact doesn’t materialize after the coders have coded it, don’t blame the messenger. Why don’t we take the conversation back to the product owner who came up with the idea in the first place?

We’re coming up on 70 years in the software industry. I hope we can get our act together soon.

Originally published at https://blog.bosslogic.com.

[1] Wikipedia, “Waterfall model”: “The first known presentation … was held by Felix Lorenzo Torres and Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956.”

[2] Wikipedia, “Waterfall model”: the first known diagram describing the process is “[often] cited as a 1970 article by Winston W. Royce.”

[3] Royce, Winston W. (1970), “Managing the Development of Large Software Systems” (PDF), Proceedings of IEEE WESCON, 26 (August): 1–9.

[4] Ron Jeffries (May 10, 2018), Developers Should Abandon Agile.

[5] McKinsey & Company (2023), Yes, you can measure software developer productivity.

[6] Google (2020), Are you an Elite DevOps performer? Find out with the Four Keys Project.

[7] Dan North & Associates, Ltd. (2023), The Worst Programmer.

[8] Software Design: Tidy First? (2023), Measuring developer productivity? A response to McKinsey.

[9] Ron Jeffries (Aug 30, 2023), Developer Productivity?
