#NoBigProcesses — Introduction

Dave Rooney · Published in The Startup · Sep 11, 2019

[Edit, Oct. 1, 2019: After feedback from a number of people, I’ve changed the hashtag from #NoProcesses to #NoBigProcesses.]

I’ve been involved with building software in some manner since I plunked out my first program in BASIC on an Apple ][ in 1981. I’ve seen so many trends come and go that I’ve probably forgotten more than I still remember. One trend that has had some staying power is Agile Software Development.

When I first learned about Extreme Programming (one of the many processes under the Agile “umbrella”) in 2000, many of its principles and practices resonated with approaches that had worked, and worked well, in my previous experience. While I had recognized these approaches before then, from that point on I started paying much closer attention to how people come together to build and deliver systems.

This is the culmination of what I’ve learned in the nearly 20 years since finding XP, representing the second half of my career in software.

process: (Noun) A series of actions or steps taken in order to achieve a particular end.

Just as the #NoEstimates movement questions the prevailing approaches to estimating software delivery efforts and #NoProjects questions the approaches to how we organize ourselves around and fund those efforts, #NoBigProcesses challenges the assumptions behind and approaches to the steps we take to deliver systems.

I want to assure you, though, that I don’t mean that all process is bad! But let’s challenge some of the conventional wisdom that has given us the popular software delivery processes we use today.

Why Use a Process? How Does it Help?

The traditional waterfall process model defined by Winston Royce in 1970. Note, though, Royce’s first comment after the diagram.

This diagram represents the Waterfall model for delivering software. Although it had been practiced before, it was first defined by Winston Royce in 1970 and is still being taught in computer science programs in 2019. This particular diagram appears at the top of page 2 of Royce’s paper, which then goes on to explain why the approach is simplistic, and iteration is required to properly handle what we discover as we actually start to build the software. This simple model did, however, outline the steps that need to be taken for that delivery, assuming that each step was indeed complete before continuing to the next.

Despite being almost 50 years old and originating from a time when computers took up entire rooms and had only a tiny fraction of the computing power of one of today’s smartphones, this approach is still alive and seemingly thriving. Even organizations that believe they have “gone Agile” are still subconsciously clinging to waterfall via practices like Change Control Boards and many facets of ITIL (or, to be fair, their implementation of ITIL).

Project management practices are often tacked on before requirements gathering begins and intertwined with each step. Concepts such as Earned Value Management are used to provide some notion of a warm, fuzzy feeling that the work being done will some day satisfy a business need. Work items are broken down, estimated and tracked closely to ensure that the project is on time and on budget.

All of the steps and activities are codified as part of the Process that can be applied to many different teams doing different work in different domains. The Process is there to remove risk & variability and to provide certainty and repeatability. THAT is why we want a Process!

And yet, projects are still late, risk is still incurred, variability creeps in and repeatability is the punchline to a bad joke.

Why? Because the Process doesn’t account for the people involved!

What About The People?


I’ve often said that the business of delivering software would be much easier if it weren’t for people! Joking aside, in any situation where you have people working together you have to understand that we humans are variable. Even if you have two teams with people with the exact same skill sets and experiences, those teams will behave differently because the people are different.

Supporting that idea, Alistair Cockburn wrote a paper in 1999 titled “Characterizing People As Non-Linear, First-Order Components In Software Development” in which he stated,

In the title, I refer to people as ‘components’. That is how people are treated in the process/methodology design literature. The mistake in this approach is that ‘people’ are highly variable and non-linear, with unique success and failure modes. Those factors are first-order, not negligible factors.

We’re variable over the space of years, variable over the space of months, variable between weeks, days and even hours! So any process that sets out to eliminate or simply ignore the variability of people is doomed from the start to experience severe challenges or outright failure.

A Cautionary Tale — The Rational Unified Process (RUP)

In the late 1990s, Rational Software Corporation (later acquired by IBM) sought to unify various approaches to defining object-oriented systems. In addition to the object-oriented aspects, they added various “disciplines” in order to provide what they believed was full coverage of the lifecycle of software systems. Rational’s approach to this endeavour was to acquire companies and the tools they offered in order to augment their own existing tool suite.

The first true version of what became the Rational Unified Process (RUP) was released in 1998, and was based on 3 pillars:

  • a tailorable process;
  • tools that assisted and automated the application of the process;
  • services that augmented adoption of the process.

While none of this is bad, and it indeed created an excellent business model for Rational, there were issues from the start.

The primary problem was the first point above — that the process was tailorable. As a consumer of RUP you were given everything possible from a process perspective to cover every possible contingency in every possible context. You were then coached to remove what you didn’t need. The problem was that pretty much no one ever removed nearly as much as they could have, and a bloated process was the result.

With respect to people, RUP described the roles required within its framework. There were so many that they had Role Sets, each containing multiple roles, with a total of 26 different roles defined. There was also an Additional Role Set for anything that didn’t fit under the Analyst, Developer, Tester or Manager Role Sets. One of the roles in the Manager Role Set was the Process Engineer, whose job was to manage the process for the benefit of everyone involved. That alone is a hint that there’s probably too much process!

While the RUP documentation explained that a role doesn’t necessarily equate to one individual, it also didn’t spend much time explaining that you didn’t need every role every time for every project. Again, this promoted the bloating of teams to handle the sheer number of roles that were defined.

While RUP enjoyed a number of years of popularity, the burden of its weight resulted in implementations that were challenged and even outright failures. Today, in 2019, it’s effectively dead.

How Do We Fix This?

What would it look like if we started with… nothing?

Much of what we know about software delivery process is derived from what we learned about manufacturing. That first, flawed assumption saw building software as identical to assembling widgets on an assembly line. Spoiler alert! It’s not. Software delivery is knowledge work, which is equivalent to how the widgets were designed, not how they’re assembled.

But let’s go even deeper and challenge a more fundamental assumption — that we need a process at all. What would it look like if we started with… nothing? What if we reduced our activities around delivering software to just the bare minimum required?

In my experience over many years and numerous business domains, there are really only two key activities that are needed to deliver software successfully:

  1. Ship something!
  2. Reflect honestly on how you shipped in order to improve.

There are innumerable ways to perform these activities, but they are the essence of successful software delivery.

In future posts in this series, we’ll explore the activities of Ship and Reflect and how to use them to build just enough process to work for you in your current context. We’ll also explore when it makes sense to add or remove from your process.

The ultimate goal isn’t to create a canned approach that you can apply everywhere in all domains, but rather to provide you with the thinking tools and guides that allow you to identify just what you need to be effective. That may not sell many 2-day certification courses, but I’d much rather the software industry simply became more effective at shipping software, especially given our huge and increasing dependence on it.

The next instalment in this series is #NoBigProcesses — Two Key Activities.

Dave Rooney
Veteran Agile/Lean Coach, Manager & Software Developer. I’ve never met an assumption that I didn’t challenge!