A system for debugging organisations and processes

Ben Gracewood
Published in WE ARE LIGHTSPEED · Jun 6, 2015


This post is an addendum to the talk I’m giving at Agile Australia on June 18.

While updating Vend’s career guides, I was reviewing Rent the Runway’s engineering ladder. One thing stood out to me under the “technical skill” category for VP of Engineering:

Greatest technical strength is debugging organizations and processes

I’d like to dig into this a bit, and introduce a system for debugging organisations and processes that can work for any VP of Engineering.

Heavy process is a great way to get average results

We already have access to a number of ideas about organisation and process optimisation. If you want to drain all the energy and empowerment from your teams, you could go down the route of SAFe or PRINCE2. Before you do, I recommend reading up on Dave Snowden’s work on Cynefin and understanding why heavy process is a great way to get average results.

At a more meta level, concepts like Conway’s Law and Dunbar’s Number can inform us about the limits of self-organising team structures, and the ways in which we might want to shepherd this self-organisation. These higher-level concepts are often reflected in the exemplar organisations we see frequently in discussion: Spotify, Netflix, GitHub, and the rest. The common factors across all great software organisations are small, focussed, loosely coupled teams with high alignment.

Small things, loosely coupled.

If this were software, we’d immediately start talking about bounded context, microservices, and connascence. If you haven’t learned about connascence, I highly recommend you watch Josh Robb’s video from Codemania 2015 before you read on, because I’m going to do a terrible job of it.

Basically, connascence is a better way to describe the “tight coupling” problem. It’s not a rulebook, and it’s not black and white, but it does give us a system for reasoning about the complexity introduced by co-dependent components. Sometimes it’s great (or at least necessary) for components to be connascent, and sometimes it’s terrible; this changes depending on how distant the components are. By observing and categorising connascence, we can triage and prioritise the areas of the system where we should concentrate on improvement.
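To make that concrete in code (a minimal sketch of my own, not something from Josh’s talk), here is the classic illustration: connascence of position is stronger than connascence of name, and refactoring from one to the other reduces the coupling between caller and function.

```python
# Illustrative sketch: the same function expressed with stronger and weaker connascence.

# Connascence of position (stronger): every caller must know the argument order.
def create_sale(sku, quantity, price, discount):
    return {"sku": sku, "quantity": quantity, "price": price, "discount": discount}

# A caller that swaps two arguments still runs, but silently does the wrong thing.
broken = create_sale("SKU-1", 2.50, 3, 0.1)  # quantity and price transposed

# Connascence of name (weaker): callers only need to know the names, not the order.
def create_sale_named(*, sku, quantity, price, discount=0.0):
    return {"sku": sku, "quantity": quantity, "price": price, "discount": discount}

ok = create_sale_named(sku="SKU-1", price=2.50, quantity=3)  # order no longer matters
```

The same trade-off shows up again below, when we look at swapping strong team couplings (deadlines) for weaker ones (vision).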

Co-dependence. Areas for improvement. Triage and categorisation. Hmmm.

There’s no parent here to pick up those dirty socks

People are hard. That much is true, but it strikes me that we accept people and team problems as “difficult” and counter this by implementing process. We do this without spending time to observe, analyse and improve the small-scale interactions that end up causing macro problems. How often have you observed a failure or slippage not as the fault of the specific product team, but as a result of the context given to that team and the external influences on it? Pretty much always.

What if we had a set of tools that allowed us to reason about the complexity caused by context and coupling of teams?

My observation is that we can take things like bounded context and connascence from software and apply them to teams and interactions with considerable success. Let me describe how.

Bounded Context

In product land, it’s common for a team to be considered the sum of its people and deliverables. “The inventory team” or “the ecommerce team” are a couple of ours. This belies the complexity of a team’s context, and drives us to make simplistic decisions like moving people between teams based on the priority of features. In reality, a team’s context is far more complex, especially when you consider teams further from the front line.

We’ve found some success in deliberately documenting each team’s bounded context. In a rapidly evolving startup, a bounded context is analogous to Cynefin’s enabling constraints: it removes just enough uncertainty to allow teams to innovate in an uncertain environment.

It’s early days, but we’ve been running “Is/Is not” exercises with teams to help them understand what they should and shouldn’t be concerned about. These exercises bring clarity and tend to help with the “dirty socks” situation: there’s no parent here to pick up the dirty socks you leave in the corner. They’re in your context, so no one else is going to help; the team needs to create ways to cover every requirement in its context.
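As a purely illustrative sketch (the team name and entries below are invented, not Vend’s actual contexts), the outcome of an “Is/Is not” exercise can be captured as simple structured data that anyone can consult when the “is that our problem?” question comes up:

```python
# Hypothetical example of a team's bounded context, captured after an "Is / Is not"
# exercise. The team name and entries are invented for illustration.
inventory_team_context = {
    "is": [
        "stock levels and stock movements",
        "supplier ordering workflows",
        "the inventory APIs other teams consume",
    ],
    "is_not": [
        "pricing and promotions",      # covered elsewhere, so "not our problem"
        "the ecommerce storefront",
        "billing and payments",
    ],
}

def is_ours(concern: str) -> bool:
    """Crude check for 'yes, that's in our context' vs 'no, that's covered elsewhere'."""
    return concern in inventory_team_context["is"]
```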

Having a clear bounded context lets teams take on decisions that were traditionally driven by process and management. Decisions on annual leave and team size (within the bounded context of budget) now sit with the teams, because they know exactly who they need to get shit done. Overall cognitive load also drops when you can clearly say “no, that’s not our problem” and know that it will be covered elsewhere, rather than worrying and roving around to find an owner.

Coupling and Connascence

Each time we run a post-mortem to assess a breakage or a poor product decision, we inevitably find that the best of intentions have been influenced by multiple interconnected tendrils of … stuff. An inter-team promise caused one team’s deadline to impact multiple other teams. A delivery decided months ago was built without reviewing the changing market between decision and release. None of this is terrible, but it all comes about because of team dependencies. Team connascences.

Assuming you’ve got a great bounded context for a team, what things can influence that context? Are there experts in the team that are called on by other teams for advice and implementation? Is the team reliant on another team for tooling or APIs? Are teams well connected to customers and to the market such that they can detect and respond to changing requirements?

This sounds very familiar. There are some connections that are stronger than others. There are some that we prefer, and others that we want to reduce or remove. Defining these connascences is a work in progress: we’re still discovering more. I’d love to hear your ideas on this, but here’s what we’ve come up with:

Static Team Connascences (things that can be detected and changed at “compile time”, in-place in a quantifiable way):

  • Platform (languages, databases, devops, machines shared between teams)
  • Location (co-located teams)
  • People (the same person required on multiple teams)

Dynamic Team Connascences (things that are influenced externally and only become apparent at “runtime”):

  • Vision (shared vision between teams — weakest but most important?)
  • Scope (teams relying on each other for in-scope requirements)
  • Deadline (one team relying on another’s deadline; strong, bad connascence — we need to avoid this one)

So, like connascence in software, we can prioritise by identifying strong connascences (e.g. a shared deadline) and working on ways to reduce them to weaker ones (a shared vision).
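Here’s a rough sketch of how that triage could be captured. The strength ordering and the example couplings are my own illustrative assumptions, not a published model:

```python
# Illustrative sketch: record team couplings against the taxonomy above and
# surface the strongest ones first. Strength ordering and examples are assumptions.
from dataclasses import dataclass

# Weakest to strongest, loosely following the lists above.
STRENGTH = {"vision": 1, "location": 2, "platform": 3, "people": 4, "scope": 5, "deadline": 6}

@dataclass
class Coupling:
    teams: tuple          # the teams involved
    kind: str             # one of the keys in STRENGTH
    note: str = ""

couplings = [
    Coupling(("inventory", "ecommerce"), "deadline", "launch gated on API changes"),
    Coupling(("inventory", "ecommerce"), "vision", "shared quarterly goal"),
    Coupling(("platform", "ecommerce"), "people", "same engineer on both rosters"),
]

# Triage: work on the strongest connascences first, aiming to replace each with a weaker one.
for c in sorted(couplings, key=lambda c: STRENGTH[c.kind], reverse=True):
    print(f"{c.kind:>8}: {' <-> '.join(c.teams)} ({c.note})")
```

The code itself isn’t the point; once couplings are named and ranked, “reduce the deadline coupling to a vision coupling” becomes a concrete, repeatable conversation.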

A system for debugging organisations and processes

Connascence is not a perfect fit for team interactions. What drew me to it was the clarity of the system it provides for reasoning about the complexity introduced by co-dependent components. My assertion is that there is value in a similar system for teams, organisations, and processes: a taxonomy with which we can discuss the common and recurrent failures inherent in fast-scaling agile engineering teams.

Let’s move from “oh wow yeah we fucked that one up” to “Aha, we’ve seen one of those before — it’s a deadline coupling and we can fix it in these ways…”. Let’s build a system for debugging organisations and processes in a way that empowers teams and individuals, instead of crushing them with rules and processes.

My strongest hope is that we end up in a place where — at an organisational level — instead of manipulating features and sprints and deadlines, we discuss solutions (jobs to be done!), people, and interactions.

NB: Keep an eye out for a follow-up post. We have some early success stories from applying this model at Vend that I will publish once I’ve shared them with the Agile Australia conference.
