Unlocking Predictability: Navigating Little’s Law Assumptions (#3 In Series)

Matthew Croker
7 min read · Nov 1, 2023


Too Long? Here’s a summary

Previously in this series we have ventured into gaining a better understanding of the assumptions behind Little’s Law, which the table above summarises. Next comes the question:

“How does my team’s flow behave in terms of each of LL’s assumptions?”

This article delves into the practical application of Little’s Law (LL) assumptions. Our goal is to empower teams and organizations to enhance process stability and predictability. We’ll explore how LL assumptions can be translated into actionable rules and applied to real-world data. By doing so, we aim to provide a framework for coaching and improvement, fostering a multi-dimensional, predictable, and reliable organizational culture.

My hypothesis is that by joining theory with real-life observations, a team or organization will gain a better understanding of how to make their processes more stable, and eventually more predictable.

In the sections that follow, you will discover my efforts to translate these assumptions into numerical rules, which I subsequently used to validate and evaluate my dataset.

From Assumptions to Rules

Here is my reasoning:

  1. Average Arrival Rate = Average Departure Rate — this one was easy, as it is practically a direct interpretation from English to math.
  2. Number of Jobs Assigned to Abandoned State = 0 — I have added this rule following a discussion with Prateek Singh about the experiment being described here. One way of capturing jobs that have gotten lost or never departed from the system is by having the jobs marked as abandoned. Among the many ways this can be done, tagging a job as “Cancelled” or “Aborted” is one of them. In this rule, “Cancelled” and “Aborted” would be the Abandoned States.
  3. There are no current jobs whose Work Item Age > 99% of Historical CT** — The double asterisks next to the 99% show that 99 is an almost arbitrary number. The logic behind this rule is that if a job is currently aging so much that it has reached the limits of my system’s known behavior, then that job has a high risk of being silently abandoned. Many of us have seen at least one job in our careers that a team or organization has simply given up trying to chase or understand. These jobs keep occupying the board, but nothing else happens with them.
  4. Average Weekly WIP is Constant Across Data Set Observed** — Here we want to ensure that the amount of work started is stable throughout the period of observation. In #1 we already check the synchronicity between the arrival and departure rates; this rule can be seen as a companion to that check. We require the amount of work in the system to remain constant so that we have neither clogging nor starvation. ** Weekly is an arbitrary frequency.
  5. Average Weekly Work Item Age is Constant Across Data Set Observed** — In this last assumption we want to observe the aging trends of the jobs started. Operationally, this is typically monitored and managed using an aging chart. To evaluate the data over a period of time, we can rework the job aging in retrospect and see whether there were any fluctuations. Our base assumption is that for a system to be considered stable, this needs to be constant. ** Weekly is an arbitrary frequency. (A code sketch of all five checks follows this list.)
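To make the rules concrete, here is a minimal sketch of how they could be expressed as boolean checks over an exported dataset. The column names (arrival_date, departure_date, status), the pandas implementation, and the tolerance values are illustrative assumptions on my part, not the exact logic of the tool described later.

```python
import pandas as pd

# A minimal sketch, assuming one row per work item with the columns
# arrival_date, departure_date (NaT if still open) and status.
ABANDONED_STATES = {"Cancelled", "Aborted"}          # rule #2; the state names are examples

def evaluate_rules(items: pd.DataFrame,
                   rate_tolerance: float = 0.10,     # arbitrary margins of error;
                   trend_tolerance: float = 0.20) -> dict:  # the team defines them
    """Return a rule -> True/False map for the five checks described above."""
    today = pd.Timestamp.today().normalize()
    weeks = pd.date_range(items["arrival_date"].min().normalize(), today, freq="W")
    n_weeks = max(len(weeks), 1)

    # Rule 1: average weekly arrival rate vs average weekly departure rate.
    arrival_rate = len(items) / n_weeks
    departure_rate = items["departure_date"].notna().sum() / n_weeks

    # Rule 3: 99th percentile of historical cycle time vs current work item age.
    done = items.dropna(subset=["departure_date"])
    ct_p99 = (done["departure_date"] - done["arrival_date"]).dt.days.quantile(0.99)
    open_items = items[items["departure_date"].isna()]
    ages = (today - open_items["arrival_date"]).dt.days

    # Rules 4 and 5: weekly WIP and weekly average age, reconstructed in retrospect.
    weekly_wip, weekly_age = [], []
    for week in weeks:
        in_flight = items[(items["arrival_date"] <= week) &
                          (items["departure_date"].isna() | (items["departure_date"] > week))]
        weekly_wip.append(len(in_flight))
        weekly_age.append((week - in_flight["arrival_date"]).dt.days.mean())

    def is_stable(values, tol):
        s = pd.Series(values).dropna()
        return True if len(s) < 2 or s.mean() == 0 else bool(s.std() <= tol * s.mean())

    return {
        "1. arrival rate == departure rate":
            abs(arrival_rate - departure_rate) <= rate_tolerance * max(departure_rate, 1e-9),
        "2. no jobs in an abandoned state":
            bool((~items["status"].isin(ABANDONED_STATES)).all()),
        "3. no open job older than 99% of historical CT":
            bool((ages <= ct_p99).all()) if len(open_items) else True,
        "4. average weekly WIP is constant":
            is_stable(weekly_wip, trend_tolerance),
        "5. average weekly work item age is constant":
            is_stable(weekly_age, trend_tolerance),
    }
```

The stability checks for rules 4 and 5 are deliberately simple (standard deviation within a fraction of the mean); any measure of spread the team trusts could take their place.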

Applying LL Assumptions to Data

With the assumptions turned into mathematical rules, my next step was to apply them to real-life data. This was another opportunity to fire up a Google Spreadsheet.

Over the years I have developed a tool that works out flow metrics from a JIRA connection into a Google Sheet. For this experiment I customized a version of this tool so that it performs the LL analysis. Below is the user flow I implemented to perform this analysis.
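The sketch below shows roughly what the JIRA-to-dataset step could look like, here producing a dataframe shaped for the rule checks above rather than a Google Sheet. Treating an issue’s created date as its arrival and its resolution date as its departure is a simplifying assumption, and the base URL, credentials, and JQL are placeholders.

```python
import pandas as pd
import requests
from requests.auth import HTTPBasicAuth

def fetch_work_items(base_url: str, email: str, api_token: str,
                     jql: str = "project = TEAM ORDER BY created ASC") -> pd.DataFrame:
    """Pull created date, resolution date and status for every issue matching the JQL."""
    issues, start_at = [], 0
    while True:
        resp = requests.get(
            f"{base_url}/rest/api/2/search",
            params={"jql": jql, "startAt": start_at, "maxResults": 100,
                    "fields": "created,resolutiondate,status"},
            auth=HTTPBasicAuth(email, api_token),
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        if not page["issues"] or start_at >= page["total"]:
            break

    return pd.DataFrame({
        "key": [i["key"] for i in issues],
        # Mapping "created" to arrival and "resolutiondate" to departure is a
        # simplification; a real workflow may map these to other status transitions.
        "arrival_date": pd.to_datetime(
            [i["fields"]["created"] for i in issues], utc=True).tz_localize(None),
        "departure_date": pd.to_datetime(
            [i["fields"]["resolutiondate"] for i in issues], utc=True).tz_localize(None),
        "status": [i["fields"]["status"]["name"] for i in issues],
    })
```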

Starting from the end, the screen that unlocks the biggest value of this feature is the one below. It lists the assumptions together with a clear Red/Green signal of whether the data breaks each assumption or whether its patterns fall within acceptable parameters.

Mathematical rules are useful because it is clear when they are broken. For instance, if the Average Departure Rate is 1 and the Average Arrival Rate is 2, then it is clear that the rule has been broken. This applies to all of the rules above.

My intention when devising these rules, however, was not to judge the data set or the state of flow, but to help teams and organizations spot opportunities for improvement in their workflows. I need this tool to be a channel for coaching, not yet another status report.

To achieve this I decided to give the tool owner control over the definition of variance. For instance, the first rule is clear: the Arrival Rate needs to be equal to the Departure Rate. Anything short of equality will be deemed as breaking the rule, unless those using the tool define an acceptable margin of error for the system.
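As a hypothetical example of what handing over that control could look like with the earlier sketch, the same dataset can be evaluated once with strict equality and once with a team-agreed margin of error; the 10% figure is purely illustrative.

```python
# Hypothetical usage: strict equality vs a team-agreed margin of error for rule #1.
items = fetch_work_items("https://your-domain.atlassian.net", "me@example.com", "<api-token>")

strict = evaluate_rules(items, rate_tolerance=0.0)    # nothing short of equality passes
agreed = evaluate_rules(items, rate_tolerance=0.10)   # the team accepts a 10% gap

for rule, passed in agreed.items():
    print("GREEN" if passed else "RED", rule)
```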

As I built this screen I imagined team members huddled up looking at their data, celebrating their achievement of reaching the acceptable margin of error and then choosing their next challenge. If this ever happens, the dream behind this tool has been fulfilled.

Triggering Fruitful Team Discussions

Here is a hypothetical scenario of how this tool can be used to trigger fruitful team discussions for continuous improvement.

  1. Open up the tool and see which of the LL assumptions your current data set is breaking
  2. Adjacent to every evaluation is the degree by which the rules are broken. Choose any of the broken assumptions, and dig deeper
  3. What are the reasons behind the mismatch between the arrival and departure rates? What is an acceptable level of difference for the team? What tangible steps can we take in order to bring them closer together?
  4. How many tickets did we cancel, reject, or abandon in any way? If these are numerous, cluster them as a team and analyse the reasons behind each cluster. If there are few abandoned tickets, take a close look at each case and try to deduce common patterns.
  5. What are the reasons behind the fluctuation in our ongoing commitments? How can we establish boundaries which allow us to be more efficient, predictable and effective?
  6. How do we govern our ongoing work? How do we bring the aging work to our awareness in our operational meetings (Dailies, Planning, …) so we can act upon it in due time?

Even if your current data set shows you that you are breaking all the assumptions, take your improvement journey one step at a time. At each step, analyse your hypothesis, bring in real results and leverage your team work in order to improve your system.

Conclusion — Introducing Actionable Agile Metrics for Predictability Vol II

Our journey through Little’s Law (LL) assumptions has equipped us to enhance process stability and predictability. While managing work in progress is vital, we must also focus on variability to create multi-dimensional, reliable organizations. Our goal is to fulfill commitments and avoid unfinished work.

This series was planned before Daniel Vacanti published his latest book, Actionable Agile Metrics for Predictability Volume II. In his book, Vacanti focuses on the different types of variation and how they can be monitored using Process Behavior Charts.

In my next article in this series, therefore, I will bring together the ideas from Vacanti’s book with those I have shared with you so far. Specifically, I will use Process Behavior Charts instead of the rules presented in this article to analyse the data set.

Until then, it’s your turn. Apply LL assumptions, explore further insights, and take action. Embrace a new dimension of organizational success and become a champion of process stability and predictability. The path to improvement awaits your footsteps.


Matthew Croker

Team Process & Data Coach | Co-Creator of Decision Espresso | Creator of Story Ristretto