Continuous Improvement the Lean Startup way

Lalo Martins
Yesterday’s Cool, Today’s Lean
6 min read · Dec 4, 2015

Continuous Improvement, or Kaizen, is possibly the main tenet of Agile and Lean. And yet, I’ve seen too many teams do it on an ad-hoc basis, often counting on guesswork to determine success.

How about we get a little more scientific about it? What I’ve been trying can be described as a variation of the Lean Startup method.

Quick Lean Startup refresher

The Lean Startup is a “scientific process” for building startups, created by Eric Ries, combining Ries’ own entrepreneurship experience, Steve Blank’s Customer Development, and Lean Manufacturing. Its principles are:

  • A startup hinges on getting three things right: a customer segment, a problem that needs solving, and a solution for that problem.
  • To get there, use an iterative build-measure-learn cycle.
  • Establish (and write down) a formal hypothesis.
  • Write down the metrics that will determine if the hypothesis is correct.
  • Build the minimum necessary to test it.
  • Measure the results.
  • Iterate.

In my own work, I prefer to phrase the cycle as “learn-build-measure”, because that better describes the process above. (In fact, the clock overlay on the article’s banner image is something I made for a t-shirt a couple of years ago.)

Measure and Learn continuously

If you’re a coach, Scrum Master, “Agile Master”, manager, or, in an environment without coach/SM-type roles, a team lead or even team member, it’s your job to find problems and improvement opportunities. If you’re not, it’s still important that you participate in this process.

Don’t wait until the retrospective. The “learn” part should be always on: keep an eye out for impediments, waste, and frustration. Write down what you find. Spend a few hours confirming and/or refining the idea. Then bring it to your coach/manager if you’re not in that role; if you are, bring it to a peer or whoever is appropriate in your organization.

Retrospectives are for discussing these findings with the whole team, and for surfacing the ones individuals wouldn’t find on their own: problems that only come up when everybody looks back together at “how well we did it” and a blip suddenly becomes visible.

Next, if you’re the coach/SM/manager/etc, switch to “measure” mode. Spend some time (possibly with a peer or superior) figuring out sensible metrics to validate the problem; then spend some more time collecting the necessary data. If you do retrospectives, bring the hypothesis and its data to the next one; if you don’t, you probably want to call a meeting about it, although it usually doesn’t require the entire team.

(As an aside, I strongly, strongly recommend you do hold retrospectives, even if you’re “pure Kanban”. Of course they won’t be one-per-sprint, but like other things in proper Kanban, find a cadence that works for you and then stick to it.)

Treat improvement work like a regular work item

This is actually more like a parenthetical in the article, since it’s tangential to the main point; it’s already part of formal Kanban, and a widely agreed upon Scrum best practice. However, I’ve seen many organizations neglect to do it, so I think it bears repeating, since it’s essential to getting the rest of the ideas in this article to work.

Any process improvement action needs to be treated like any other work — technically, after all, process improvement and product improvement aren’t that different, right? So, write an item / story / ticket for it. Put it in your Scrum or Kanban board. Make it flow through the pipeline. There are, however, two rules:

  • It should always be pulled as soon as possible: it goes at the top of the queue in Kanban, and at the top of the backlog in Scrum.
  • If you’re using “classes of service” (which I recommend) and process improvement items are their own class, they can’t have lower priority than regular work. It’s of course OK if there’s an “urgent” class (or, the more common name, “expedite”) with higher priority, but the kaizen class should have the same priority as standard items, or higher.

Also (this is more a tip than a rule): don’t give improvement work a separate class of service only because you think it requires a different flow. True, maybe most improvement items won’t need, say, UI design; but those that don’t can simply skip those columns, just as back-end changes don’t go through UI either, right?
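To make those two rules concrete, here is a minimal sketch of a backlog-ordering rule; the class names, item fields, and example items are all made up for illustration, not taken from any particular tool:

```python
from dataclasses import dataclass

# Hypothetical classes of service; lower number = pulled first.
# "kaizen" sorts above "standard" (rule 1) but below "expedite" (allowed by rule 2).
CLASS_PRIORITY = {"expedite": 0, "kaizen": 1, "standard": 2}

@dataclass
class WorkItem:
    title: str
    service_class: str  # one of the keys in CLASS_PRIORITY
    age_days: int       # how long the item has been waiting

def backlog_order(items):
    """Sort the backlog so improvement items are never outranked by standard work."""
    return sorted(items, key=lambda i: (CLASS_PRIORITY[i.service_class], -i.age_days))

backlog = [
    WorkItem("Add CSV export to reports", "standard", 3),
    WorkItem("Automate the flaky deploy step", "kaizen", 1),
    WorkItem("Hotfix: checkout is broken", "expedite", 0),
]
for item in backlog_order(backlog):
    print(f"{item.service_class:9} {item.title}")
```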

Write down your hypotheses

Don’t go in blindly. A number of cognitive biases (confirmation bias, sunk cost fallacy, …) make us tend to declare experiments a success more often than we should.

Lean Startup gets around that by making you write down the hypothesis beforehand, as well as what metrics will validate it — not only which data will be collected, but also where the success-failure line will be.
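As an illustration (not a prescribed format), the written-down hypothesis can be as simple as a small record like the one below; the field names and example values are invented for this sketch:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Hypothesis:
    statement: str     # what we believe, written before the experiment starts
    metric: str        # which data will be collected
    success_line: str  # where the success/failure line sits, decided up front
    timebox_ends: date # hard end date for the experiment

# Example: a problem hypothesis, written down before any data is collected.
review_bottleneck = Hypothesis(
    statement="Code review is our bottleneck: items wait longer in review than in development",
    metric="median days in 'In review' vs. 'In progress', over the last six sprints",
    success_line="confirmed if review wait is at least 1.5x development time",
    timebox_ends=date(2015, 12, 18),
)
```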

Create and perform experiments

Figure out what needs to be done in order to collect this data. This often involves a small prototype (keep it as small as possible while still producing valid data), adding data collection systems, or sometimes, if you’re lucky, simply compiling and processing data from the recent past.

If you’re not an expert in data processing, check whether your organization has people who are. At Ableton, I was surprised to find the Controlling team was not only willing to help me process our bug density data, but actually considered it part of their mandate. (Later they went on to do the same for a number of other improvement initiatives.)
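When you do get to work from historical data, the processing itself can be quite modest. Here’s a hypothetical sketch, assuming your bug tracker can export a CSV with the sprint each bug was reported in; the column name and the bug-density definition are assumptions for the example, not something from the article:

```python
import csv
from collections import Counter

def bug_density(bugs_csv_path, points_per_sprint):
    """Bugs reported per story point delivered, per sprint.

    bugs_csv_path: CSV export with a 'sprint' column, one row per bug.
    points_per_sprint: dict of sprint name -> story points completed.
    """
    bug_counts = Counter()
    with open(bugs_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            bug_counts[row["sprint"]] += 1
    return {sprint: bug_counts[sprint] / points
            for sprint, points in points_per_sprint.items()}

# Example use (made-up numbers):
# bug_density("bugs.csv", {"2015-22": 34, "2015-23": 29, "2015-24": 31})
```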

Also very important: give your experiment a hard timebox. Under no circumstances should you keep it running until you feel you have enough data (although stopping early is fine if the outcome is obvious). If, at the end of the timebox, the data is still inconclusive, don’t just extend it: run a new experiment, which means going through the whole experiment design process again, ideally with a peer (and a data processing expert if you have one).

Some really small experiments can get by with an abbreviated version of this process. Want to try using timeboxing devices in the stand-up? You probably don’t need to discuss that with a peer or call a meeting. Just open the next stand-up with “I want to try using timeboxes” and proceed with the experiment. Still, set yourself some success criteria, and write down both the goal and the criteria.

Iterate mercilessly

Once you have the data, don’t sugarcoat it. If it says the hypothesis was correct, move on to the next step (if this was a problem test, move on to the solution). If it failed, formulate a new one. If it says there was no problem or opportunity to begin with, inform the team and archive the whole thing.
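Continuing the hypothetical example from earlier, the decision itself can be mechanical once the data is in; the parameters and thresholds below are made up, and the three branches map onto the three options in the previous paragraph:

```python
def decide(measured, success_line, problem_floor):
    """Map experiment data onto the three outcomes described above.

    All three parameters are invented for this sketch: e.g. the measured
    review-wait ratio, the pre-written success line, and the level below
    which there is no problem at all.
    """
    if measured >= success_line:
        return "confirmed: move on to the next step (e.g. from problem to solution)"
    if measured > problem_floor:
        return "refuted: formulate a new hypothesis for the same problem/opportunity"
    return "no problem found: inform the team and archive"

print(decide(measured=1.7, success_line=1.5, problem_floor=1.0))
```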

Be advised, though: most of the time it will be the second option, at least for the first hypothesis of each problem/opportunity. If you get it right too often, that most likely doesn’t mean you’re really good; it probably means you’re playing it too safe, and getting less improvement than you could.

When Ries describes Lean Startup as a “scientific method”, I don’t believe he means so much that other startup methods are unscientific (although I suppose many startups have no method at all, which would qualify as unscientific). What I understand by that is that it is, itself, an application of “The” scientific method: formulate a hypothesis, attempt to disprove it, measure the results, repeat until you have something useful. And while one big merit of what I’m proposing in this article is that you don’t waste effort implementing improvements that don’t work, the real gain comes when you develop a strong skill at coming up with new, better hypotheses based on the previous experiment’s data.

Validate problem, then solution

In meetings, you’ve probably learned not to jump straight to solution space without making sure the problem is well understood.

The same is true for this method. Before even trying to devise a solution, perform an experiment to verify the problem (or opportunity). Even if it seems obvious, a couple of hours looking through hard data can reveal important details, and result in much greater gains than you’d have otherwise.

Don’t go overboard

After reading all this, you may have come away with the impression that you’ll be running these experiments for weeks. Well, no. That wouldn’t be very continuous, would it? It would just be a pseudo-agile version of analysis paralysis. Make your experiments no longer than strictly necessary; sometimes a few minutes will be enough. As a rule, the length will be proportional to the cost of the change in question: if the cost (or risk) is higher, you need to be very sure there will be gains to make up for it.

What is really important is:

  • Write down the hypothesis so you can’t lie to yourself later
  • Make a conscious, written decision about what hard data will qualify as confirmation
  • Commit to making decisions based on experiment data

And as with anything, know when to stop.
