Lean Drupal Development — Finish your Drupal projects in half the time

(2016 Update: I completely rewrote the presentation for Drupal North 2016, and we decided to record it as a screencast over at the Floe offices.)

Mathieu Helie
Floe design + technologies


Continue to the screencast: https://youtu.be/8rr9MH4S9X8.

(This is adapted from my talk at Drupal North 2015.)

Today I want to show you a different side of project management. We are going to look at our inventory of work, and why it matters. We need to do that because taking an honest look at inventory levels is how lean manufacturing revolutionized industry, and it is starting to revolutionize software. We are going to learn how to control inventories so they stay lean, why it pays to keep inventories lean, and what kind of project management methodology we can use to take advantage of lean techniques.

The impact of scale on workflow

First some context. Have you ever wondered what the role of the conductor in an orchestra is? It turns out the conductor is cueing the different instruments and balancing their volume. That’s right, an orchestra takes up so much physical space that there is a volume difference between different sections, and it’s someone’s job to harmonize the whole thing by telling people when to start playing, and to take it down or bring it up using a stick.

Hello, hello.

Great bands like U2 don’t need conductors, though they do have a great lead singer. When you are that close to each other, you don’t need a conductor.

Many Drupalers start out, like me, as one-man bands.

The one-man band does not need a conductor like an orchestra, because one man has full control over all the instruments. That allows him the luxury of using gut-feeling and common sense to make all development decisions. He can improvise or adjust anything necessary to keep the flow going. The one-man band is incredibly effective and responsive, but since he relies on intuition and common sense his agility does not scale up to many members undertaking a large project. That’s when we need something called project management.

An inspiration for many web design agencies

Have you seen this picture? This is a classic web design project from the point of view of the designer and the client, and as you can see it has a very linear flow. This is the U2 of web projects. There is a defined role for each instrument, each taking its turn in the tempo. The band follows a defined score that’s been practiced before, with minimal improvisation. And for simple projects this works fine. But what happens when you scale up this model to projects lasting more than a year, with large teams? You start to notice that the parts maybe don’t fit together too well, and maybe a conductor giving regular feedback on the volume would be helpful.

Why is scale a problem? Inventories necessary to achieve the scale grow larger and start causing waste. Take this favorite example of lean manufacturing: sending out 100 Christmas cards. How should we process these? The most efficient way, counter-intuitively, is to finish one card at a time. Write it, put it in the envelope, stamp it, seal it.

Then, start the next one.

We will prove it to you

This result has been shown to baffle people who think there are time savings involved in putting all 100 stamps in one batch. There may be small time savings from repeating a task over and over, but there are also multiple system-level drawbacks:
1st — There needs to be a pile of work-in-process 100 envelopes tall. The pile needs to be shuffled and maintained in between each process.
2nd — If a mistake is made and detected only at the last stage, it will have been repeated up to 100 times.
3rd — The first Christmas card to be completely finished will have waited until the 100th Christmas card was started and partly completed, when it could have been finished and mailed long before.

In the Christmas cards example, the number of cards we move from one task to another is called the batch size (either 1 or 100), and the number of cards ready to move to the next task is called the work-in-process inventory.
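The third drawback is worth quantifying. Here is a minimal sketch of the arithmetic, assuming four steps of 30 seconds each (invented figures):

```python
# Completion times for 100 Christmas cards, 4 steps of 30 seconds each,
# done by one person: one big batch per step vs one card at a time.
STEPS, CARDS, SECONDS_PER_STEP = 4, 100, 30

# Large batch: every card finishes step k before any card starts step k+1,
# so the first fully finished card waits through 3 steps x 100 cards.
batch_first_done = (STEPS - 1) * CARDS * SECONDS_PER_STEP + SECONDS_PER_STEP
batch_last_done = STEPS * CARDS * SECONDS_PER_STEP

# One-piece flow: each card goes through all 4 steps before the next starts.
flow_first_done = STEPS * SECONDS_PER_STEP
flow_last_done = CARDS * STEPS * SECONDS_PER_STEP

print(batch_first_done, flow_first_done)  # 9030 vs 120 seconds to mail the first card
print(batch_last_done, flow_last_done)    # 12000 vs 12000: total duration is unchanged
```

The total duration is identical either way; what one-piece flow changes is how early finished cards start going out the door.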

This is Eiji Toyoda. This man changed the world one Corolla at a time.

The most important discovery made at Toyota was that inventories were full of hidden waste. Little was gained by building them up besides making factory managers look effective, and much of the stock spoiled and had to be written off as a loss. Cutting down inventories, going lean on inventories, by reconsidering every batch transfer in the production process was how the company began its ascent to becoming the world’s largest automobile manufacturer. The definition of lean manufacturing is a production process that runs on the smallest possible level of inventory.

Implied in the simple website project flowchart is the belief that all tasks in a particular stage should be finished before the work is worth moving to the next stage. This is only true up to a certain size, because each increase in the scale of the project stretches each bubble out a little longer. This assumption causes delayed starts in later phases and delayed feedback in detecting defective work, and it is the reason the model fails to scale. In lean manufacturing it was discovered that the opposite assumption was more reliable: moving work one piece at a time through stages is optimal. This is an approach called one-piece-flow.

In web development our pieces are very fluid and divisible. We can consider a whole website to be one integral piece, like in the simple process. But we can also break it up into parts that can be worked on and completed independently. The agile method of writing user stories is an example of this. There may be no lower limit to how we can split and detail work, and thus finish parts of the project sooner by making work inventories and batches smaller.

The Costs of Software Inventories

Physical goods inventories tend to be obvious. When there are too many cars on the lot, the company is paying depreciation and storage, as well as interest on the capital spent to produce those cars. It is time to have a sale.

But what about software? It takes no physical space, and making more copies costs nothing. In what way does software constitute an inventory?

Joel Spolsky (the CEO of Stack Overflow and Trello) wrote a great article in 2012 on software inventories and their cost. He explains why Trello was designed to be intentionally difficult to use when too many cards accumulate in one column, giving users an incentive to reduce their inventory of work or move it forward.

Think of product ideas as the raw material. Depending on your process, product ideas may go through several assembly line points before they are delivered as finished features to the customer:

1. A decision-making process (should we implement this feature?)
2. A design process (specs, whiteboards, mockups, etc)
3. An implementation process (writing code)
4. A testing process (finding bugs)
5. A debugging process (fixing bugs)
6. A deployment process (sending code to customers, putting it on web server, etc)

(PS No, this is not “waterfall.” No it isn’t. Is not. Shut up.)

In between each of these stages, inventory can pile up. For example, when a programmer finishes implementing their code (stage 3) they give it to a tester to check (stage 4). At any given time, there is a certain amount of code waiting to be tested. That code is inventory.

Joel Spolsky describes the software production process as starting from an inventory of bare-bone ideas and finishing with an inventory of releasable code, much like a bread production process starts with flour and yeast and ends with warm loaves.

The most basic definition of a work-in-process inventory is any spent effort that has not yet delivered its value. The longer we work on something before releasing it to users, the bigger our inventory becomes. The shorter the time between our effort and the release to the users, the less interest we pay on this effort.

We’re going to look at our different work inventories starting from the most valuable, which is production-ready code, then to untested code, to designs that are waiting in development, and conclude with much rawer materials, ideas and work tickets that are the usual subjects of project management. This is the opposite direction from which projects are usually imagined, but that is the way the software value chain actually works. Users demand and value production releases, not ideas. This is why lean manufacturing is designed around the principle of inventory pull.

Release and testing in Drupal core

Drupal core is the largest, riskiest project in all of the Drupal ecosystem. It not only attempts multiple refactorings with each release, but it also relies on a worldwide distributed team of developers and designers working over several years to see a release. We are far from the Dries one-man band of the first version. The reason new core versions are possible at all is that the community employs sophisticated techniques to track progress and to speed up feedback: one of these is automated testing; another is tracking the level of inventory in issue queues.

Imagine, with all new features and bug fixes being developed simultaneously for Drupal, if we left testing as one big batch of work at the end of the development cycle. The testing phase would last months, if not years, as we played whack-a-mole with all the different regressions. Thankfully we added automated testing in Drupal 7, meaning we now get instant feedback if we introduce a breaking change to a project with tests. Automated testing is probably the first example of batch-size reduction most of us experience: each single piece of code is considered ready to go through a full testing cycle.

It’s important to consider how batch size and transaction costs relate when trying to go lean on inventories. Automated testing makes sending a batch to testing essentially free, but when testing was an expensive process we needed to wait until we had accumulated a much larger inventory of changes before we could afford a test cycle. At Toyota it was necessary to invent whole new industrial techniques to make the cost of working on one car at a time viable. So going lean on inventories is not free — it may require a substantial investment before small batches become economic. It may even require changing your market.

You may also have noticed that no one bothers to estimate how long the Drupal core project is going to take to be released. Instead we use an alternative technique to predict the release: monitoring the trends in issue queues. Here was the estimated release date for Drupal 8 when I prepared this presentation. This is an example of monitoring and tracking queues to control a project.

Sometimes November, sometimes September, who really knows? drupalreleasedate.com

I think Drupal 8's development cycle shows how a large batch size can explode into late releases, not only in the number of bugs that are introduced and the major issues that block the release, but also in the number of features that were completed long ago but are not yet released. These features are spent effort that is not producing any value. We should have had a responsive Drupal years ago, but because it was batched with the Symfony core and the configuration management initiative, it is still waiting for a full release. It would have been preferable, from a lean perspective, to break up the different initiatives and release them one at a time.

The implementation queue — our largest inventory

After testing and release, our next most valuable inventory is the development inventory, the information we use to determine the code that needs to be written, and this one is typically the biggest (but not the most expensive — which is why it can be good to have a large one). Here we introduce the concept of a queue, which is simply a representation of all the different things that are waiting to be worked on. In Drupal these are known as issue queues, and this is where we stash our work-in-process when we’re not actively working on it.

When are large inventories unavoidable, or just plain cost-efficient to hold?

If our work arrives in a single ticket every hour, and it takes us exactly one hour to finish it every time, we will never need an inventory, and we will never experience a queue. There is no reason to take on more work since we will never actually have the available time to do it. This means it is better to turn down extra work, and always get the freshest demand.

Things get trickier if we can’t predict how long it will take us to finish the job. If it takes us 10 minutes, we will wait idle for 50 minutes. If it takes us an hour and a half, the next arriving ticket will wait for half an hour, and this waiting time will carry over to the ticket after that until we receive a job that finishes in half an hour or less.

And here is a situation that’s even more typical of web development: a customer calls in proposing a project that will break down into hundreds of tasks that will take months to complete. Our work inventory suddenly explodes to hundreds of tasks. This demand is obviously highly variable, but it is how many agencies function, because that is how customers prefer to purchase their services.

Extremely variable demand and random service times create a FIFO queue at the Apple WWDC

This wait gets even worse if the work time and the arrival time are both unpredictable. For instance, let’s say we are working a ticket and suddenly a customer calls in with an urgent bug report that preempts all our other tasks. Until we resolve the bug, more and more tickets will build up in the queue.

So the inventory is needed because both supply and demand are highly variable. No amount of padding estimates and breaking down requirements is really going to change that. They might even make the project take longer. In the presence of variability, inventories keep the work flowing.

But how busy should we really be? What gets measured gets managed, and what agencies typically measure is how busy their people (“human” resources) are. This can have extremely harmful consequences on issue service times, called lead times in manufacturing, and introduce a lot of waste in our processes.

Statistical models show that, because of variability, the share of its lead time a task spends waiting grows in step with capacity utilization. This means that if your developers are busy 95% of the time, their tasks can spend on average 95% of their time waiting in the issue queue, not being given any attention. (You can also measure how busy your team is by how long it takes them to start a task.)

You may have noticed that for tasks that have high urgency, such as system administration tasks, we allow much more slack in the queue. This is because a sysadmin who has five high-priority tasks to handle cannot get to the fifth task before some kind of catastrophe has happened. The solution is to have a spare sysadmin around, who spends most of his time idling. Spare capacity is also how telephone lines, restaurant tables and public restrooms are allocated. We don’t want them to be efficiently utilized with long queues; we want to spend no time waiting when we need them.
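You can check this effect with a toy simulation. The sketch below assumes a single developer with random (exponentially distributed) arrival and service times — an idealization for illustration, not a model of any particular team:

```python
import random

def average_wait(utilization, tickets=200_000, seed=1):
    """Average hours a ticket waits in queue before work starts on it."""
    rng = random.Random(seed)
    service_mean = 1.0                         # one "hour" of work per ticket on average
    arrival_mean = service_mean / utilization  # a busier developer means faster arrivals
    clock = server_free_at = total_wait = 0.0
    for _ in range(tickets):
        clock += rng.expovariate(1 / arrival_mean)  # next ticket arrives
        start = max(clock, server_free_at)          # it waits if the developer is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(1 / service_mean)
    return total_wait / tickets

for u in (0.50, 0.80, 0.95):
    print(f"{u:.0%} busy -> tickets wait {average_wait(u):5.1f} hours on average")
```

Waiting time grows explosively as utilization approaches 100%, which is why the spare sysadmin pays for himself.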

Having idle people available will speed up your project by preventing queues from growing out of control. Whether or not having someone around who works only 50% of the time is worth it depends on the cost of waiting involved in your project.

For proof of all the previous claims, look at A Dash of Queuing Theory. It’s full of interactive charts.

Practice queuing discipline to reduce the cost of the inventory you’re holding

So our project just finished its kickoff meeting. Now we have a hundred tickets in our JIRA board. Which one should we work on first? In some situations, it is better to use the inventory than to try to shrink it. Unlike manufacturing, where inventory occupies physical space, our inventories occupy virtual space in issue queues. This means we may be fooled into thinking that they do not cost anything to hold (besides the effort we spent writing the tickets), but they are full of potential waste. This is where queuing discipline comes in: queuing discipline is how we select what part of the inventory should move forward to the next process.

Imagine a hospital emergency room. If it is mostly empty, patients will be treated in the same order as they arrive. If it is overflowing, patients with cardiac arrest or chainsaw injuries will go first, and patients with broken ankles will wait until the queue resolves. That is because we consider higher perishability to be more costly. Doing it first-in-first-out or in random sequence would obviously have vastly different outcomes. These different outcomes are the benefits of queuing discipline.

There are three very useful ways to optimize work queues:

  1. Highest cost priority, such as the emergency room, which we usually practice by putting priority ratings on work tickets. High priority, critical bugs get worked on before low-priority bugs. Features that customers or clients value more get worked on before those they value less.
  2. Riskiest task priority, which the Lean Startup approach is built around; it allows us to eliminate accessory effort when it has no chance of success, and creates the learning we need to move the project ahead. We’ll get to this in a moment.
  3. Fastest task priority, which creates benefit by shrinking the weight of the queue itself and initiating the next processes sooner. I have two examples of this that come from household chores.
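The payoff of fastest-task priority shows up directly in average completion times. A minimal sketch with invented ticket durations:

```python
# The same five tickets served in arrival order vs shortest-first.
def average_completion(durations):
    """Average time a ticket spends in the system, served in the given order."""
    elapsed, total = 0, 0
    for d in durations:
        elapsed += d          # this ticket finishes once all previous ones have
        total += elapsed
    return total / len(durations)

tickets = [8, 1, 5, 2, 13]                  # hours of work, in arrival order
print(average_completion(tickets))          # 15.2 hours first-in-first-out
print(average_completion(sorted(tickets)))  # 11.4 hours fastest-task-first
```

The same work gets done either way, but fastest-first shrinks how long the average ticket sits in the inventory.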

Picture a large pile of unsorted socks coming out of the dryer. The pile is your work inventory, and the batch size is one matching pair of socks. Which pair should you fold first? The answer is the most obvious one, the fastest one you can identify. Removing that pair shrinks the entanglement of the pile of socks, making all other pairs a little easier to identify.

The same applies when a large inventory of dirty dishes has accumulated in the sink. Usually I wash dirty dishes one at a time, without letting them pile up in the sink, in true one-piece-flow fashion. Try it, it’s very liberating. Otherwise, the best way to clean up the sink is to clean the cleanest item in it first. This opens up space in the sink to wash the heavier jobs with fewer obstacles to your movement, so you finish those faster.

A lot of the work that piles up in long Drupal projects tends to be very ambiguous. We think that this or that module can do the job, but sometimes the job itself isn’t very clear, or the project as a whole is blurry. Taking on the shortest task first is a good way of learning more about the details of the whole project, and so it actually shrinks the length of the remaining tasks. It also lets you test your release cycle sooner. Taking on the hardest task at the end means that you have the maximum preparation and experience to handle the challenge. It’s actually negative depreciation, or appreciation, of the remaining inventory of work.

Lean Startup Techniques

Automated testing and agile engineering allow us to significantly improve the quality of the software we produce. But what about the quality of our other deliverables? How agile are they? Could we be wasting our time writing great code? This is what Eric Ries discovered when he launched his startup, IMVU. His lesson became the Lean Startup.

As the legend goes, he was the CTO in charge of engineering a highly sophisticated plugin for instant messaging applications like AIM and MSN messenger (remember those?) so that the users of his product could invite their friends and fuel the company’s viral growth.

What’s potentially uncool about this?

When he delivered this brilliant piece of engineering using all the best practices for engineering teams, he found out his users would not use it! They didn’t want to risk their friendships on such a potentially uncool experience of virtual avatars. However, after interviewing users, the team learned that users had no problem installing a new app in order to make new friends.

It turns out nobody had verified whether the users would actually want the product and engage with it. Through no fault of the engineering team, their entire work was thrown out.

Eric Ries realized that by measuring this result, the knowledge gained about the users, against the necessary effort to obtain it, he could identify and eliminate all of his engineering work as waste — the only engineering needed was a minimum viable product to test whether the users would behave how they had assumed. Building and learning on top of a minimum viable product is a process he termed continuous innovation.

Measuring the Necessary Effort Measures the Waste

Once the obvious waste has been eliminated, the remaining waste we suffer in complex processes is hidden in the intricacies of the process, such as the inventory between stages. This is why an honest measurement of final results and necessary effort is needed to identify it. A factory producing cars at the fastest pace in the world might appear to be a miracle, but if those cars are the wrong color for the consumers’ tastes, they will stay in the showrooms depreciating until someone triggers the alarm about the out-of-control accumulation. A wrong color is just as bad as a broken engine — the car is wasted until repaired. A wrong color, just like a wrong feature or a design mockup that can’t be implemented, is a defective inventory.

This cost can be avoided if we measure the true, final value against the necessary effort to achieve it, then work to improve our processes based on these lessons. Lean manufacturers now protect against defects such as these with pull systems — they only increase an inventory that has been valued by a customer action. When nothing is known about what the customer will do, such as in a startup, we should pull the smallest project that will create these actions.

Lean software development isn’t about automated testing, although it can be a useful part of it. It is about getting the earliest possible feedback about inventories before the waste they carry grows larger.

Incorrect assumptions waste time, so fail fast

Imagine that you have a request for a feature, and you guess that it can be done with a view and some row templates. You eagerly set out to build the view, writing your templates, styling the CSS, and then, when it comes time to add the fancy relationship needed to finish the feature, you find out the Views module doesn’t have that kind of relationship available.

Oops. Gotta throw out all of your work.

If you measure the results against the necessary effort, the only productive result you achieved was learning what Views can or cannot do, and the more efficient way to have obtained that result would have been to test the risky relationship before doing any other work that depended on it.

You should have failed fast instead of late. And the emphasis here is very important: it’s on the fast, not the fail. There is nothing valuable about failure in the abstract.

Here’s the same problem presented with some math. Imagine you have two codependent tasks, one with a 100% chance of success and another with a 10% chance. Which should you do first? The one with 10%. This is because the expected cost of the work is lower in the sequence where you keep the option to cancel a task that depends on a failed task. In both scenarios you only learn what Views cannot do, but in the one where you cancel the rest of the work you expended less effort.
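Here is that expected-cost arithmetic spelled out, with invented hours for the two tasks:

```python
# Task A (the view, templates, CSS) always succeeds and costs 10 hours.
# Task B (the risky Views relationship) costs 2 hours and succeeds 10% of the time.
# A is worthless unless B succeeds. The hours are invented for illustration.
cost_a, cost_b, p_success = 10, 2, 0.10

# Risky task first: we only pay for A in the 10% of cases where B worked.
risky_first = cost_b + p_success * cost_a   # 2 + 0.1 * 10 = 3.0 expected hours

# Safe task first: we always pay for both before discovering B's failure.
safe_first = cost_a + cost_b                # 12 hours, usually mostly wasted

print(risky_first, safe_first)
```

Either order teaches you the same thing about Views; only the expected bill differs.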

This allows you to make many more attempts for the same amount of time available. Plan B fails, on to plan C, then D, then all the letters of the alphabet, some numbers, and finally just commit hashes. Something will eventually work.

The lesson: fail as cheaply as possible. Only where you cannot falsify the assumption should you take on expanded risk.

Failing to Win with High Risk Inventories

Failing can sometimes be a winning strategy. Suppose we are playing twenty questions. You have to guess what I am thinking of. The best strategy is to ask questions whose failure provides as much insight as success. For example, guessing Napoleon and getting it wrong provides less insight than guessing a human and getting it wrong, since by guessing a human we have eliminated more possibilities. A bad guess can tell you as much about the answer as a good guess! Computer scientists will recognize this as the strategy behind binary search algorithms.
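A minimal sketch of the analogy, assuming the thing to guess is a number in a sorted list of candidates:

```python
# Twenty questions as binary search: every answer, yes or no,
# eliminates half of the remaining possibilities.
def guesses_needed(candidates):
    count, low, high = 0, 0, len(candidates) - 1
    target = candidates[-1]              # worst case: the answer sits at the edge
    while low < high:
        count += 1
        mid = (low + high) // 2
        if candidates[mid] < target:     # a "no" is as informative as a "yes"
            low = mid + 1
        else:
            high = mid
    return count

print(guesses_needed(list(range(1_000_000))))  # roughly 20 questions for a million candidates
```

Each question that "fails" still halves the search space, which is exactly the high-insight failure described above.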

When considered broadly, the early stages of web development, specification and design, are actually the ones where most of the risk is tested, and that is because they are the cheapest tests to produce, and the cheapest inventories to throw away. Simply describing a user story and asking honestly how valuable that story will be is a good way to falsify the value produced by such a feature. In the same way, creating a mockup and testing the mockup for fitness to the requirements is a cheap way to prevent useless development work from taking place.

And sometimes the cheapest way to test an assumption is to just write the code and see if it sticks. If it doesn’t, we will at least know what the less or more sticky parts are.

In software and web development we are always trying to do something for the first time, so practically all of our ideas only have a probability of being valuable; they are not certain to be. If something has less than a 100% chance of being a success, it’s more efficient to invest in a few cheap tests to filter out the failures instead of investing a full development and release effort into everything.

Continuous migration — continuous delivery for website redesigns

Surely a big website relaunch has to take place in one large batch at the end of the project? Here’s an idea: use Varnish to serve content from the old and new site simultaneously, updating the routes to the new site as they become ready for release. Instead of continuous integration, we can deliver the new website with continuous migration.
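A hypothetical sketch of what that routing could look like in Varnish's VCL — the hostnames and routes are invented, and the exact syntax depends on your Varnish version:

```vcl
vcl 4.0;

# Everything defaults to the old site; each route moves over as it is released.
backend old_site { .host = "old.example.internal"; .port = "80"; }
backend new_site { .host = "new.example.internal"; .port = "80"; }

sub vcl_recv {
    # Grow this pattern one released route at a time.
    if (req.url ~ "^/(blog|about)(/|$)") {
        set req.backend_hint = new_site;
    } else {
        set req.backend_hint = old_site;
    }
}
```

Each deployment then shrinks to one route's worth of risk instead of a whole-site cutover.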

I didn’t know where this part belonged in my story, but it illustrates how we can open our minds about batch sizes when we reconsider our transfer processes, so here it is as a short interlude before taking on heavy project management questions.

Going lean on project management and meetings

Lean Estimates

How much time should we spend estimating? The answer depends on how much time we can save with estimation, which means knowing what decisions we can make using estimates. Will making an estimate help us finish the project sooner, or are we making it longer by wasting our time estimating the time we will spend on tasks?

The first kind of estimate we make is typically a project scale estimate, which is where we tell our client or manager whether the goal is realistic within the given budget or timeline. It’s an application of fast failure meant to prevent projects that are too ambitious to see a successful release from getting started at all.

The second kind of estimate is the prioritization and team optimization estimate, where different team members share their perspective on the complexity of a task in order to select which team member should take on the work, and whether it has enough value for its difficulty to be a priority. Sometimes this is done with a planning poker game.

The last kind is a release estimate. How do we estimate the duration of a batch process that takes variable time to complete? We actually solved this problem in software by inventing the progress bar. It turns out that a release estimate is just the negative of a progress estimate. Software installers make estimates of time remaining simply by measuring the time elapsed and the percentage of tasks completed.

We’re making good progress here.

If you can only make one kind of estimate, make it an estimate of the number of tasks in a project, which means measuring the length of the queue. Even if every task takes wildly varying lengths of time to finish, you will have better perspective on the progress of your project if you know what remains to be done.

It becomes dangerous to use a progress bar to make predictions when one task is orders of magnitude larger than other ones. It may appear that your progress has slowed to a crawl or is stuck, and then the user might try to abort the whole thing. Here the agile practice of estimating story points is again useful. Simply by describing one task as having the weight of 20 baseline tasks, we can adjust the progress shown on our progress bar by making it move 20 points.
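A minimal sketch of such a story-point-weighted release estimate, with invented numbers:

```python
# Release estimate as the negative of a progress estimate: measure the burn
# rate so far and apply it to the story points that remain.
def remaining_days(elapsed_days, done_points, total_points):
    rate = done_points / elapsed_days          # points completed per day so far
    return (total_points - done_points) / rate

# 30 one-point tasks plus one 20-point monster; 25 points done after 10 days.
total = 30 * 1 + 20
print(remaining_days(elapsed_days=10, done_points=25, total_points=total))  # 10.0 days left
```

Weighting the monster task at 20 points keeps the bar moving smoothly instead of appearing stuck for days.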

Of course we have other dimensions to deal with in software development, such as the fact that the total work in a project tends to always increase, thus the progress bar gets longer as it makes progress. Work can also be in more states than the simple done or not done of the progress bar. This is what a cumulative flow diagram resolves. It’s essentially a two-dimensional progress bar, maybe two-and-a-half dimensions.

Any additional effort spent estimating tasks is probably a waste, as more precision or accuracy doesn’t allow us to make better decisions or finish anything faster. The important thing to keep in mind is that estimating tasks based on timelines, such as telling people that 8 story points is worth one day of work, adds no additional information, but will make estimation harder and make the team reluctant to participate in estimation. Stick to pure scale estimates and don’t stretch the meeting any longer.

Waterfalls and Agility

What it essentially means to be agile is to send information back about emerging obstacles and opportunities early enough that a change in direction is still possible. The stereotypical waterfall project has an information flow that goes in only one direction: top-down. This situation is common because it is the most expedient way to manage a project in the absence of team-client experience.

In my experience, I’ve observed that in such projects one large batch of feedback builds up at the end of the project, where painful reconciliation and negotiation must take place between the team and the client. Sometimes the project is just declared a failure and written off, something that could have happened much earlier with fast feedback by testing the riskiest assumptions first. Everyone could have saved time and moved on to more productive work.

You will at this point realize that lean and agile are two facets of the same idea, which is having a responsive production process. For this reason many agile methodologies contain lean techniques. Let’s go over the lean techniques in the two most common methodologies, Kanban and Scrum.


Kanban was actually invented at Toyota to solve the problem of sending feedback up the production line when inventories accumulated. The idea is to represent space in an inventory with a token, the kanban, and physically block new work from starting and entering the inventory until the kanban returns from the next process.

If a slowdown occurs or a problem is detected at the end of the chain, the absence of the tokens immediately stops work on all previous stages of the production chain. This frees up people to intervene to repair the defect, and stops costly inventory from accumulating to an extent that will never be corrected.

Someone’s high-tech kanban board

Kanban in software development is typically represented as post-it notes on a whiteboard. When there is no more room on the board for another post-it, work stops and intervention has to occur. The space available for post-its is what enforces inventory limits. This is actually harder to do if your issue tracker is virtual, because trackers are typically designed to hold unlimited amounts of inventory. You have to carefully monitor the queues instead of waiting for the space to fill up and people to stop their work. Trello’s devious trick of hiding overflowing tasks is an interesting twist on the concept.
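Enforcing a WIP limit in a virtual tracker can be sketched in a few lines (the class and names here are hypothetical, not any real tracker's API):

```python
# A board column that refuses new cards once its kanban tokens are used up,
# mimicking a whiteboard that has run out of space for post-its.
class Column:
    def __init__(self, name, wip_limit):
        self.name, self.wip_limit, self.cards = name, wip_limit, []

    def accept(self, card):
        if len(self.cards) >= self.wip_limit:
            raise RuntimeError(f"'{self.name}' is full: finish downstream work first")
        self.cards.append(card)

testing = Column("Testing", wip_limit=2)
testing.accept("feature-a")
testing.accept("feature-b")
try:
    testing.accept("feature-c")   # blocked: no kanban has returned from Testing
except RuntimeError as stop_the_line:
    print(stop_the_line)
```

The refusal is the point: the blocked developer is now free to go help clear the Testing column instead of growing the inventory.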


Scrum implements batch-size reduction and fast feedback through two techniques for organizing meetings: timeboxes and cadence. Scrum defines the necessary meetings of a software project as:

  • the backlog grooming session, which replaces the requirements planning meeting of traditional software development,
  • the sprint planning meeting, which replaces the architectural design and project management stage,
  • the daily standup, which replaces constant interruptions by team members,
  • the sprint review, which replaces customer acceptance, and finally
  • the sprint retrospective, which replaces the project post-mortem and root-cause analysis meeting.

Scrum is an attempt to compress the whole process of developing software into a single, defined time interval. The genius of scrum is its cadence: all of the meetings happen at the same, regular times on a predictable schedule. This eliminates the transaction cost of scheduling a meeting and persuading every participant to attend. Because meetings happen at regular intervals, we also don’t wait until a large amount of work has accumulated before organizing one. (Remember the tradeoff between testing cost and test batch size? The same principle is at work here.) There is immediate presence and attention on work inventories, and the amount of work moved from one stage to another is limited by the timebox: no more than can actually be discussed in the time scheduled for the meeting.

Finally, the daily stand-up scrum is a scheduled interruption, much like the preemption operating systems apply to processes to virtualize multitasking. A scheduled interruption allows early detection of, and intervention on, tasks that would otherwise take infinite time and block the whole production process (fast failure).
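The scheduled-interruption idea can be sketched as a periodic check that inspects every in-flight task and flags any that has outlived its timebox, rather than waiting for it to finish (which it might never do). The task data and the two-day threshold are invented for the example.

```python
# Stand-up as scheduled interruption: at a fixed cadence, flag every
# in-flight task older than a threshold instead of waiting for it to
# finish. Task names, dates, and the 2-day timebox are hypothetical.
from datetime import date, timedelta

tasks = [
    {"name": "build content types", "started": date(2015, 6, 1)},
    {"name": "migrate legacy data", "started": date(2015, 5, 20)},
]

def standup(today, max_age=timedelta(days=2)):
    """Return the tasks that have blown their timebox and need intervention."""
    return [t["name"] for t in tasks if today - t["started"] > max_age]

print(standup(date(2015, 6, 4)))  # both tasks get flagged for discussion
```

The check is cheap precisely because it runs on a cadence: no one has to notice a stuck task and call a meeting about it.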

The major problem with scrum is that it was specifically intended to solve the problem of large-scale IT projects that grew out of control, not web design or consulting projects. It doesn’t fit those very well, but no law says we can’t design our own production process by stealing its good parts.


If we imagine that Lean is a language, Kanban a library and Scrum a framework, then Scrumban is the headless form of Scrum with Kanban as the front-end. The idea is rather simple: instead of doing traditional sprints, keep only the cadence principle of scrum, with all the meetings at regular intervals, and use a kanban board to track the work. When space opens up on the board, take advantage of the meeting to fill it up. Stop the meeting when the board is full.

Scrumban opens the way for more varied processes than what scrum traditionally features, for example an iterated simple website project on a 3-day cadence.

There’s more to learn

So I hope this opened everyone’s eyes a little to the fascinating subject of lean inventories in web development. We learned that what makes complex processes fail to scale is the accumulation of inventories, that inventories in software take the shape of queues, and that controlling queue sizes and applying queuing discipline is a highly effective way to cut down a project’s duration.

Read this book at the beach this summer

If you want to learn more about the mathematics and economics of lean software development, the reference is The Principles of Product Development Flow: Second Generation Lean Product Development by Donald Reinertsen. Be warned that this is the kind of book that needs to be read iteratively to be fully grasped.

For another fascinating look at how inventories matter in technology industries, The Lean Startup by Eric Ries is a book that deserves all the praise it has earned. And you should all look up Joel Spolsky’s blog post on software inventories and read that if nothing else.

My name is Mathieu Hélie. I am a developer for Drupal North sponsor Floe Design + Technologies in Montreal. I also cofounded a boutique agency specializing in live event webcasting, and I had to learn many of the preceding lessons the painful way. Find me @mathieuhelie on Twitter and Drupal.org.



Mathieu Helie
Floe design + technologies

Entrepreneur, Web Developer, Emergent Urbanism and Complexity Science