Why Project Planning Fails
Tempering your irrational optimism with research, spikes, and tracer bullets
You show up to your 9:30 planning meeting on Monday morning, excited. Nothing is as promising as a plan.
Of course, each time you get a few weeks into a new project at your company, something comes up. Your deadline gets pushed back, or the scope of the project gets cut. You think back ruefully on how certain success felt at one point. Never mind about that.
This time, you’ve cracked the code! The team’s produced a plan that takes into account all of the potential unknowns. It’s a weather-proof plan, invulnerable to delays and PM politics. Naturally, you’re excited.
This is how the irrational optimism of project planning works.
Picture a feature getting built in two different worlds, identical in every way but one — in one world, your team chooses to confidently put together an all-encompassing plan from Day 1 (the world of irrational optimism).
In the other, your team knows that no plan is going to hold up to the stress of reality.
World 1: Planning with irrational optimism
In the first version of this scenario, your Day 1 planning meeting leads to a project design that tries to account for every variable that could possibly go wrong.
You think 6 months would be a healthy timeframe, but you don’t expect to get 6 months. That’s why this project design is built to be executed, one step after the other, without any dramatic mid-play audibles — for momentum’s sake. You don’t know if everything is going to work, but you must keep going. The PM insists the project can be done in 3 months — or at least their boss, the head of product, does.
When you hit 3 months, your lead developer estimates that you’re about 70% done. You ask them how long it will take to get the project done, and they shrug. You tack on another month. After a month, you ask again. They sound even less sure this time.
Slowly but surely, your team’s optimism and enthusiasm is replaced with frustration. Morale slips as the head of product starts asking when the project will finally wrap. You’re no longer sure the project is even viable, but it’s a little late to switch to Plan B.
World 2: Planning with rational realism
In another world, you start Day 1 with the recognition that your plan sucks.
Libraries will conflict. Frameworks will not work the way you need them to. Your tools will underperform. Your assumptions will unravel before your eyes and you will need to adjust.
You will need a Plan B, and a Plan C, so you put those together in addition to your “Plan A.”
You know there are holes in your knowledge, and you know you’ll encounter more. Rather than treating your plan as a fixed script, you approach it like a roadmap.
At each hole along the way, you use the most efficient process possible to figure out what you should do — and which way you should go. Sometimes that means going back to the drawing board and doing some research. Sometimes it means crafting a hyper-narrow implementation of your idea as a throwaway test. Sometimes it means building a very basic, end-to-end version of it to provide a basic foundation for more work.
Sometimes it means ditching Plan A entirely.
You systematically break a large goal down into many smaller tasks — each a mini project in and of itself — and that lets you stay flexible and get your project done in a more resilient, anti-fragile way.
This second method of project planning is something we’ve been forced to learn here at RankScience, and today, we swear by it.
The research-spike-tracer bullet cycle
When people plan projects in advance, they usually fail to account for 90% of the issues that will pop up along the way. In reality, most projects are highly uncertain endeavors.
We just act as if they’re not — as if everything will always go according to plan.
It’s human nature to try to make plans, but at RankScience we’ve found that the best plan is made up of just a goal and a loose outline of how it might be achieved.
There are always going to be gaps — whether you write them down or not — and they’re best tackled with the conscious application of three techniques:
- Research: Preliminary study of the problem, meant to flesh out your different solutions and weigh each one’s technical pros and cons as well as its fit with your principles as an organization; 100% about information-gathering
- Spike: A basic end-to-end but throwaway experiment, not necessarily within your architecture, designed to reveal hidden contours of the problem and help you acquire more knowledge about the domain
- Tracer Bullet: A narrowly-scoped implementation of the idea, meant to be kept, and to demonstrate whether or not your solution will work in practice, but designed with the future in mind — you can build on top of a tracer bullet
Research, spikes, and tracer bullets are nothing new. But the secret to these tools, we found, is that you don’t treat them as either-or. You don’t debate the relative merits of tracer bullets vs. spikes, as some do on the internet. You use them as part of one cycle of uncertainty optimization — because each addresses a very different stage of uncertainty.
On any given project — especially one that involves more technical elements — you will, in all likelihood, move through these three stages in sequence. You start off with the most uncertainty, which means you begin by doing research. At this point, you might only have the vaguest idea of what you should be doing. In the spike stage, you might know what you should do in theory, but have no clue as to the actual implementation. And in the tracer bullet stage, you know what you need to do to solve the problem, but you’re not sure what it will look like or how long it will take.
These three techniques together can help you get far more feedback faster, and that’s essential. Normal project planning happens in a bubble, and you need feedback loops if you’re going to learn and push on through that uncertainty.
1. Research
Research is the first step, and the one you always take when you’re dealing with the greatest amount of uncertainty.
The research stage is when you know you have some kind of problem but you have zero idea of how to fix it. Your deliverable isn’t code at this point. It’s most likely just a list of pros and cons and other aspects of that design decision that you want to consider.
The research work stream should provide you with a list of different possible solutions, as well as the possible benefits and negatives of each one. These shouldn’t be purely technical. They should also be tied to your values as an organization. At RankScience, for example, we want to know how every decision we make on the product is going to affect:
- Our operational burden: Will this move increase our workload or lessen it?
- Our bus count: Will this move make our company more or less vulnerable to sudden shocks?
- Our team velocity: Will this move let us stay flexible and keep moving relatively quickly?
We want our research work streams to tell us how our different options will affect those factors we care about in the long-term, and which ones will really help us hit our goals.
2. Spike
Once you understand your problem and you’ve picked a possible solution that works for your business, you’ve cleared up one layer of uncertainty.
The spike helps you figure out how to actually implement that idea.
A spike is a prototype you throw out there: a quick, dirty, and usually time-boxed implementation (“I’m only going to spend a week on this”) that is always thrown away afterward. It exists purely to test a certain kind of solution. It might not even be written in the same language or to the same architecture your research dictated.
Your only goal is to learn something — to figure out how what you want to do works.
The spike catches occasional criticism in the agile community, mostly because the thought of spending your time on “disposable work” is anathema to moving quickly. But a spike is a great way to take a domain you don’t understand, where you don’t feel much confidence in your design abilities, and start chipping away at it effectively.
The key to a successful spike is that it should cross through the “full stack” of your domain — from end to end, but in a very thin way. It could be as simple as a “Hello World!” program, as long as it cuts through everything from the user interface to the database to the core business logic of your program.
If the core of your problem is scaling, and you’re trying to understand how your DB will handle 50 million records, then start by writing 50 million dummy records to it to see how it holds up. If the core of your problem is figuring out what your UI should look like, then open Sketch or a pad of paper and make some quick and dirty mockups you can show a potential user.
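For the scaling case, a spike really can be just a handful of throwaway lines. The sketch below is a minimal, illustrative version of that idea — it uses an in-memory SQLite database as a stand-in for whatever database you actually run, and a small record count you would crank up for a real spike; the table and function names are hypothetical:

```python
import sqlite3
import time

def run_load_spike(n_records: int = 100_000) -> float:
    """Throwaway spike: bulk-insert dummy records and time it,
    to get a rough feel for how the schema holds up under volume."""
    conn = sqlite3.connect(":memory:")  # stand-in for your real DB
    conn.execute(
        "CREATE TABLE pages (id INTEGER PRIMARY KEY, url TEXT, rank REAL)"
    )

    start = time.perf_counter()
    conn.executemany(
        "INSERT INTO pages (url, rank) VALUES (?, ?)",
        ((f"https://example.com/{i}", i * 0.001) for i in range(n_records)),
    )
    conn.commit()
    elapsed = time.perf_counter() - start

    # Quick sanity check: all the dummy rows actually landed.
    count = conn.execute("SELECT COUNT(*) FROM pages").fetchone()[0]
    assert count == n_records
    conn.close()
    return elapsed
```

The point isn’t the number it returns — it’s that an afternoon of disposable code answers the scaling question before you commit to the schema.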
As Norman Carpenter put it, a spike is the “simplest thing you can program to convince yourself that you’re on the right track.”
3. Tracer bullet
Tracer bullets are usually loaded as every fifth round inside a machine gun. When fired, they leave a bright streak in the air where the bullet passed, illuminating its path. They first became common during World War II, when aircraft needed to be able to quickly identify and fire upon enemies under cover of night. Unable to make lengthy adjustments to their firing path based on the wind and their speed, gunner crews could adjust their aim in real time by simply observing the path their bullets took.
The notion that the tracer bullet could be applied to software development was born in The Pragmatic Programmer by Andrew Hunt and David Thomas. The idea of the tracer bullet in development is a skeleton solution that helps provide clarity to the development process — it’s what you try when you know your problem, know what you’re going to try to do to fix it, but don’t yet know the scope or timeframe of the solution.
They sound a bit like spikes, but they’re different. For one, tracer bullets aren’t disposable — they’re meant to be kept around. For another, they use your architecture and programming language of choice. You don’t build tracer bullets purely to find knowledge; you build them to make tangible progress towards your solution.
They are, however, narrowly scoped solutions. You put them into practice as quickly as possible so that the next iteration can be adjusted slightly, based on what you’ve seen — then rinse and repeat.
When designing architecture, for example, you might create a fully functioning but minimal “Hello World!” example that uses the new architecture from end-to-end. Maybe you show that it can connect to a client, query the database for some dummy data, and then push that back to the user. It’s a tiny self-contained unit of computation, but it proves out (in an important way) whether the whole implementation is what you really want. You write tests, and you write for keeps.
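A tracer bullet of that architecture example might look like the sketch below — one thin, end-to-end slice of a hypothetical reporting feature: request in, database query, shaped response out. Unlike a spike, it is written in your real stack’s idiom and comes with tests, because it is meant to be built upon. The names here are illustrative, and SQLite stands in for the real database:

```python
import sqlite3

def init_db() -> sqlite3.Connection:
    """Set up the thinnest possible schema with one dummy row."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE reports (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO reports (title) VALUES ('Hello World!')")
    conn.commit()
    return conn

def handle_request(conn: sqlite3.Connection, report_id: int) -> dict:
    """The full path, end to end: take input, hit the DB, shape a response."""
    row = conn.execute(
        "SELECT id, title FROM reports WHERE id = ?", (report_id,)
    ).fetchone()
    if row is None:
        return {"status": 404, "body": None}
    return {"status": 200, "body": {"id": row[0], "title": row[1]}}
```

Each layer is trivial on its own, but together they prove the whole path works — and every later feature gets added by thickening one of these layers rather than starting over.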
The best-laid plans
At RankScience, we use research, spikes, and tracer bullets to generate a high velocity of feedback loops during each stage of our product development process. We reach for each tool based on the type of dilemma we hit in the holes of our project plan:
- When we don’t know what to do at all: Research
- When we find out what we should do, but don’t know how to do it: Spike
- When we know how to do it, but not how long it’ll take or what it should look like: Tracer bullet
We’ve found that uncertainty is the biggest, most important, and often last considered variable when it comes to project management.
When you know your environment well, you can move fast and break things with ease. Do some research, design some mockups, pass your designs off to your developer — you’re done.
When you’re working on any kind of uncertain project, you might not even know what you should be doing research on. You can waste a lot of time, money and morale shooting in the dark.
There are fewer established patterns for what you’re doing, no Stack Overflow threads to help you along the way, and a lot of fear.
To cut through uncertainty in planning, you have to acknowledge that you don’t have all of the answers from Day 1. You have to take smaller, more methodical steps — and you have to optimize for your own uncertainty.