Here’s Why All Your Projects Are Always Late — and What to Do About It
Whether it’s a giant infrastructure plan or a humble kitchen renovation, it’ll inevitably take way too long and cost way too much. That’s because you suffer from the “planning fallacy.” (You also have an “optimism bias” and a bad case of overconfidence.) But don’t worry: We’ve got the solution.
In 1968, the governor of New York, Nelson Rockefeller, received a proposal he’d commissioned. It addressed the mass transit needs of the New York City area. One centerpiece of the plan was a new subway line that would run from lower Manhattan up the East Side and into the Bronx. It was called the Second Avenue Subway.
Four years later, Rockefeller and New York City Mayor John Lindsay held a ground-breaking ceremony for the Second Avenue Subway. But not long afterward, the project was shelved because of a fiscal crisis. Years later, a new governor, Mario Cuomo, tried to restart it. Once again, the budget would not allow — and back it went on the shelf. By then, the Second Avenue Subway had become a punchline. A New Yorker would promise to pay back a loan “once the Second Avenue Subway was built.” It came to be known as “the most famous thing that’s never been built in New York City.”
The story of the Second Avenue Subway is a particularly grotesque example of a blown deadline. But surely you can identify. Surely you’ve been involved in something — maybe a work project or a home renovation, even writing a paper — that was also grotesquely late. And painful. And expensive.
Today on Freakonomics Radio: Why are we so bad at finishing projects on time? And what are we supposed to do about it?
Roger Buehler is a professor of psychology at Wilfrid Laurier University in Waterloo, Ontario. He studies social cognition and decision-making. Buehler has long wanted to know why we’re so bad at managing projects. His interest began during grad school with a personal puzzle. Every night, as Buehler left his office, he would pack up his briefcase with work for the night — but more often than not, he would return to work the next morning with all of it untouched.
“[E]very night, as I packed up that briefcase, I was sure that my plans were realistic,” Buehler says. “So that was the puzzle: Why wouldn’t I learn from experience and get more realistic in my estimates?”
This phenomenon has a name, courtesy of psychologists Daniel Kahneman and Amos Tversky. It’s called the “planning fallacy.” Buehler defines it as the “tendency to underestimate the time it will take to complete a project while knowing that similar projects have typically taken longer in the past.”
Buehler and some colleagues set out to measure the planning fallacy. Their first experiment used honors students who were working on their thesis projects. The researchers asked each student to predict when they’d submit their thesis. These students predicted, on average, that their theses would take 33.9 days to finish. They actually took 55.5 days. That’s a 64% overage.
Buehler and other researchers have found similar evidence of the planning fallacy among stockbrokers and electrical engineers and doctors. They also found it in everyday activities like Christmas shopping, doing taxes, and even waiting in line for gas.
Why is there such a gap between our intentions and our behavior? Buehler points to two factors. The first is our failure to account for all the possible paths a project may take.
The second factor is people’s tendency to see the future in rosy terms. There’s a name for this phenomenon as well. It’s called the “optimism bias.” The optimism bias is not all bad. In fact, it’s described as “a wonderful thing” by Tali Sharot, a cognitive neuroscientist at University College London.
“[I]t’s a good thing because it kind of drives us forward. It gives us motivation. It makes us explore different things,” Sharot says. “It’s related to better health — both physical and mental health — because if you expect positive things, then stress and anxiety are reduced.”
Sharot believes the optimism bias is rooted in neuroscience. Her experiments have repeatedly shown that the brain tends to process positive information about the future more readily than negative information. It’s easy to see how that could feed the planning fallacy.
But Sharot has also found that the optimism bias is flexible; it changes in response to the environment. She and her colleagues run experiments in which they ask different kinds of people — firefighters, for instance — to assess the likelihood of bad things happening to them: getting divorced, or being in a car crash, or getting diagnosed with cancer. These are basically wild guesses.
Then Sharot and her colleagues give the firefighters information about the statistical likelihood of these events actually happening and have them guess again. But there was another twist: The firefighters were also asked the question in two different environments — on days when they’d been fighting fires and on days when they hadn’t.
“What we found was that when the firefighters were under stress, they learned more from this negative information,” Sharot explains. “The more stressed they were, the more anxious they were, the more likely they were to take in any kind of negative information that we gave them, whether it’s about cancer or a divorce or being in a car accident.”
Based on these results, Sharot argues that human optimism is both adaptive and mutable. So the optimism bias may be a sort of evolutionary insurance policy against hopelessness and depression.
But still, wouldn’t it be nice to also figure out how to get projects done on time and on budget? Katherine Milkman, a professor at the University of Pennsylvania, has been thinking about this for years. Milkman’s PhD is in computer science and business, but she studied operations research as an undergrad.
To that end, Milkman has spent a lot of time examining the planning fallacy. In her research, Milkman has found that when groups work together on a project, a number of factors collude to produce the planning fallacy. One of those factors is overconfidence on the part of individual team members. But with large projects, there’s another factor called “coordination neglect.”
“When you staff a bigger team on a project, you focus on all the benefits associated with specialization,” Milkman says. “And what you neglect is to think about how challenging it is to get that work all back together into a single whole. So this engineer now has to talk to that engineer about how to combine their outputs into one integrated system.”
When psychologists Daniel Kahneman and Amos Tversky started theorizing about how to correct for the planning fallacy, they identified what they thought was a key factor. When people estimate how long a project will take, they focus too much on the individual quirks of that project and not enough on how long similar projects took. The latter approach — basing your estimate on the outcomes of comparable past projects — is called “reference-class forecasting.”
To succeed at reference-class forecasting, it’s best, to some degree, to ignore the project you’re currently planning, says Yael Grushka-Cockayne, who teaches project management and decision-making at the University of Virginia.
“Don’t think about it too much… Look back at all the projects you’ve done that are similar to this new project X, and look historically at how well those projects performed in terms of their plan versus their actual,” Grushka-Cockayne explains. “See how accurate you were, and then use that shift or use that uplift to adjust your new project that you’re about to start.”
Grushka-Cockayne has been studying the planning fallacy in governments as well as in private firms. She sees more and more companies improving overall performance. She believes that further improvement lies in a stronger embrace of — no surprise here — data. Grushka-Cockayne argues that tracking historical forecasts and actual outcomes is an important first step in overcoming the planning fallacy.
The trend of tracking and scoring the difference between forecasts and outcomes owes a lot to a man named Bent Flyvbjerg, a professor at Oxford University’s Saïd Business School. Flyvbjerg is an economic geographer who, years ago, became fascinated with infrastructure megaprojects when a project in his native Denmark went terribly wrong.
Flyvbjerg found that such big projects rarely go as planned: Between 80% and 90% of projects with a budget of $1 billion or more have cost and schedule overruns. And there are a lot of megaprojects like this going on, with a total global budget of $6 trillion to $9 trillion — about 8% of global GDP.
This data led Flyvbjerg to establish what he calls “the iron law of megaprojects: over budget, over time, under benefits, over and over again.” This is a long-standing trend. Flyvbjerg’s data goes back 100 years; he says schedule and cost overruns have been constant throughout this time period.
Why does this happen? The first theory Flyvbjerg embraced is called “strategic misrepresentation,” which is essentially a fancy way of saying that you lie in order to get what you want. Project planners, for example, have told Flyvbjerg and his colleagues that they deliberately misrepresent the business cases for their projects.
“[T]hey wanted their projects to look good on paper to increase their chances of getting funded and getting approval for their projects,” Flyvbjerg explains. “And they said, ‘We do this by underestimating the cost and overestimating the benefits, because that gives us a nice high benefit-cost ratio so that we actually get chosen.’”
Between strategic misrepresentation and the optimism bias, what are you supposed to do if you’re on the commissioning end of a megaproject? Consider what the British government did with its official Green Book, the Treasury’s guidance for appraising public spending. Flyvbjerg worked with the U.K. Treasury and the Department for Transport to develop a methodology for estimating the costs of large projects in the U.K. The methodology is now mandatory; Denmark has also adopted the method.
And what is this mandatory method? Basically, it’s strategic misrepresentation in the opposite direction. The country maintains a database that tracks typical cost and schedule overruns for different types of projects.
“[L]et’s say that, on average, projects go 40% over budget,” Flyvbjerg says. “You’d add 40% to the budget for your planned project. And then you would have a much more accurate budget.” The contractors are incentivized with additional profits if they meet their targets and penalties if they don’t.
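The uplift adjustment Flyvbjerg describes is simple arithmetic. Here is a minimal sketch, with illustrative function names and made-up figures rather than the actual Green Book model:

```python
# A toy version of reference-class forecasting: scale a naive budget
# estimate by the average overrun seen in comparable past projects.
# All names and numbers here are hypothetical, for illustration only.

def reference_class_uplift(past_projects):
    """Average actual-to-planned cost ratio across past projects."""
    overruns = [actual / planned for planned, actual in past_projects]
    return sum(overruns) / len(overruns)

def adjusted_budget(naive_estimate, past_projects):
    """Apply the historical overrun ratio to a new project's estimate."""
    return naive_estimate * reference_class_uplift(past_projects)

# Three hypothetical past projects: (planned cost, actual cost), in $M.
# Each ran 40% over budget, matching Flyvbjerg's example.
history = [(100, 140), (50, 70), (200, 280)]

print(round(adjusted_budget(100, history)))  # → 140
```

In other words, a $100 million naive estimate becomes a $140 million budget once the 40% historical overrun is priced in.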
This system has been in place only since 2004, and big infrastructure projects have long timelines. But a preliminary analysis done by outside researchers has found the projections to be reasonably accurate and the cost overruns to be reasonably small — about 7% from the planning stages of a transportation project to completion. All of which suggests that pricing in the optimism bias and using reference-class forecasting are truly useful tools to fight the planning fallacy.
Flyvbjerg’s method of estimating costs came too late for the Second Avenue Subway project, which was finally completed on the last day of 2016. To date, the project has cost $4.5 billion, making it, per mile, one of the most expensive mass transit projects in history. It also came in about $700 million over budget, and all that money and time went into building just two miles of tunnel and three new stations — not the 8.5 miles and 15 stations in the original plan. Those are still to come.
Stephen J. Dubner is co-author of the Freakonomics books and host of Freakonomics Radio.