Behavioral Economics and Corporate Strategy

Dan Lovallo and Olivier Sibony are professors of strategy who are no strangers to the corporate world. Both study executive decision making, work as consultants, and regularly write for McKinsey Quarterly. Lovallo is a longtime collaborator of Daniel Kahneman, winner of the Nobel Memorial Prize in Economic Sciences. The two recently made a very interesting observation; in this post we’ll attempt to summarize and elaborate on some of their recent arguments and findings.

First, about that interesting observation. Behavioral economics is the application of cognitive psychology to economic decision making. Lovallo and Sibony note that its value is now widely recognized, except, it seems, among corporate strategists. In finance or marketing, they observe, applying behavioral economics means spotlighting other people’s biases. In corporate strategy, however, it requires us to look in the mirror. A lot of executives might not want to.

Lovallo shares the story of a Hollywood executive bluntly telling him yes, he gets it, this could improve decisions over time, but people are only in this role for a few years and he really doesn’t want data suggesting he’s made bad decisions. In short, executives want their company to be successful, but they also want to be perceived as successful executives. Sometimes these things don’t exactly line up. Resolving this tension between process and reputation requires us to change how we think about decisions. We would need to stop focusing on decision makers and their perceived prestige, and instead focus on the process itself. This won’t be a popular move in cultures that love focusing on personalities.

Ignoring this leaves us in the trap many corporate strategists are stuck in: they still assume that good analysis makes for good decisions (Lovallo & Sibony, 2010a). It doesn’t. Whether explicit or not, every org has a decision process, and a process cannot offset biases it is not designed to account for. As Lovallo and Sibony highlight, a good process will weed out poor analysis, but good analysis does nothing to expose a bad process.

To raise awareness, we need to start having collective conversations about bias: which biases are likely prevalent in our culture and among our leadership? Lovallo and Sibony share the ones they’ve found to be the most prevalent and pervasive, grouped into five categories: saliency- or pattern-recognition biases, action-oriented biases, stability biases, interest biases, and social biases. Here is a quick look at each.

Saliency-based biases

Executives often base decisions on prior experience. What stands out to them, however, is often misleading. Closely related to confirmation bias (or the “positive test strategy”) is the fallacy of “narratizing”: we tend to overweight data when it is arranged in a story that seemingly connects the dots, that seems to make everything gel, even if the narrative we’ve concocted fails to capture the most important patterns at play in the decision environment. As Tyrion said in last night’s Game of Thrones finale, nothing is more powerful than a good story.

This may be a problem whenever funding decisions are based on advocacy. Those who prove the most persuasive may not have the best ideas. Lovallo and Sibony call this the “champion bias.” To fight it, they advocate manipulating the “angles of vision.” What alternative hypotheses might explain the same set of facts? What are multiple, alternative ways to interpret the same situation? What pattern recognition has been triggered in the group? What experiences are influencing those present?

These should be explicitly discussed, brought to light, and explored. Consider the pros and cons of each story shared. Leverage research to increase the angles of vision. Make field visits, spend time with customers, and explore techniques that change how meetings are typically run. Explore alternative frames, experiment with assumption reversals, and always seek to surface the narratives people are running in their heads. Offset potentially misleading narratives by purposefully fleshing out explicit counter-narratives.

Action-oriented biases

Executives often stress the need to take action, to speed things up, to become more “efficient.” We typically end up incentivized to focus on output, with no meaningful check on whether that output converts to value. Designers often find themselves in a predicament here. They’re often the sole voice asking how value will be created, what problems will be solved, and why. They’re typically repaid with handwaving and lots of sturm und drang about “slowing things down.” (The message received is crystal clear: stop asking difficult questions when all we have to do is throw more money on the next trash fire.)

I recently saw another interesting example: a company boasting about how many patents it’s sitting on. Take IBM, which was granted a record 9,100 patents in 2018. So what? Compare such an output statement to more meaningful metrics, such as the ratio of R&D spending to new-product sales (over a three- or five-year period), or the ratio of gross margin to new-product sales (Aase, Roth, & Swaminathan, 2018). The number of patents per year tells us nothing about the value being created (or not). It’s a vanity metric.
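To make that contrast concrete, here is a minimal sketch comparing a raw patent count with the conversion ratios just described. The firms and figures are invented purely for illustration, and the ratio definitions follow the prose above rather than any formula taken verbatim from Aase, Roth, and Swaminathan.

```python
# Toy comparison of an output metric (patent count) with conversion-to-value
# ratios. All numbers are invented for illustration only.

def rnd_to_new_product_sales(rnd_spend: float, new_product_sales: float) -> float:
    """Ratio of R&D spending to sales from new products (lower is better)."""
    return rnd_spend / new_product_sales

def margin_to_new_product_sales(gross_margin: float, new_product_sales: float) -> float:
    """Ratio of gross margin to sales from new products (higher is better)."""
    return gross_margin / new_product_sales

# Two hypothetical firms with identical patent counts (figures in $M, three-year totals).
firms = {
    "Firm A": {"patents": 9100, "rnd": 5400, "new_sales": 12000, "margin": 5200},
    "Firm B": {"patents": 9100, "rnd": 5400, "new_sales": 3000, "margin": 900},
}

for name, f in firms.items():
    print(f"{name}: patents={f['patents']}, "
          f"R&D/new-product sales={rnd_to_new_product_sales(f['rnd'], f['new_sales']):.2f}, "
          f"margin/new-product sales={margin_to_new_product_sales(f['margin'], f['new_sales']):.2f}")
```

Two firms with identical patent counts can convert research spending into value very differently; only the ratios surface that difference.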

The need to “take action” is usually coupled with gross overconfidence and a failure to take uncertainty at all seriously. Indeed, in many orgs, exuding brash confidence is key to getting a plan funded in the first place. This is unfortunate. Overconfidence, after all, doesn’t mean I’m right; it just means I’m overconfident. What we’re left with is a tendency toward overly optimistic action and a habit of ignoring actual outcomes.

Techniques for offsetting this bias include premortems, decision trees, and scenario planning. A premortem is basically an upfront postmortem. Ask a team to imagine they’re looking into a crystal ball, “seeing” that the project has failed. For an allotted amount of time, have them individually write down what they “see” in the crystal ball — why did the project fail? Have everyone share one reason the project failed, capturing them on a whiteboard. After three rounds, start capturing ways the team could mitigate the issues discussed. How could these ideas then be used to strengthen the current plan? Klein, Koller, and Lovallo (2019) argue this technique offsets overconfidence more effectively than other risk-analysis methods.

Scenario planning involves creating a “reference set” of similar endeavors, complete with their strategies and outcomes. The aim is to elaborate viewpoints at odds with senior leadership’s, thereby countering optimism bias. This was the approach Pierre Wack famously took at Royal Dutch Shell, applying concepts from futurist Herman Kahn to business strategy. Lovallo likes the example of Colonel Kalev Sepp, who helped reshape policy in Iraq with a reference set of 53 similar counterinsurgencies (which he assembled by himself, in just a few days). Though the set of scenarios must be agreed on without knowing whether they’re the “right” ones, without them decisions will be overly anchored to far fewer upfront narratives.
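As a rough illustration of what such a reference set might look like when written down, here is a minimal sketch: a handful of comparable past endeavors with their strategies and outcomes, tallied into base rates. The cases and fields are entirely hypothetical; the point is simply that an explicit outside view gives the group something to weigh against its own preferred narrative.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ReferenceCase:
    name: str
    strategy: str   # e.g. "acquire", "build in-house", "partner"
    outcome: str    # "success" or "failure"

# A toy reference set; a real one would hold dozens of cases, as Sepp's did.
reference_set = [
    ReferenceCase("Case 01", "acquire", "failure"),
    ReferenceCase("Case 02", "build in-house", "success"),
    ReferenceCase("Case 03", "acquire", "failure"),
    ReferenceCase("Case 04", "partner", "success"),
    ReferenceCase("Case 05", "acquire", "success"),
    ReferenceCase("Case 06", "build in-house", "failure"),
]

# Base rate of success by strategy: a crude counterweight to the in-house plan's optimism.
tally = Counter((c.strategy, c.outcome) for c in reference_set)
for strategy in sorted({c.strategy for c in reference_set}):
    wins = tally[(strategy, "success")]
    total = wins + tally[(strategy, "failure")]
    print(f"{strategy}: {wins}/{total} succeeded")
```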

Stability biases

In a famous study, one group was asked whether Gandhi was younger or older than 9 when he died; another was asked whether he was younger or older than 140 (clearly impossible). Both were then asked how old Gandhi was when he died. The first group’s average answer was 50; the second’s was 67. Even when the anchors provided are obvious nonsense, as they were here, anchoring and adjustment still has a dramatic effect. When the available anchors are not irrelevant, as is the case with last year’s budget, the effect is even stronger. (This is called “endowed anchoring.”)

Say we go through a laborious budgeting process. At the end, what we’ve decided on pretty much matches the numbers we got from the BUs, which in turn match last year’s budget. Lovallo and Sibony argue this is probably due more to anchoring than to good budgeting. Because of this, business leaders often believe their plans change more over time than they really do. As with offsetting narratives, here one must fight anchors with anchors (Koller, Lovallo, & Sibony, 2018). One approach is to use regression to build a model that serves as an alternative anchor. Where the model and last year’s numbers are close, the decision is easy; where they diverge sharply, those areas deserve a longer conversation.
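What might fighting anchors with anchors look like in practice? Below is a minimal sketch, assuming each business unit has a couple of observable drivers (prior-year revenue and expected market growth, both invented here). This is not the authors’ model; it simply fits a least-squares relationship across units and flags the units whose allocation diverges most from that cross-unit pattern, i.e., the areas deserving the longer conversation.

```python
import numpy as np

# Hypothetical drivers per business unit: [prior-year revenue ($M), expected market growth (%)].
drivers = np.array([
    [120.0,  8.0],
    [340.0,  2.0],
    [ 60.0, 15.0],
    [210.0,  4.0],
    [ 90.0, 11.0],
    [400.0,  1.0],
    [150.0,  6.0],
    [ 75.0,  9.0],
])
last_year_budget = np.array([30.0, 80.0, 12.0, 50.0, 18.0, 95.0, 34.0, 25.0])  # $M

# Fit budget ~ drivers across all units; the fitted values act as the alternative anchor
# ("what a unit with these characteristics typically gets").
X = np.column_stack([np.ones(len(drivers)), drivers])
coef, *_ = np.linalg.lstsq(X, last_year_budget, rcond=None)
model_anchor = X @ coef

# Large gaps between the two anchors mark the allocations worth debating.
for i, (model, actual) in enumerate(zip(model_anchor, last_year_budget), start=1):
    gap = model - actual
    note = "discuss" if abs(gap) > 5 else "ok"
    print(f"BU{i}: last year={actual:6.1f}, model anchor={model:6.1f}, gap={gap:+6.1f}  [{note}]")
```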

Or take the example of “zero-based budgeting.” Have a group of executives individually look at a set of opportunities for the coming year. For half the execs, that’s all you show them. The other half can also see how resources were allocated across units the previous year. How different are the decisions between the two groups? Lovallo argues that among low performers, allocations remain 99% the same as the prior year.

Another stability bias, loss aversion, is when we weight a loss more heavily than a gain of the same amount. An example is given in Lovallo and Sibony (2006): an executive decided not to recommend an investment that had a 50–50 chance of either losing $2m or returning $10m, because he was worried about the damage to his reputation if it failed. To the extent that he was wise to worry, his organization is guilty of omission bias. The bet, after all, is “worth” $4m: 0.5 × (−$2m) + 0.5 × $10m = $4m. The organization should be seeking out as many such bets as possible.
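Here is that arithmetic as a quick sketch, along with a small simulation (my own illustration, not from the article) of why an organization should want many such bets: any single bet is risky, but the average outcome across a portfolio of independent bets converges on the +$4m expectation.

```python
import random

p_loss, loss, gain = 0.5, -2.0, 10.0   # the 50-50 bet from the example, in $M
expected_value = p_loss * loss + (1 - p_loss) * gain
print(f"Expected value of one bet: {expected_value:.1f} ($M)")   # 0.5*(-2) + 0.5*10 = 4

# Simulate portfolios of independent bets: the larger the portfolio, the closer
# the average result per bet gets to the +$4M expectation.
random.seed(0)
for n_bets in (1, 10, 100, 1000):
    total = sum(loss if random.random() < p_loss else gain for _ in range(n_bets))
    print(f"{n_bets:4d} bets: average outcome per bet = {total / n_bets:.2f} ($M)")
```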

Loss aversion often combines with the fallacy of “honoring sunk costs.” This is when we “throw good money after bad,” basing investment decisions partly on the irretrievable costs of prior decisions, which should be treated as informationally irrelevant. Another variant is failing to do the right thing because of work already done. Consider project teams that don’t want to bother users and stakeholders “until they actually have something to show them.” As Erika Hall points out, this usually means waiting until there is enough sunk cost that the team isn’t going to change direction much, regardless of what the feedback is.

Stability biases cause leaders to cling to prior investments that should be let go, keeping alive projects that should be killed. Every org is burdened by such “zombie projects,” as Janice Fraser calls them: cash-hungry, zero-value projects that amble along like the living dead. One approach to countering them is to shift the burden of proof. Instead of regularly asking which projects should be killed, look at each live project and ask why it should continue. Roadmaps can be made “contingent” by placing forced decision forks throughout, requiring that the plan be adjusted based on the actual outcomes obtained along the way (Courtney, Koller, & Lovallo, 2019).
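As a sketch of what a forced decision fork could look like, here is one hypothetical representation: each checkpoint carries a continuation criterion agreed up front, and the default is stop-or-rescope rather than automatic continuation. The metrics, thresholds, and field names are my own illustrative inventions, not anything prescribed by Courtney, Koller, and Lovallo.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DecisionFork:
    name: str
    continue_if: Callable[[Dict[str, float]], bool]   # criterion agreed up front

# A toy contingent roadmap with two forks; metrics and thresholds are invented.
roadmap = [
    DecisionFork("Q1 pilot", lambda m: m["activation_rate"] >= 0.20),
    DecisionFork("Q2 rollout", lambda m: m["retention_30d"] >= 0.35),
]

def review(fork: DecisionFork, observed: Dict[str, float]) -> str:
    # The burden of proof sits with continuation: no passing evidence, no next phase.
    return "continue" if fork.continue_if(observed) else "stop / rescope"

print(review(roadmap[0], {"activation_rate": 0.12}))   # -> stop / rescope
print(review(roadmap[1], {"retention_30d": 0.41}))     # -> continue
```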

The default position should not be that projects simply continue. Weeds should continually be separated from fruitful seeds; pruning should not be a dramatic one-off. Big, rare decisions that don’t follow a smart decision process are likely themselves rife with cognitive bias. And if the people displaced by such pruning are not then placed on projects of higher value, if they instead get “redeployed” onto projects of comparable value, then what was the point?

Perhaps the real challenge here is to get better at such regular pruning without losing talent. Pruning should not equal layoffs; it should be a regular, ongoing, and healthy practice. It is a mistake to devalue doers by treating them as interchangeable cogs in a machine. When we outsource doers, we mistakenly assume their value is confined to a narrow set of technical skills, ignoring all the value embedded in their rich knowledge of context, process, and culture.

Interest and social biases

Different parties will have different and often competing interests. Sometimes the individual interests of various orgs or BUs are not in the best interest of the company, involving issues Lovallo and Sibony call “inappropriate attachments” and “misaligned incentives.” Executives should seek to shut down practices that benefit individuals at the expense of smart decision process. An example is when people schedule one-on-ones prior to the larger decision meeting to gain buy-in beforehand. The debate so needed in the actual meeting has been sabotaged.

As Komisar (2010) advises, executives should “balance out biases” by gathering a diverse group of people with conflicting perspectives and letting them debate. Discuss an opportunity by having people individually list the pluses and minuses without writing down conclusions. If vigorous debate cannot happen, if the environment is not psychologically safe, there will be a disagreement deficit and groupthink will be high.

If everyone is scared of challenging the Highest-Paid Person’s Opinion (the “HiPPO”), then groupthink is the natural outcome. (By the way, for those who find the acronym “HiPPO” offensive, Lovallo and Sibony offer an alternative: “Sunflower management.” This refers to our inclination to try and match the perceived views of executives, whether expressed or not, similar to how sunflowers always turn to face the sun.)

Conclusion

Corporate cultures sometimes have cognitive biases baked into them, which only amplifies their effects. An easy way to start countering this is to hold conversations about which biases might be at play in a given process. Only then can we start incorporating explicit counter-bias techniques. If we don’t, and instead leave it up to our instincts to detect when bias may be influencing decisions, then we’re leaving it up to our instincts to tell us when our instincts need checking. Not a smart move.

Encourage people to share the narratives and experiences triggered by the discussion, surfacing what stories may be influencing their decision making. Share raw data so others can try to detect alternative patterns. If a meeting will be full of people with one view, populate it with people who hold an opposing view. Push back against action-oriented bias and the sentiment that “we just need to make a decision!” However well-intended that sentiment may be, we do not in fact need to make a thoughtless, suboptimal decision right now; often decisions are best made at the last responsible moment.

As Lovallo and Sibony (2010b) point out, it’s also important to start differentiating between types of meetings. Meetings called to make a decision should not be run like meetings about implementing decisions. The former should highlight uncertainty, encourage black hatting, and require dissent. This would be non-value-adding in the latter.

And, finally, whatever criteria are used to evaluate a decision should be locked down ahead of time. This makes it harder for execs to change the terms to favor particular interests.

In summary, the focus should be on the intelligence of the process itself, and this means no longer focusing on individual prestige or reputation.

References

Aase, G., Roth, E., & Swaminathan, S. (2018). Taking the measure of innovation. McKinsey Quarterly. Retrieved on May 14, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/taking-the-measure-of-innovation.

Courtney, H., Koller, T., & Lovallo, D. (2019). Bias busters: Up-front contingency planning. McKinsey Quarterly. Retrieved on May 14, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/bias-busters-up-front-contingency-planning.

Klein, G., Koller, T., & Lovallo, D. (2019). Premortem: Being smart at the start. McKinsey Quarterly. Retrieved on May 14, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/bias-busters-premortems-being-smart-at-the-start.

Koller, T., Lovallo, D., & Sibony, O. (2018). Bias busters: Being objective about budgets. McKinsey Quarterly. Retrieved on May 14, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/bias-busters-being-objective-about-budgets.

Komisar, R. (2010). Kleiner Perkins’ Randy Komisar: ‘Balance out biases.’ McKinsey Quarterly. Retrieved on May 6, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/how-we-do-it-three-executives-reflect-on-strategic-decision-making.

Lovallo, D. & Sibony, O. (2006). Distortions and deceptions in strategic decisions. McKinsey Quarterly. Retrieved on May 14, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/distortions-and-deceptions-in-strategic-decisions.

Lovallo, D. & Sibony, O. (2010a). The case for behavioral strategy. McKinsey Quarterly. Retrieved on May 6, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-case-for-behavioral-strategy.

Lovallo, D. & Sibony, O. (2010b). Taking the bias out of meetings. McKinsey Quarterly. Retrieved on May 6, 2019 from: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/taking-the-bias-out-of-meetings.