The problem with analyzing policy decisions in hindsight
Why bad things happen to good decisions
If you want a superb example of outcome-biased thinking, consider how people are evaluating the Swedish government’s COVID-19 decisions.
In psychology, outcome bias is the error of evaluating the quality of a decision using information, typically the outcome, that the decision-maker couldn’t have known at the time.
The difference between the general public and decision professionals is that the (outcome-biased) public will judge Sweden based on “how things turn out,” while decision professionals will not. Though we decision scientists may differ in our appraisals of the situation, we will judge Sweden based only on what was known at the time the decisions were made.
Many of you will be surprised to hear this, since outcome bias is a socially-acceptable form of mass irrationality. You might even feel the beginnings of rage, especially if you’ve been brought up to believe that “how things turn out” is exactly the way to evaluate decision* quality.
That is an illusion.
Why bad things happen to good decisions
When life forces you to make decisions under incomplete information, a good decision can have a bad outcome.
For example, you might choose to drive safely (good decision) and still find yourself in a surprise collision with a reckless idiot (bad outcome). Or you might choose to drive like a reckless idiot (bad decision) and get to your destination without a scratch (good outcome).*
Getting a bad outcome does not mean you made a bad decision.
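If you’d like to see that asymmetry in numbers, here’s a minimal simulation sketch in Python. The crash probabilities are made up purely for illustration; the point is only that the choice with better odds still loses sometimes, and the choice with worse odds sometimes wins.

```python
import random

# Hypothetical, made-up probabilities purely for illustration.
P_CRASH_SAFE = 0.001      # chance of a crash on a trip driven safely
P_CRASH_RECKLESS = 0.05   # chance of a crash on a trip driven recklessly

random.seed(0)

def trip(p_crash: float) -> str:
    """Simulate one trip; return the outcome."""
    return "crash" if random.random() < p_crash else "arrived safely"

# A good decision (drive safely) can still produce a bad outcome...
safe_outcomes = [trip(P_CRASH_SAFE) for _ in range(10_000)]
# ...and a bad decision (drive recklessly) can still produce a good outcome.
reckless_outcomes = [trip(P_CRASH_RECKLESS) for _ in range(10_000)]

print("crashes while driving safely:    ", safe_outcomes.count("crash"))
print("crashes while driving recklessly:", reckless_outcomes.count("crash"))
# Judging any single trip by how it turned out tells you almost nothing about
# decision quality; the quality lives in the odds known BEFORE the trip.
```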
What’s the point in evaluating a decision after you’ve observed the outcome? You can’t change the past. The best you can hope for is to learn something that helps you in the future, for example:
- Is there a problem with the way I approach decisions? Should I change my decision process next time?
- Is the person in charge incompetent? Should I replace the decision-maker with someone more skilled?
If those are the questions you care about, great! Decision analysis is the academic field for you… and its first rule is: avoid outcome bias. Don’t let hindsight mess with you! The quality of a decision should be evaluated using only the information available to the decision-maker at the time the decision was made.
If you evaluate the quality of a decision by its outcome, you’ll learn the wrong lessons. Lessons like, “Next time, I should drive like a reckless idiot.” Or perhaps you’ll vote to fire a skilled decision-maker whose good decisions led to a bad outcome, thereby shrinking society’s supply of competent leaders. (I’ve written up a detailed explanation here for those who need convincing.)
Evaluating coronavirus policy
Since the correct way to evaluate decision quality is to look only at the information available to decision-makers at the time they acted, there’s good news for COVID-19 policy buffs: if you want to analyze the quality of major decisions made in March 2020, there’s no need to wait!
You can get started right away. What you need to find out is:
- What did those leaders know at the time they made their decisions?
- What were their objectives?
- Did they do their homework? Did they put in an appropriate amount of effort to gather information?
- How did they reason about their data and assumptions?
In other words, you only need to know about things that happened in the past.
If you wait to see how “things turn out” before evaluating decision quality, you’re essentially punishing leaders for not having a crystal ball.
Here comes the bad news: you probably won’t be able to get your hands on these data. The closest thing available to most of us is information committed to the public record at the time of the decision, which rarely gives you a truthful picture of how the decision was really made.
More bad news
If you’re reading this in 2023, you live in a society that tolerates outcome bias. That means your society does not force its decision-makers to pre-commit to decision strategies (e.g. “If the number of COVID-19 cases per day in our jurisdiction is x, we’ll do …”) or to keep/share records of how they really made some of their most important decisions. (After all, if those records don’t exist, how will you hold anyone accountable?)
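To make “pre-commit to decision strategies” concrete, here’s a minimal Python sketch of what a written-down, date-stamped policy could look like. The thresholds, actions, and field names are hypothetical placeholders of my own invention, not recommendations or any real jurisdiction’s policy.

```python
from datetime import date

# Hypothetical pre-committed policy, recorded BEFORE outcomes are known.
# Thresholds and actions are placeholders for illustration only.
PRECOMMITTED_POLICY = {
    "committed_on": date(2020, 3, 1),
    "rules": [
        # (cases-per-day threshold, action to take once it's crossed)
        (100, "issue public guidance"),
        (1_000, "restrict large gatherings"),
        (10_000, "close schools"),
    ],
}

def planned_action(cases_per_day: int) -> str:
    """Return the action the policy committed to for this case count."""
    action = "no additional measures"
    for threshold, rule_action in PRECOMMITTED_POLICY["rules"]:
        if cases_per_day >= threshold:
            action = rule_action
    return action

# Because the policy is date-stamped and written down, anyone can later check
# whether decisions followed the pre-committed rules, instead of arguing about
# them after the outcomes are in.
print(planned_action(2_500))  # -> "restrict large gatherings"
```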
Once outcomes show up, leaders can do exactly what we see people doing in psychology lab experiments: take credit for good outcomes and blame bad outcomes on bad luck or good scapegoats. (This is called self-serving bias.) Savvy leaders will always find data to back up a made-up story. All seasoned data scientists know this secret: if you can’t torture exploratory data until it confesses, hand it over to a pro so you can watch and learn, kiddo.
“If you torture the data long enough, it will confess.” -Ronald Coase
So, if you’re interested in pondering a policy or studying a strategy, don’t wait until you see how things turn out; start right now! It gets harder to evaluate undocumented decision quality the longer you wait.
Getting a true account from those who didn’t document their decision process becomes less likely with each passing minute.
Don’t forget to add a date next to your scribbles. Otherwise, you’ll fall for hindsight bias (the one where you adjust your memory after learning the answer, saying “I knew it all along” although you didn’t).
Don’t be naïve
Let’s not be naïve enough to suppose that people will tell us more than they have to.
I’m not holding my breath for the day that all important decisions are documented at a standard that makes decision analysis professors cheerful.
“Outcome bias threatens society’s ability to promote and retain competent leaders.”
However, some of you are in a position to influence decision-makers. Perhaps you’re looking to grow your own decision-making skills. Or maybe you’re a leader who has the power to promote or demote subordinate decision-makers. In both cases, you have a strong incentive to insist that decision processes are meticulously documented.
Human memory is a leaky bucket and even folks with the best intentions can’t be trusted to remember how they made some dusty past decision. That’s what pencils and disks are for. To nip hindsight bias in the bud and avoid stubbing your toe on outcome bias, follow this simple rule:
Keep a date-stamped record of the decision process.
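If it helps to see what such a record might contain, here’s a minimal sketch in Python. The schema and field names are my own invention, not a standard; they simply cover what was known at the time, the objectives, and the reasoning.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """A date-stamped snapshot of how a decision was made (hypothetical schema)."""
    decision: str                      # what was chosen
    objectives: list[str]              # what the decision-maker was optimizing for
    information_available: list[str]   # what was known at the time (not after!)
    reasoning: str                     # how the data and assumptions were weighed
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = DecisionRecord(
    decision="Drive the long way around the mountain pass",
    objectives=["arrive by 6pm", "minimize crash risk"],
    information_available=["storm warning issued at 2pm", "pass not yet plowed"],
    reasoning="Expected delay on the long route is small relative to crash risk.",
)
# The timestamp is the whole point: it proves the record predates the outcome,
# so hindsight can't quietly rewrite what was known at decision time.
print(record.recorded_at.isoformat())
```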
What can you learn from outcomes?
Am I saying that outcomes can’t teach you anything? Not quite, but be careful not to learn the wrong lessons.
Don’t get me wrong. I’m not telling you to ignore outcomes entirely, since that would give you an unrealistic view of the present. I’m telling you to use present outcome data for present and future decisions, while avoiding outcome bias, which gives you an unrealistic view of the past and an incorrect appraisal of decision-making skill.
Each new moment is an opportunity for a new decision based on the information available to you. In light of a new outcome, going along with yesterday’s good decision might be today’s bad decision. Outcome data that precedes your new decision is fair game — just be sure you get your tenses right. Never penalize a decision-maker for lacking a crystal ball.
The trick is to learn from the past (and not the future).
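As a toy illustration of getting your tenses right, here’s a sketch that folds outcomes that have already happened into an updated risk estimate and then makes today’s decision with it. All the numbers, including the prior and the risk tolerance, are invented for illustration.

```python
# Hypothetical illustration: outcomes observed SO FAR are fair game for the
# NEXT decision, but they say little about the quality of past decisions.

# Prior belief before any trips (invented pseudo-counts): roughly a 0.1% crash rate.
prior_crashes, prior_safe = 1, 999

# Outcomes observed since then (also invented).
observed_crashes, observed_safe = 4, 96

# Updated estimate of crash risk, using only data from the past.
estimated_risk = (prior_crashes + observed_crashes) / (
    prior_crashes + prior_safe + observed_crashes + observed_safe
)
print(f"updated crash risk estimate: {estimated_risk:.3%}")

# Today's decision uses today's estimate. If the updated risk crosses the
# threshold we care about, the right call may change, even though yesterday's
# call was right given yesterday's information.
RISK_TOLERANCE = 0.002
print("take this route today?", estimated_risk <= RISK_TOLERANCE)
```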
In future articles, I’ll tell you more about how to use outcomes without falling victim to outcome bias, covering some of these topics:
- Outcomes as data for future decisions.
- Iterative decisions versus one-off decisions.
- Inaction, indecision, and negative responsibility.
- Recency bias, availability bias, and learning too much from a single outcome.
- Comparing actual outcomes with expected outcomes and the range of potential outcomes.
- The role of surprise in exposing blind spots.
- Surprising outcomes versus surprising data sources.
- The danger of putting too much weight on recent data.
- Why it’s hard to evaluate the quality of your own decision.
- Collaborative evaluation.
- How to avoid hindsight bias and counterproductive sense-making.
Footnotes
*Bad decisions or badly-made decisions?
Let’s not get tangled up in language. If your idiom makes a distinction between a bad decision and a decision that is made badly, then you’re using those words differently from decision analysts; for us, the terms are synonyms. In that case, what you’d call a “bad decision” is what we refer to as a bad outcome. To make sense of this article, you’ll have to translate from our language to yours. In our lexicon:
- Bad decision = badly-made decision = bad process at the time of choosing = display of shoddy decision-making skills
- Bad outcome = bad result after decision was executed = unpleasant real-world consequences of the decision