Avoid Expensive Mistakes

Or, know your base rate, know your odds

There are several biases that lead us humans to make bad decisions. The sunk cost fallacy: thinking that having dug yourself into a hole means you may as well keep digging. Groupthink: rationality by majority. Hyperbolic discounting: eating the marshmallow now (you should wait for the researcher to bring the second marshmallow and then eat two). The fundamental attribution error: assuming something went wrong because someone is a bad person.

These biases lead us to make expensive mistakes. The mistake itself can be costly. More insidiously, we’re likely to keep making the same mistakes. Add up the avoidable ones over a lifetime and you are facing a tidy sum, even before accounting for compound interest. Decisions tend to have higher stakes as one advances in life and work, raising the cost of not learning over time.

One nifty tool to avoid many of these biases is to know your base rates. Thinking back on six years of economics education, this is probably my takeaway: in the long run, on average, and all other things being equal, you are a statistic.

You are a statistic.

I am really sorry for how that sounds. They do call it the Dismal Science, after all. It’s very handy though, once you get over the stark disillusionment of not being a special snowflake (or maybe just switch to believing we’re all special in ways well predicted by Bayesian models; it works for me).

Step 1: Know your base rate.

What is a base rate? The underlying probability that The Event will happen. The Event is whatever you’re trying to decide about. The product will ship by Q4. The stock will rise by more than the market. The avalanche will avalanche.

This base probability starts off with your data set. What is the history you have, with this team, of shipping products like that on time? What does the historical stock market data say about individual stocks beating the index? How often do avalanches happen?

Given how often this event has occurred in your history, how often should you expect it to occur going forward? That answer is the base rate.

Be careful with small data sets. With only a handful of observations, be conscious that you likely have no idea what you’re doing: you simply haven’t seen enough cases to confidently extrapolate from the ones you have seen to the next one.
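To make this concrete, here is a minimal sketch of turning a shipping history into a base rate. The project outcomes are invented for illustration, and the Laplace smoothing at the end is just one common way to hedge a small sample, not something the argument depends on.

```python
# Hypothetical shipping history: did each comparable past project ship on time?
history = [True, False, True, False, False, True, False, True]  # invented data

on_time = sum(history)
total = len(history)

# The base rate is simply the historical frequency of the event.
base_rate = on_time / total
print(f"Base rate of shipping on time: {on_time}/{total} = {base_rate:.0%}")

# With this few observations the estimate is shaky. Laplace smoothing
# (one imaginary success plus one imaginary failure) pulls a small-sample
# estimate toward 50%, which is a cheap way to stay humble about it.
smoothed = (on_time + 1) / (total + 2)
print(f"Smoothed base rate: {smoothed:.0%}")
```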

Step 2: Adjust based on relevant information.

Easier said than done, but this is essentially what Bayesian modelling is. Take the base rate and tweak it based on the specifics of this case. Is it extra warm and melty? Increase the risk of avalanche. Is the product bigger in scope, or the team less tested than usual? Increase the probability of shipping late. Is the stock one you might know less about than the average Wall Street analyst? Decrease your estimate that it’ll beat the market.

Note that the adjustments above all lean pessimistic. We humans tend to be optimistic when evaluating our own skill. Try to compensate for your own opinion of your situation: most people think they’re better than average.
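One simple way to do the tweaking is Bayes’ rule in odds form: convert the base rate to odds, multiply by a likelihood ratio for each piece of evidence, and convert back to a probability. The sketch below assumes the pieces of evidence are roughly independent, and every number in it is invented for illustration.

```python
def bayes_update(prob: float, likelihood_ratio: float) -> float:
    """Update a probability with one piece of evidence, using the odds form of Bayes' rule.

    likelihood_ratio = P(evidence | event) / P(evidence | no event)
    """
    prior_odds = prob / (1 - prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Base rate: say 30% of comparable projects shipped on time (invented number).
p_on_time = 0.30

# Bigger scope than usual: evidence we'd expect to see more often on late
# projects, so it counts against shipping on time (ratio < 1, also invented).
p_on_time = bayes_update(p_on_time, likelihood_ratio=0.5)

# Less tested team than usual: another pessimistic adjustment.
p_on_time = bayes_update(p_on_time, likelihood_ratio=0.7)

print(f"Adjusted probability of shipping on time: {p_on_time:.0%}")  # ~13%
```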

Step 3: Act based on this information.

The common case is more likely to happen than the exceptional one. Acting rationally means acting as if you believe that’s true.
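Acting on the estimate usually means comparing expected costs. A hypothetical sketch, carrying over the invented 13% chance of shipping on time from above: weigh the cost of promising the Q4 date against the cost of cutting scope now, and pick the cheaper expectation rather than the outcome you are hoping for.

```python
# All numbers are invented for illustration.
p_late = 0.87  # adjusted probability the project slips past Q4

# Option A: promise the Q4 date anyway and eat the penalty if we slip.
penalty_if_late = 500_000
expected_cost_promise = p_late * penalty_if_late

# Option B: pay now to cut scope and make the date safe.
cost_of_cutting_scope = 150_000

choice = "cut scope" if cost_of_cutting_scope < expected_cost_promise else "promise Q4"
print(f"Expected cost of promising Q4: ${expected_cost_promise:,.0f}")
print(f"Cost of cutting scope now:     ${cost_of_cutting_scope:,.0f}")
print(f"The common case wins: {choice}")
```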