Decision making: risk, uncertainty or a black swan?

Mattia Ferrini
Mar 28, 2022


Utagawa Kokunimasa (1874–1944), 1896

I’ve seen plenty of online resources that confuse risk and uncertainty, and that use the concept of the black swan inappropriately. A clear distinction between the three ideas, and their appropriate use in the decision making process, dramatically affects the quality of its outcomes.

Setting up the right framework

It is estimated that an adult makes on average 35,000 decisions per day. This is not surprising! Think about all the decisions we have to make in a day, in our private as well as our professional life: what to wear on a special occasion; whether to reach our holiday destination by car or by train; whether or not to accept a job offer; whether your company should expand its manufacturing capacity by building a new production plant.

Some decisions matter more than others: a career change as opposed to the color of a tie; the decision to move and settle in a new city as opposed to the choice of the restaurant where you will dine on Friday; a large acquisition as opposed to the venue of a company retreat. It is definitely worth focusing our attention on the most impactful decisions. We might argue that our focus is often misplaced, and that sometimes big decisions are taken without the necessary awareness [2]. “Can it be that I allowed Napoleon to get as far as Moscow? When was it decided?” wonders General Kutuzov in War and Peace. Let’s assume nevertheless that we can identify critical decisions, and that we can dedicate enough time to their analysis.

Once we have identified an impactful decision, it is time to ponder what to do. What makes decisions hard? Not knowing what will happen certainly makes things harder. But what does “not knowing what will happen” really mean?

There are known knowns, known unknowns and unknown unknowns (K3U3). Often attributed to US Secretary of Defense Donald Rumsfeld, this framework taps into the work of Joseph Luft and Harrington Ingham, the two psychologists who coined the idea of unknown unknowns in the 1950s. In Donald Rumsfeld’s own words: “There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.” [Source]

The K3U3 framework is extremely powerful and can be applied in any context, from warfare to flight safety. As it stands, however, the framework still fails to address a key methodological question: what does “knowing” mean?

Decision making and games of chance

The methodological tools of today’s decision making lay their foundations in the work done during the Renaissance by mathematicians such as Girolamo Cardano (1501–1576). Cardano, for the first time in history, recognized the relation between the probability of winning and the proportion of favorable outcomes among all the combinations of events that could occur (the sample space). Cardano was a physician, an amateur mathematician and a gambler. Decision sciences have their roots in gambling and games of chance. Why games of chance? Gaining an edge in a game of chance is a rather straightforward way to make a quick buck. On top of that, it is easy to gather data. For example, there’s a long history of statisticians duly and patiently collecting coin tossing data: Georges-Louis Leclerc (1707–1788) tossed a coin 4,040 times. Karl Pearson (1857–1936) went to the great length of tossing a coin 24,000 times and duly noting down the outcomes [3].
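Cardano’s insight can be sketched in a few lines of Python: the probability of winning is simply the proportion of the sample space that is favorable, under the assumption that every outcome is equally likely. The two-dice bet below is my own illustrative example, not one of Cardano’s.

```python
from fractions import Fraction
from itertools import product

def win_probability(sample_space, is_favorable):
    """Cardano's rule: favorable outcomes / all outcomes,
    assuming every outcome in the sample space is equally likely."""
    favorable = sum(1 for outcome in sample_space if is_favorable(outcome))
    return Fraction(favorable, len(sample_space))

# Example: roll two fair dice and bet that the total is at least 10.
two_dice = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes
p = win_probability(two_dice, lambda roll: sum(roll) >= 10)
print(p)  # 1/6 -- six favorable outcomes out of 36
```

Enumerating the sample space only works for small, fully known games, which is exactly why games of chance were such a convenient laboratory.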

By learning from gambling and games of chance, decision scientists developed the so-called lottery ticket approach: the optimal decision is the option that maximizes the expected value, that is, the payoffs weighted by the probabilities associated with the option. In other words, if we frame a decision making problem as a lottery ticket problem, “knowing” means three things: knowing the options, their payoffs, and their probabilities. If probabilities cannot be estimated from data, it is still possible to base the decision on the opinion of experts (subjective probabilities, beliefs). If we can at least trust our beliefs, we still have all the ingredients to frame our problem as a lottery ticket. Similarly, we can tackle a decision as a lottery ticket even if we don’t know the exact payoff of each scenario, as long as we can express the payoff as a probability distribution.
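A minimal sketch of the lottery ticket approach might look as follows; the options and the probability/payoff numbers are made up purely for illustration.

```python
def expected_value(lottery):
    """Expected payoff of an option, given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

def best_option(options):
    """Lottery ticket approach: pick the option with the highest expected value."""
    return max(options, key=lambda name: expected_value(options[name]))

# Hypothetical capacity-expansion decision (payoffs in millions):
options = {
    "build plant": [(0.6, 120), (0.4, -80)],  # EV = 72 - 32 = 40
    "license":     [(0.9, 30), (0.1, -5)],    # EV = 27 - 0.5 = 26.5
    "do nothing":  [(1.0, 0)],                # EV = 0
}
print(best_option(options))  # build plant
```

Note that the whole calculation presupposes the three ingredients listed above: every option, every payoff, and every probability must be on the table.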

Decision making under uncertainty

In the 80s, during the Falklands/Malvinas war, RAND Corporation attempted to support the British military forces with decision support tools. RAND quickly ran into a problem: while it was possible to simulate military combat scenarios, it was impossible to estimate their probability of occurrence with confidence. RAND was faced with the need to deal with known unknowns: the impossibility of quantifying uncertainty as a probability distribution. Risk represents situations in which information is available, in the form of probability distributions. RAND was left with an unquantifiable source of uncertainty. Uncertainty represents, in fact, a situation in which no probabilistic information is available to the decision maker [Etn09].

The inability to quantify the probabilities associated with different scenarios is commonly experienced in other domains as well. Take energy policy, where conventional engineering approaches only work for short-term planning. For short time horizons, we can produce a robust enough forecast of demand as well as a reliable estimate of its prediction interval. For strategic, long-term planning decisions, on the other hand, it is impossible to produce any scenario probability: there are many interacting factors that might dramatically affect the retail and industrial demand for electricity, such as the adoption of electric vehicles, or changes in consumer behavior in response to global warming risks.

The impossibility of estimating probabilities from data, or of at least guessing them (subjective probabilities), is bad news. The good news is that there are approaches to decision making, alternative to the lottery ticket approach, that still enable us to make good decisions. We can, for example, just go for the strategy that gives us the best worst case, as suggested by Abraham Wald [Wal39]. To implement this approach, we don’t need an estimate of the scenario probabilities.
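Wald’s best-worst-case rule (often called the maximin criterion) can be sketched in a few lines. The energy-planning actions, scenarios and payoffs below are hypothetical numbers of my own; the point is only that no scenario probabilities appear anywhere.

```python
def maximin(payoff_table):
    """Wald's criterion: choose the action whose worst-case payoff is best.
    Only the payoff table is needed -- no scenario probabilities."""
    return max(payoff_table, key=lambda action: min(payoff_table[action].values()))

# Hypothetical long-term planning payoffs under three unquantifiable scenarios:
payoffs = {
    "expand grid":  {"low demand": -20, "base": 40, "high demand": 90},
    "status quo":   {"low demand":  10, "base": 15, "high demand": -30},
    "flexible mix": {"low demand":   5, "base": 25, "high demand": 50},
}
print(maximin(payoffs))  # flexible mix: its worst case (5) beats -20 and -30
```

The price of this robustness is conservatism: maximin ignores how good the upside scenarios are, focusing entirely on the floor.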

Black swans

So far, we covered games of chance (known knowns) and decision making under uncertainty (known unknowns). The K3U3 framework helps us realize that, no matter how hard we try, we might overlook sources of uncertainty, and therefore fail to take possible scenarios into account. In other words, there are unknown unknowns. Unknown unknowns are black swans.

The concept of black swans was popularized by Nassim Nicholas Taleb in his 2007 eponymous book [Tal07]. The idea has its origins in the unassailable belief, shared by people in the Old World before the discovery of black swans in Australia, that all swans were white. A belief undeniably confirmed by empirical evidence. Taleb defines a black swan as an event with three characteristics: rarity, extreme impact, and retrospective (though not prospective) predictability. While the first two are rather straightforward, the third, retrospective predictability, is the most often misunderstood. What Taleb means by retrospective predictability is that black swans, such as the rise of Hitler, the Internet, Harry Potter and the Beatles, cannot be foreseen.

We are inclined to believe that black swans can be anticipated. This is because, in retrospect, black swans, such as the fall of the Soviet Union, seem to make sense. Our misplaced belief that black swans can be anticipated, because we can retrospectively explain them, is compounded by what Taleb calls the narrative fallacy. According to Taleb, we all like stories and we have an innate tendency to simplify historical accounts so that they fit a more straightforward narrative. In other words, we are all vulnerable to believing that it is trivial to foresee black swans because their causes seem obvious, manifest and unconcealed.

Following the publication of Taleb’s book, the idea of black swans has become hugely popular; black swan is now a common term in business jargon. However, rare events are often inappropriately classified as black swans. This is the case with COVID-19. Hundreds of articles treat the 2019 pandemic as a black swan event. However, historical accounts of pandemics date back to as early as 430 BC (typhoid fever in Athens) and 165 AD (the Antonine plague). Pandemics have been a recurring threat over the course of history. They are neither new nor extremely rare. As such, we can argue that they are not only retrospectively but also prospectively predictable.

Are we falling into a narrative fallacy? Perhaps. However, the frequency of pandemics hints that we can safely exclude them from the list of black swans. What black swans should concern us today? Unfortunately, we don’t know. And, without knowing what we should expect, it is impossible to design any countermeasure. According to Taleb, our only hope is to deploy a strategy that is antifragile [Tal12]. It is hard to summarize Taleb’s idea of antifragility in a few words, but we can think of it as a combination of resilience (the ability to withstand unfavorable events) with simplicity and agility (the ability to adapt quickly in order to take advantage of favorable events): a strategic approach that allows a company or individual to benefit asymmetrically from favorable as opposed to unfavorable events.

K3U3 approach to decision modelling

The K3U3 approach to decision modelling helps address two issues:

  1. While modelling a decision, we need to make a clear distinction between risks and uncertainties, and then tackle the decision making problem with the right methodological approach.
  2. Whenever we tackle a decision making problem, we might still want to introduce criteria that trade off performance for robustness and antifragility.

This blog post was originally published on my LinkedIn account. Follow me or connect with me if you’re interested in discussing data science and decision science.

References

[Etn09] Etner, Johanna, Meglena Jeleva, and Jean-Marc Tallon. “Decision theory under uncertainty.” (2009).

[Wal39] Wald, A. (1939). Contributions to the theory of statistical estimation and testing hypotheses. The Annals of Mathematical Statistics, 10(4), 299–326.

[Tal07] Taleb, Nassim Nicholas. The black swan: The impact of the highly improbable. Vol. 2. Random house, 2007.

[Tal12] Taleb, Nassim Nicholas. Antifragile: Things that gain from disorder. Vol. 3. Random House, 2012.
