# How Should You Deal With Randomness?

Apr 14

You’re visiting the mighty city of Saint Petersburg, and an old woman offers to let you play “a game worth playing”. She explains: “I will flip a coin (mine or yours, your choice). If it lands heads up, you win two rubles and the game continues. If it lands tails up, you lose and the game is over. The coin is tossed again and again, and at each toss I’ll double the amount of the previous round, but at the first tails, the game ends.”

## The Question Is: How Much Would You Pay to Play This Game?

The standard framework used in that kind of situation, where you need to anticipate the value of a given investment, is expected value. The expected value is calculated by multiplying each possible outcome (payout) by the likelihood (probability) that it will occur, then summing all of those products.
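The definition above can be sketched in a few lines. The die example is mine, not from the article — just the simplest case where the sum is easy to check by hand.

```python
# Expected value: sum over all outcomes of probability × payout.
def expected_value(outcomes):
    """outcomes: iterable of (probability, payout) pairs."""
    return sum(p * x for p, x in outcomes)

# Hypothetical illustration: a fair six-sided die that pays its face
# value has an expected value of (1 + 2 + ... + 6) / 6 = 3.5.
die = [(1/6, face) for face in range(1, 7)]
print(expected_value(die))
```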

What is the expected value of that game?

First, let’s work out the probabilities. The probability that a fair coin lands heads up is 1/2. Each coin toss is an independent event, so we multiply probabilities (a tree diagram can help). The probability of the first heads is 1/2. The probability of two heads in a row is (1/2)² = 1/4. The probability of n heads in a row is therefore (1/2)^n.

Second, let’s figure out the payouts. If you get heads in the first round, you win two rubles for that round. If you get heads in the second round, you win four rubles for that round. The payout for round n is therefore 2^n.

The expected value is then straightforward: it is the sum, for every n from 1 to infinity, of (1/2)^n × 2^n. Each term equals 1, so the sum is an infinite series of 1s, and the expected value is infinite.
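The divergence is easy to verify numerically: truncating the series at N rounds gives exactly N, since every term is 1. A quick Monte Carlo sketch (my own, under the rules as described above) also shows why any finite sample mean is misleading here.

```python
import random

# Truncated expected value: each term (1/2)^n × 2^n equals 1,
# so summing the first N rounds gives exactly N — the series diverges.
def truncated_ev(rounds):
    return sum((1/2)**n * 2**n for n in range(1, rounds + 1))

print(truncated_ev(10), truncated_ev(1000))  # 10.0 1000.0

# One game under the article's rules: every heads pays the current
# stake, which then doubles; the first tails ends the game.
def play_once(rng):
    total, stake = 0, 2
    while rng.random() < 0.5:
        total += stake
        stake *= 2
    return total

rng = random.Random(0)
payouts = [play_once(rng) for _ in range(100_000)]
# Any finite sample mean stays finite, but it keeps creeping upward as
# the sample grows: ever-rarer, ever-larger payouts keep appearing.
print(sum(payouts) / len(payouts))
```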

So if you go by the expected value of this game, you should jump at the chance, no matter the cost to play. Yet no rational individual would accept this, for a simple reason: the expected value is a terrible tool that we fool ourselves with in our daily lives.

## What Is This Mistake That We All Make?

I recently read an amazing book by Sylvestre Frezal called “Quand les statistiques minent la finance et la société” (which means “When statistics undermine finance and society”). He argues that statistics in a situation of uncertainty are a bit like alcohol: they give courage, help you make a decision, and provide an excuse if things go wrong. But like alcohol, they degrade the soundness of the analysis and the quality of the decision.

More precisely, the author shows that tools for assessing risk and anticipating payoff designed for specific conditions are widely misused, and thus often lead us to suboptimal outcomes.

He distinguishes two very different states of uncertainty:

1. a situation of randomness, where the event will occur only once and several outcomes are possible. You can think of playing Russian roulette.

2. a situation of plurality, where the same kind of event repeats many times, so outcomes average out. You can think of an insurer covering thousands of similar policies.

The thing is, for a given phenomenon, you can be in one situation or the other depending on your perspective.

Let’s take an example to illustrate this point: the perspectives of an oncologist and a patient with cancer are entirely different.

Let’s assume that the survival rate for a specific cancer is 95/100. Let’s also assume there are two treatments: the first is standard, has low toxicity for patients (meaning their health won’t be damaged by the treatment itself), and leads to a 95/100 survival rate. The second leads to a 99/100 survival rate but is highly toxic.

For the oncologist, the preferred prescription will undoubtedly be the first treatment, because he is in a situation of repetition under uncertainty: across many patients, the first treatment cures 95% of them without causing damage.

For the patient, the story is totally different. Since he is in a situation of randomness, he may be willing to accept a degradation of his health for the additional 4 percentage points of chance of not dying. He may not care about the average but focus on the consequences of the undesired scenarios (this is not necessarily “irrationality” or “bias”, labels many economists like to apply when behaviour doesn’t match their model of reality).
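The two perspectives can be made concrete with the article’s numbers (the framing and labels below are mine):

```python
# Survival rates from the article's example: 95/100 for the standard
# treatment, 99/100 for the aggressive, highly toxic one.
p_survive = {"standard (low toxicity)": 0.95, "aggressive (high toxicity)": 0.99}

# Oncologist's perspective (plurality): averages over many patients.
patients = 1000
for name, p in p_survive.items():
    print(f"{name}: about {round(patients * p)} survivors per {patients} patients")

# Patient's perspective (randomness): the one-off chance of the worst scenario.
for name, p in p_survive.items():
    print(f"{name}: {round((1 - p) * 100)}% chance of dying")

# For a single patient, the toxic treatment cuts the one-shot death risk
# from 5% to 1%; accepting its side effects is not obviously irrational.
```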

In fact, whenever someone is in a situation of plurality, they can use expected value and make good decisions. Under randomness, it would be foolish to aim at the average.

Let’s take another example: assume a house costs €1m and that 1 in 1,000 houses burns down every year. It works perfectly well for everybody to pay €1,000 per year in insurance (a bit more, to keep the insurer profitable) so that the company can compensate the victims. Yet it would be foolish for an owner to simply set aside €1,000 per year in a dedicated bank account on the grounds that his expected cost is €1,000 per year. If the house burns, he will never have enough to rebuild it; if it doesn’t, he will have frozen €1,000 per year for nothing.
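A small simulation makes the asymmetry vivid. The numbers come from the example above; the simulation itself, the 30-year horizon, and the 100,000-house pool are my own illustrative choices.

```python
import random

# €1m house, 1-in-1,000 yearly fire risk, €1,000 premium (the pure expected cost).
HOUSE_VALUE, P_FIRE, PREMIUM = 1_000_000, 1 / 1000, 1_000

rng = random.Random(42)

# The lone owner "self-insuring" by saving his expected cost each year:
# if the fire actually happens, his savings are nowhere near enough.
savings = 0
for year in range(1, 31):
    savings += PREMIUM
    if rng.random() < P_FIRE:
        print(f"fire in year {year}: saved €{savings}, needed €{HOUSE_VALUE}")
        break
else:
    print(f"no fire in 30 years: €{savings} frozen for nothing")

# The insurer pooling 100,000 such houses: realized losses land close to
# the expectation, so premiums cover payouts (plus a margin in practice).
houses = 100_000
fires = sum(rng.random() < P_FIRE for _ in range(houses))
print(f"premiums: €{houses * PREMIUM:,}; payouts: €{fires * HOUSE_VALUE:,}")
```

The pooled total is predictable precisely because the insurer is in a situation of plurality; the owner, facing the event once, never is.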

As a direct consequence, you should be very cautious in a situation of randomness and avoid using blindly tools designed for plurality.

## The Fallacy of Data Objectivity

What is particularly interesting is that people relying on statistics are sometimes aware that they do not work properly for their situation. But probability has a big advantage: it feels objective and serious.

But statistics are far from objective. Let’s borrow once again from Sylvestre’s book. In insurance, people increasingly demand to pay a price tailored to their personal risk. They ask for their individual risk, objectively measured. But when you dig deeper into that question, it doesn’t make sense, because risk cannot be objectively revealed.

Try to figure out what the price of a ride in a shared car should be. What base rate should you take? The expected cost of a driver for the insurer? We can do better. The expected cost for an average 45-year-old male driver who hasn’t lost any points on his driving licence? We can do better still. You can factor in the time (how crowded are the roads? what will the weather be like?) and the location (how dangerous is it? are the roads in good shape?). You could even include data on the driver: when was his last holiday? Is his schedule very busy? Does he have stressful events coming up, or did he have some recently? The list of parameters is endless, and the choice of parameters has a strong impact on the expected cost.

Access to certain data depends on regulation, technological maturity, and personal and collective usages and practices (do we have the habit of looking at particular parameters that could change the result?). Statistics will differ depending on whether a large chunk of the population gives access to their data or not. Paradoxically, your specific expected return or cost even depends on the behaviour of your peers (otherwise there wouldn’t be any statistics at all). So there is no such thing as an objective estimate. Worse: there is no notion of progress toward an ever more accurate estimate, since new parameters can push the result in either direction (you may discover that his eating habits lower his risk while his practice of sport increases it, so there is no trend, nor any hope of convergence toward the “right” result).
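The non-convergence point can be made concrete with a toy dataset (entirely invented): conditioning on one extra parameter lowers the estimate, and conditioning on a further one raises it right back.

```python
# Hypothetical records for drivers: (eats_well, does_sport, observed cost).
# The point: the "individual" expected cost depends entirely on which
# parameters we choose to condition on, and each new parameter can move
# the estimate in either direction.
records = [
    (True,  True,  120), (True,  True,  140),
    (True,  False,  60), (True,  False,  80),
    (False, True,  200), (False, True,  220),
    (False, False, 100), (False, False, 120),
]

def mean(xs):
    return sum(xs) / len(xs)

overall    = mean([c for _, _, c in records])
given_diet = mean([c for e, _, c in records if e])        # condition on diet
given_both = mean([c for e, s, c in records if e and s])  # add sport

print(overall)     # 130.0
print(given_diet)  # 100.0 — the diet parameter lowers the estimate…
print(given_both)  # 130.0 — …then the sport parameter raises it back up
```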

## Consequences for Assessment, Responsibility & Decision-Making

As we’ve just shown, data are neither objective nor a token of truth. Yet people still largely rely on data to justify their decisions. They say a decision was necessary given the parameters. This assertion protects them, since it removes their responsibility: decisions become mechanical given the right data and the right model of reality. And the more precise the model, the more accurate they claim their decision is. But precision does not guarantee accuracy: you can have a very complex mathematical model that yields false predictions, because its assumptions or functions simply don’t fit how reality works.

In a situation of randomness, however, the decision-maker should not hide behind tools such as expected value or similar statistical approaches. That’s not to say that looking at base rates can’t help. It’s just that the decision-maker must make a subjective judgement about how probable or improbable future events are. The real question is not what the average is, or what the exact probability of occurrence is; the question is whether you believe in that occurrence or not. The decision-maker must sort the different scenarios into probable and negligible, and for that (s)he must interpret reality and decide what the credible future paths are.

Then, for Sylvestre Frezal, a good decision process looks like the following:

1. Identify all the possible events. This task can be partially delegated to experts, who will widen the frame of thought and look for clues and base rates, but the eventual decision-maker needs to be included in this step, checking that the framing is not too narrow and that people have the right elements in mind. These outcomes should be grouped into coherent clusters so their implications can be analysed (it would be too costly, in time, resources, and cognitive power, to dig deep into every possible track, however similar they may be).

2. Reject all the too-improbable events. This step is highly subjective and demands taking responsibility. Labelling an event too improbable means accepting to ignore it, which means being comfortable with the risk of its occurrence. (S)he will stop considering these events.

3. Focus on the consequences of the remaining events. Now the decision-maker should concentrate on the probable events, acting in the way that maximizes the outcome of the probable event (s)he believes in the most, while remaining comfortable with the consequences of the other options and/or taking measures to mitigate the outcomes of the other probable events, in which (s)he believes less.
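The three steps above can be sketched schematically. The event list, the subjective probability labels, and the payoffs are all invented for illustration; the point is only the shape of the process.

```python
# A schematic of the three-step process described above.
scenarios = [
    # (name, decision-maker's subjective label, outcome if it happens)
    ("market grows",    "probable",   10),
    ("market stalls",   "probable",   -2),
    ("asteroid strike", "negligible", -1000),
]

# Step 1: identify the possible events (the list above, ideally widened
# by experts and grouped into coherent clusters).

# Step 2: reject the events labelled negligible — a subjective call the
# decision-maker must own, since it means accepting the risk they occur.
considered = [s for s in scenarios if s[1] == "probable"]

# Step 3: decide on the consequences of the remaining events: favour the
# scenario you believe in most, while staying comfortable with (or
# mitigating) the worst outcome you have chosen to keep in view.
worst = min(outcome for _, _, outcome in considered)
print([name for name, _, _ in considered])
print("worst retained outcome:", worst)
```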

Is that what you do? Do you intend to change how you handle statistics? Do you intend to change your decision-making process?

Looking forward to having the discussion!