Risk Management — Keeping Up Appearances

Christian Schitton · Analytics Vidhya · Sep 30, 2021 · 7 min read

The Myth of a 3-Sigma Event

Before we start, Mrs Hyacinth Bucket (pronounced ‘Bouquet’, not ‘Bucket’) of the TV series of the same name may forgive me for using the same title for this article.

Nevertheless, risk management frameworks may lure you into the false impression that everything is under control by keeping up the appearance of well-tuned risk coverage.

Risks are identified, thoroughly analysed, evaluated and finally incorporated into the workflow of the management information system in order to ensure ongoing monitoring and timely reporting of risk exposures to the relevant decision makers.

And then — the unthinkable happens. A risk event is triggered that was so far outside any probable scope that nobody wasted any time or effort setting up appropriate defence measures.

If a company is lucky, this highly unlikely event does not cause too much harm. But when it does, the infamous Black Swan has arrived, with the capability to financially destroy an organisation.

In any case, when facing such a situation, the anthem of the “3-sigma event” emerges. After every severe crisis (e.g. the stock market crash of 1987 or the big financial crisis of 2007–09) there is an ongoing discussion about how incredibly implausible such a development was.

As an example, let me quote an article found on the Nasdaq homepage:

“According to general statistical principles, a 4-sigma event is to be expected about every 31,560 days, or about 1 trading day in 126 years. And a 5-sigma event is to be expected every 3,483,046 days, or about 1 day every 13,932 years.”
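Those frequencies follow directly from the tail probabilities of the standard normal distribution. Here is a minimal sketch in Python (scipy), assuming one-sided tails and roughly 250 trading days per year, which is what the quoted figures imply:

```python
from scipy.stats import norm

TRADING_DAYS_PER_YEAR = 250  # assumption implied by the quoted figures

for k in (3, 4, 5):
    tail_prob = norm.sf(k)          # one-sided tail probability P(Z > k)
    days_between = 1 / tail_prob    # expected trading days between events
    years_between = days_between / TRADING_DAYS_PER_YEAR
    print(f"{k}-sigma event: about 1 day in {days_between:,.0f} days "
          f"(~{years_between:,.0f} years)")
```

This reproduces the quoted orders of magnitude: a 4-sigma day roughly once in 31,600 trading days and a 5-sigma day roughly once in 3.5 million trading days.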

Odds of 1 day in every 13,932 years — so why even bother?

Because those 3-sigma, 4-sigma, 5-sigma or even higher-sigma events are happening quite regularly!

But why?

Be Aware of the Risk Assumptions

Risk evaluation is done with a certain set of assumptions about how the data at hand might behave.

The problem is that in most cases the available data are just a snapshot, a current manifestation, of what is really going on behind the scenes. And exactly this snapshot (i.e. the sample) might not accurately reveal the behaviour of the overall population.

How to deal with this kind of problem is shown here.

Another major source of surprise comes from the fact that a lot of risk management models act on the assumption that the data are normally distributed (the normal distribution, also called Gaussian).

What does this mean?

The majority of normally distributed observations assemble around an expected value (the mean of the distribution). And the farther one gets from this expected value, the more unlikely it becomes that an observation appears there (the distance from the expected value to an observation is expressed in standard deviations).

The issue is that this probability decreases heavily over rather short distances. See the following graph:

image by author

Here, the expected value amounts to 0. Within a distance of 1 standard deviation (see the dashed lines) from the expected value, around 68 % of all observations occur. A distance of 2 standard deviations already covers around 95 % of all observations.

Therefore, the probability of a 3-sigma event (or higher) under these “normally distributed” assumptions is less than 0.3 % — highly improbable, one would say.
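These coverage figures can be checked directly against the cumulative distribution function of the standard normal. A quick verification in Python (scipy):

```python
from scipy.stats import norm

within_1_sd = norm.cdf(1) - norm.cdf(-1)   # ~68.3 %
within_2_sd = norm.cdf(2) - norm.cdf(-2)   # ~95.4 %
beyond_3_sd = 2 * norm.sf(3)               # two-sided 3-sigma tail, ~0.27 %

print(f"within 1 standard deviation : {within_1_sd:.1%}")
print(f"within 2 standard deviations: {within_2_sd:.1%}")
print(f"beyond 3 standard deviations: {beyond_3_sd:.2%}")
```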

And this is exactly the problem. In a lot of circumstances, real life data do not follow a normal distribution.

An Example: Share Prices

Risk models which run on normal distribution assumptions may neglect the tail area of the data. Yet this is exactly the territory where outliers and extreme values are hiding.

And this is why some really heavy events are expected as rarely as “1 trading day in 126 years” but happen much more frequently.

In order to illustrate this, let’s have a look at (real estate) share prices.

Recently I read on LinkedIn that in general share prices show quite low volatility and daily price differences often oscillate around zero.

I cannot speak for all asset classes, but a sample of real estate shares traded in Vienna and Frankfurt confirms this picture. Here is an excerpt of daily share price differences of several publicly traded real estate companies:

image by author

As a consequence, trying to approximate the daily share price differences of one of those companies (grey area) with a normal distribution (red line) does not work out too well. Take S Immo AG as an example:

image by author

What we see in the upper graph is that the normal distribution puts too little weight even on the centre, i.e. the region where daily price differences are close to zero.

Empirically, in 60 % of the cases the daily price differences lie within +/- 1 %. The normal approximation, on the other hand, allows for only a 37 % chance of daily price differences within this range, which is significantly lower.

The lower graph shows the tail behaviour of the empirical data compared to the normally distributed approximation.

In this case, the normal distribution “fades out” too early and neglects extreme events, i.e. price differences of more than 10 % in either direction. Empirically, daily price jumps of more than 10 % have a probability of 0.5 %. In contrast, the normal distribution would give us a chance of 0.0001 %.
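This comparison can be reproduced for any daily price series. A sketch in Python (pandas/scipy); the file name and column are placeholders, not the actual data behind the charts above:

```python
import pandas as pd
from scipy.stats import norm

# hypothetical input: a CSV with daily closing prices in a column "close"
prices = pd.read_csv("daily_prices.csv", parse_dates=True, index_col=0)["close"]
returns = prices.pct_change().dropna() * 100        # daily price differences in %

mu, sigma = returns.mean(), returns.std()

# chance of a daily move within +/- 1 %
emp_within_1 = (returns.abs() <= 1).mean()
norm_within_1 = norm.cdf(1, mu, sigma) - norm.cdf(-1, mu, sigma)

# chance of a daily move of more than 10 % in either direction
emp_beyond_10 = (returns.abs() > 10).mean()
norm_beyond_10 = norm.sf(10, mu, sigma) + norm.cdf(-10, mu, sigma)

print(f"within +/- 1 %:  empirical {emp_within_1:.1%} vs normal {norm_within_1:.1%}")
print(f"beyond +/- 10 %: empirical {emp_beyond_10:.3%} vs normal {norm_beyond_10:.5%}")
```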

Does not sound so spectacular, does it?

Staying with the “sigma event” terminology, though, our risk model based on a normal distribution would have underestimated the occurrence of such a 5-sigma event by a factor of 5,000.

Five thousand times! This is where the “once-in-a-lifetime“ surprises come from and why they occur much more frequently than expected.

Whether an organisation starts to react to this 0.5 % chance is another story and will mainly depend on the impact the potential event would have on this very organisation.

Another Perspective

Let’s suppose we incorporated a risk model which assumes a normal distribution of the relevant data set.

If we randomly generated data with the model (e.g. 1,000 trials with 1,000 observations per trial), we would get the following results:

image by author

The bulk of the randomly generated observations (the grey data points) lie within 2 sigmas of the expected value (red line). Remember, in a normal distribution around 95 % of the data lie within a distance of 2 standard deviations from the mean.

Everything seems stable, and 3-sigma events (or higher) are an absolute rarity.
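A minimal sketch of such a simulation in Python (numpy); the seed and sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)
trials, n = 1_000, 1_000

# model world: standard normally distributed observations
data = rng.standard_normal((trials, n))

# share of observations beyond 3 sigmas from the mean
print(f"beyond 3 sigmas: {(np.abs(data) > 3).mean():.2%}")      # ~0.27 %

# stability of the estimated mean across trials
trial_means = data.mean(axis=1)
print(f"spread of the trial means: {trial_means.std():.4f}")     # ~0.03
```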

But what if your real-world risk environment behaves as follows:

image by author

The data points (grey area) are spread much more widely and there is much more “life” in the tails (i.e. |sigma| = 3 or more). As a consequence, the average value is more unstable than in the normally distributed case.

If you try to cover this real-world environment with a risk model in which a 3-sigma event is already extremely rare, you might face some unpleasant surprises.
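The article does not state which distribution drives this chart; purely for illustration, here is how an assumed heavier-tailed Student-t distribution (3 degrees of freedom) compares under the same simulation setup:

```python
import numpy as np

rng = np.random.default_rng(42)
trials, n = 1_000, 1_000

# assumed heavy-tailed alternative: Student-t with 3 degrees of freedom
data_t = rng.standard_t(df=3, size=(trials, n))

# exceedances measured against the normal model's yardstick of sigma = 1
print(f"beyond |3|: {(np.abs(data_t) > 3).mean():.2%}")        # several percent, an order of magnitude above the normal case

trial_means = data_t.mean(axis=1)
print(f"spread of the trial means: {trial_means.std():.4f}")   # noticeably wider than in the normal case
```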

Finally, let’s have a look at a really unstable risk exposure scenario:

image by author

Don’t get confused by the bulky grey area in the graph. The majority of observations assemble within a distance of 10 to 15 sigmas (!) from the average value (red line). Given this, the average value is pretty unstable and the frequency of shock events is quite high.

In this scenario, the data follow a Cauchy distribution, an extremely fat-tailed distribution whose mean and variance are not even defined.
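The instability of the sample mean under a Cauchy distribution can be seen directly in a small simulation (numpy; the seed and sizes are again arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
trials, n = 1_000, 1_000

# Cauchy-distributed observations: mean and variance do not exist,
# so the sample average never settles down
data_c = rng.standard_cauchy((trials, n))

trial_means = data_c.mean(axis=1)
print(f"range of the trial means: {trial_means.min():.1f} to {trial_means.max():.1f}")

# share of observations more than 10 units away from the centre
print(f"beyond |10|: {(np.abs(data_c) > 10).mean():.2%}")   # ~6 %
```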

Now try to cover this environment with a risk model built on normally distributed assumptions…

Conclusion

As Nassim Nicholas Taleb, the grand seigneur of risk and randomness, put it:

When someone tells you it was a 10 sigma event, meaning it is 10 standard deviations and it is Gaussian; unless the information came from God, you can reject the Gaussian distribution for that domain.

In other words, making the wrong assumptions about the observed data and approximating them with an inappropriate probability distribution (especially the normal distribution) can lead to completely false conclusions when evaluating a company’s risk exposures.

As a decision maker, you have to be aware of the assumptions upon which the management information system derives its risk conclusions.

Otherwise, you could be lured into the false impression of a well-covered risk exposure while a potential disaster waits around the corner — entirely unnoticed.

Well-tuned risk modelling, above all tightly organised stochastic modelling, is key for a company to cover its risk exposures in an appropriate way.

References

Kevin Cook, “How Many Sigmas Was the Flash Correction Plunge?”, nasdaq.com, March 4, 2020

Nassim Nicholas Taleb, Statistical Consequences of Fat Tails, 2020

Nassim Nicholas Taleb, short lecture on 10-sigma events: https://www.youtube.com/watch?v=k_lYeNuBTE8&ab_channel=NNTaleb%27sProbabilityMoocs
