Rethinking Risk. Maybe we’re getting it all wrong…

In one of my previous articles, I introduced the ‘Utopia of Free Thinking’ — a wondrous place where we’re free from the belief that we succeed or fail, win or lose. In the Utopia of Free Thinking we see safety as infinite: the final whistle will never blow, and we’ll never claim victory or defeat. Safety will never be finished, and our mantra and goal is a simple one: to make meaningful progress. The kind of progress that’ll take us to new, never-before-seen levels of greatness and achievement.

When we stride into the Utopia of Free Thinking, we’re free to explore. Free to follow hunches that lead to experiments. Experiments start small; they never introduce danger and they never relax controls, but equally, they never fail. Instead, experiments prove or disprove theories. When theories are disproven…that’s ok…we celebrate and we find another way. When they’re proven…that’s ok too…we scale up and expand the trial.

Yep. I know what you’re thinking, but stick with me and together we’ll find a way to transform safety. Our meandering might just lead us to a next practice that brings meaningful progress in both the short and the long term. And what’s more…if you’ve read this far, pardon me, but I think we already know each other.

If I may:

You’re a fellow explorer. Clad in ludicrous red trousers (hi-vis, of course), we wade together through pools of default thinking, wrestle crocodiles of reactiveness and span ravines of tired clichés. So now let us pull out the magnifying glass and together explore a little deeper. Here we choose not just the path less travelled; instead we hack our way through the dense jungle of change and find a whole new path. **Yay!**

But. Before all that…let me address a potential issue:

When it comes to protecting people from injury, we can’t afford to follow hunches or experiment, because if things go wrong, harm (potentially terrible harm) might occur, right? And this is a real worry. In fact, it’s such a worry that unless we’re careful, it takes away all innovation. It stops us being creative — not recklessly creative, but positively creative. We don’t innovate due to a fear of failure.

A fear of failure.

Put another way — our internal motivation to make the world a safer place is eroded because we’re subjected to an external motivator warning that our actions could lead to a failure to keep someone safe. So we default to the default, and we stand still.

In his book, DRIVE: The Surprising Truth About What Motivates Us, Daniel H. Pink takes us on an exploration of extrinsic and intrinsic motivation. Starting with Harry F. Harlow in 1949, psychological science has shown that for tasks which require more thinking than robotically following steps 1–3, external motivation — whether through reward or through punishment — actually makes results significantly less spectacular.

For tasks which require innovation, creativity and complex thinking (like keeping people safe), the traditional approach of incentive / reward / disciplinary / performance management is counter-productive. In safety, this is exactly what risk does. ‘Risk’ provides an extrinsic motivator that lets the need to avoid failure take over from the goal of finding the forward-looking solutions that drive us to a whole new level.

This is a key reason why safety has plateaued in developed markets. Extrinsic motivation — the carrot and stick — has been used for years to achieve a certain level of predictable performance. But it’s intrinsic motivation — that internal belief in change and the need for it that goes beyond external reward or punishment — which takes us to levels of greatness.

So how do we overcome the fear of failure?

**Spoiler alert… we’re going to hack through that jungle I mentioned earlier.**

Risk is generally broken down into two types — objective and subjective. Objective risk is used by insurers, financial institutions, casinos and the like. It relies on the Law of Large Numbers: over a statistically huge number of exposures, the actual results will match the statistical calculation of the risk.

Think of a coin toss. Each toss has a 50% chance of landing on heads. After only 10 tosses, though, heads may have come up just three times (or equally, it might have come up eight times. Who knows?!?). This is because the sample size is very small. But as the number of tosses increases — to say 1 million — the result will come extremely close to 50%. The greater the number of repeats, the closer the observed outcome gets to the statistical chance.
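If you’d like to watch the Law of Large Numbers in action, here’s a minimal simulation (a sketch in Python, assuming a perfectly fair coin): small samples swing wildly, while large ones settle in next to 50%.

```python
import random

def heads_ratio(tosses: int) -> float:
    """Flip a fair coin `tosses` times and return the share that landed heads."""
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    return heads / tosses

# Small samples swing wildly; large samples settle near 0.5.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9,} tosses -> heads ratio {heads_ratio(n):.4f}")
```

Run it a few times: the 10-toss line jumps all over the place, while the 1-million-toss line barely moves.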

While objective risk is relevant for the statistical probability of something like an electronic guard failing, when we’re trying to assess a risk of harm that involves human interaction (and in safety, everything involves human interaction), there are too many variables and too few exposures to apply the Law of Large Numbers and follow the principles of objective risk. What’s more, where the outcome is injury rather than financial loss, our tolerance for variance has to be lower. Safety risk isn’t like the toss of a coin, which can be calculated through an algorithm — safety risk is subjective.

Assessing subjective risk is like asking a group to agree on their favourite colour — everyone has a different view based on their experience, upbringing, personal taste and cognitive bias. And in traditional risk assessment, the concept of likelihood is especially troublesome.

Everyone struggles with likelihood — precisely because it’s so subjective. At best, likelihood is an indication of probability; at worst, it’s a misleading one. My view of the likelihood of an event may be different to yours because of my life experience. Even if we compromise and agree on a likelihood, the trouble with subjective probability is that no matter how ‘improbable’ something is, events, scenarios and situations can conspire so that the ‘impossible’ happens today. The Fukushima nuclear reactor’s outer wall was breached by a once-in-150-year tsunami wave. Incredibly unlikely — which is interesting — but frankly irrelevant after it struck.
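There’s a quick bit of arithmetic that shows why ‘once in 150 years’ offers so little comfort (a sketch in Python; the 40-year operating life is my own illustrative assumption, and it treats the figure as a true, independent annual probability):

```python
def chance_of_at_least_one(annual_probability: float, years: int) -> float:
    """P(at least one event) = 1 - P(no event in any single year)."""
    return 1 - (1 - annual_probability) ** years

# A "once in 150 years" wave over a 40-year operating life:
print(f"{chance_of_at_least_one(1 / 150, 40):.1%}")  # 23.5% -- hardly 'impossible'
```

An event that’s ‘incredibly unlikely’ in any given year becomes close to a one-in-four bet over the life of the facility.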

I’m going to suggest that there’s another way: we should focus on vulnerability, rather than risk. By considering vulnerability we’re able to understand where our people are exposed to harm, but without having to ‘best guess’ the likelihood. What’s more, we do this while applying high emotional intelligence in the Utopia of Free Thinking.

Risk is cold and removed — perfect for the toss of a coin. Vulnerability is personal and internal — perfect for keeping people safe.

Rather than reducing risk, we should seek to identify where we’re vulnerable and then turn that vulnerability into resilience. Now (and this is super important) — we still apply robust controls — but instead of assessing and trying to reduce likelihood (which is subjective), we focus on 4 areas:

  • How accessible is the hazard?
  • How many individual acts have to be taken to bypass the controls?
  • How obvious is the hazard?
  • What is the worst possible harm?

By focussing on these 4 areas (which we’ll delve into in more detail in future articles), and moving away from the perils of likelihood, we’ll make meaningful progress towards resilient controls that can withstand the pressure of being tested.
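To make that concrete, here’s one possible shape such an assessment could take (a hypothetical sketch in Python; the field names and 1–5 scales are my own illustration, not a finished method). Notice that every input is something you can observe, count or name; nothing asks anyone to guess a probability.

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityAssessment:
    """One hazard, scored against the 4 areas. Scales are illustrative:
    1 = least vulnerable, 5 = most vulnerable."""
    hazard: str
    accessibility: int   # How accessible is the hazard? (1 = locked away, 5 = open)
    acts_to_bypass: int  # How many individual acts to bypass the controls?
    obviousness: int     # How obvious is the hazard? (1 = unmissable, 5 = hidden)
    worst_harm: int      # What is the worst possible harm? (1 = minor, 5 = fatal)

    def priority(self) -> int:
        # Fewer acts needed to bypass = more vulnerable, so invert that scale.
        bypass_score = max(1, 6 - self.acts_to_bypass)
        return self.accessibility + bypass_score + self.obviousness + self.worst_harm

saw = VulnerabilityAssessment("unguarded bench saw", accessibility=5,
                              acts_to_bypass=1, obviousness=2, worst_harm=5)
print(saw.priority())  # 17 of a possible 20 -- this one gets attention first
```

However we end up weighting the 4 areas, the point stands: these inputs don’t drift with anyone’s mood, experience or optimism the way a likelihood estimate does.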

One final issue with risk — if the situation significantly and unexpectedly changes, the likelihood of an occurrence can change dramatically too. Something once deemed ‘unlikely’ can suddenly become ‘likely’, and our original assumptions no longer hold. If we use the 4 factors above, then no matter the stressors or the situation, the level of vulnerability remains consistent, accurate and relevant.

Oh hey — I’ll be conducting more research into this topic over the coming months. If you would like to take part in the research project (no people, environments, egos or animals will be harmed), please drop me a line.

Gordon
www.gordonbedford.com
gordon@gordonbedford.com

(picture credit — www.history.co.uk, “May 26, 1975: Evel Knievel crashes on the landing ramp at Wembley Stadium, London”)
