‘All models are wrong, but some are useful’

George Box & Norman Draper, 1987.

If you go to any safety conference, chances are that at least once in each session you will hear someone talk about the “Swiss Cheese” accident causation model, originally proposed by Professor James Reason. It’s a powerful model, easy to understand, and it has driven significant improvements in safety performance by helping people grasp why we want to keep maintaining multiple, diverse layers of protection. It is an entry point into the complexities of instrumented protective systems, although the standards applied in practice do recognise the limitations of the approach.

In some ways, the general engineering community has picked up on this as a good model to use, but may have lost sight of the significance of the word “model”. In this context, a model can be defined as

“a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions.”

As Box and Draper said in their work on statistical models, the accident causation model is, like many simplified representations, both wrong and useful.

Where it is useful is in allowing focus on individual parts of a safety system and providing simple ways to improve the reliability of individual layers. However, when experts in the aviation industry researched why improving system reliability had not had as big an impact on safety performance as expected, they identified some of the ways in which the model falls down. It can lead people to think that accidents arise from random fluctuations, and that you need to be “unlucky” to have an accident if you have maintained your barrier health. In practice, when many serious accidents are examined, the root causes more often point to a serious degradation in the underlying system, meaning either that the barriers are ineffective or overwhelmed, or that common faults across multiple systems make their combined failure far more likely than initially modelled. Modern safety systems are often extremely complex, making it difficult to assess, or even understand, the many ways in which they could fail dangerously.
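
To make the common-cause point concrete, here is a minimal sketch in Python. The failure probabilities and the beta value are assumptions chosen purely for illustration, not data from any real system, and the beta-factor treatment is only the simplest of several ways to represent shared failures.

```python
# Illustrative only: compare the chance that every barrier fails on demand
# under an independence assumption versus a simple beta-factor common-cause model.
# The failure probabilities and the beta value are assumptions, not real data.

def p_all_fail_independent(pfds):
    """All layers fail together, assuming fully independent failures."""
    result = 1.0
    for pfd in pfds:
        result *= pfd
    return result

def p_all_fail_beta_factor(pfds, beta):
    """Split each layer's failure probability into an independent part and a
    shared (common-cause) part; the shared part defeats every layer at once."""
    independent = 1.0
    for pfd in pfds:
        independent *= (1.0 - beta) * pfd
    common_cause = beta * min(pfds)  # one shared fault takes out all layers
    return independent + common_cause

barrier_pfds = [1e-2, 1e-2, 1e-2]  # assumed probability of failure on demand per layer
beta = 0.05                        # assumed fraction of failures with a shared cause

print(f"Independent layers: {p_all_fail_independent(barrier_pfds):.1e}")
print(f"With common cause:  {p_all_fail_beta_factor(barrier_pfds, beta):.1e}")
# Prints roughly 1.0e-06 versus 5.0e-04: around 500 times more likely.
```

The exact figures matter less than the shape of the result: even a small shared vulnerability can dominate the overall answer, and the independent-layers picture quietly hides it.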

Nancy Leveson, Professor of Aeronautics and Astronautics at MIT, has proposed an alternative accident model based on systems theory called “STAMP” (System-Theoretic Accident Model and Processes). It is based not on barriers but on the related concept of constraints: a constraint may still be enforced by a layer of protection, but it is thought of differently, in that the designer is asked to understand how the barrier constrains a dynamic system into a safe state, and to be explicit about the circumstances under which this might not work. This too is a model and a simplified representation of reality, but by focusing on system degradation, dynamic system response, and the system and organisational influences on design, it provides a powerful way of aligning engineering, ergonomics and behavioural safety to improve design and operation.
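
As a toy illustration of that shift in emphasis, not an implementation of STAMP itself, the sketch below records each safety constraint alongside the control intended to enforce it and the explicit conditions under which that enforcement could be lost. Everything in it, from the class name to the example conditions, is hypothetical.

```python
# Toy illustration only (not STAMP or STPA): write each safety constraint down
# with the control intended to enforce it and the explicit conditions under
# which that enforcement may not hold. All names and conditions are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SafetyConstraint:
    description: str    # what must stay true for the system to be safe
    enforced_by: str    # the control or barrier intended to enforce it
    may_not_hold_when: List[str] = field(default_factory=list)

constraints = [
    SafetyConstraint(
        description="Vessel pressure stays below the design limit",
        enforced_by="High-pressure trip closes the inlet valve",
        may_not_hold_when=[
            "The pressure transmitter reads low because of a blocked impulse line",
            "The trip is bypassed for maintenance and not reinstated",
            "The demand rises faster than the valve can close",
        ],
    ),
]

for c in constraints:
    print(f"Constraint: {c.description}")
    print(f"  Enforced by: {c.enforced_by}")
    for condition in c.may_not_hold_when:
        print(f"  May not hold when: {condition}")
```

The value is not in the code; it is in the question it forces for every constraint: under what circumstances does the thing that enforces it stop working?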

We see much promise in this approach, and we are looking at extending the way it is applied in aircraft and space-system design into other high-hazard areas. In places like petroleum drilling operations, the major risks are controlled by a combination of continual system testing and multiple barriers against unconstrained hydrocarbon flow, but the consequences are extreme when things go wrong. We see it as a powerful way to get people to overcome the optimism bias that comes from having generally been “lucky” enough to avoid hazards, and to really consider how things might go wrong. We’re working with our clients to pilot this approach in upstream oil and gas, as we all want our clients and fellow workers to continue to go home safely. One of our clients sees this as a way towards “pre-emptive accident investigation”, and we’re working hard to make the accidents we investigate only ever happen on paper.

I am a Fellow of the Institution of Chemical Engineers, and I lead the Atkins Process Safety Team in Aberdeen. I am presenting on STAMP at the IChemE Hazards 27 Symposium on the 11th of May.