Normal Accidents

This is part of a larger project to look at the books that are useful for understanding complexity. I do this also as part of the series exploring my own Booktree — my network of how the books I’ve read are all interconnected with each other.

Normal Accidents is #18 of books with the highest eigenvector centrality scores in my network. It has a score of 0.017.

It is linked to:

1. Future Perfect (#2, 0.041)

2. Thinking in Systems (#14, 0.019)

3. Checklist Manifesto (outside)

4. The Black Swan (#15, 0.018)

5. Collapse of Complex Societies (#12, 0.020)

6. Out of Control (outside)

7. Chaos (#5, 0.025)

I encountered Charles Perrow while taking what’s called a Science, Technology and Society (STS) studies module as a sociology major.

In Normal Accidents, Charles Perrow lays out a framework for thinking about the technological complexity we live with today.

For Perrow, there are two things we need to know about the technologies we live with. The first is whether the technology involves transformative processes. Does it transmute states of matter, or are far simpler processes involved? The second is whether the processes involved are tightly or loosely coupled. Are the steps in the processes highly dependent on one another, or are they not?

With these two dimensions in mind, one can begin to plot technologies out. The following diagram comes from Perrow’s book.

I can’t remember what page it is…
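
To make the two axes concrete, here is a minimal sketch in Python. The System class, the quadrant function, and the example placements are my own illustration of the idea, not a reproduction of Perrow’s chart.

```python
# A rough rendering of the two axes. "transformative" stands in for processes
# that transform or transmute states of matter; "tightly_coupled" for steps
# that depend closely on one another. Placements are illustrative, not canonical.
from dataclasses import dataclass


@dataclass
class System:
    name: str
    transformative: bool   # does it transform or transmute states of matter?
    tightly_coupled: bool  # are its steps highly dependent on one another?


def quadrant(system: System) -> str:
    """Place a system in one of the four quadrants of the chart."""
    if system.transformative and system.tightly_coupled:
        return "transformative + tightly coupled (where normal accidents live)"
    if system.transformative:
        return "transformative + loosely coupled"
    if system.tightly_coupled:
        return "simple + tightly coupled"
    return "simple + loosely coupled"


examples = [
    System("nuclear power plant", transformative=True, tightly_coupled=True),
    System("assembly line", transformative=False, tightly_coupled=True),
    System("university", transformative=False, tightly_coupled=False),
]

for s in examples:
    print(f"{s.name}: {quadrant(s)}")
```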

For Perrow, the danger of the nuclear plants designed in his time was that nuclear reactors involved highly transformative processes: elements were being changed, releasing large amounts of energy in the process. Nuclear fission also had to be tightly controlled, relying on all kinds of gauges and pipes that depended on one another.

Two of the three major nuclear disasters we know of today, Chernobyl and Three Mile Island, involved internal processes failing and giving plant operators a false sense of control; coupled with deficient engineering design, these failures gave rise to the accidents. The Fukushima failure was due more to false assumptions, but even then it illustrates Perrow’s point: systems that involve transformative and tightly coupled processes will, at some point, fail catastrophically. This point is breathtakingly simple, obvious, and powerful all at the same time.

Note that this is different from Murphy’s Law. Murphy’s Law only says that anything that can go wrong will go wrong; Perrow’s framework tells you the kinds of domains in which such disasters will happen.

This framework is also quite generative. It tells us that networked fleets of autonomous cars will lead to catastrophes. Imagine millions of cars networked together: a failure that can spread across the network will imperil all of those cars at once, leading to all kinds of catastrophes. This is something regulators and carmakers ought to think about.
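
To see why the coupling matters, here is a toy sketch (my own, not from the book): if every car depends on a single central routing service, one failure in that service reaches the whole fleet at once.

```python
from collections import deque


def affected_cars(depends_on: dict[str, set[str]], failed: str) -> set[str]:
    """Return every node that directly or transitively depends on the failed one."""
    # Invert the dependency edges so we can walk outward from the failure.
    dependents: dict[str, set[str]] = {}
    for node, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(node)

    reached: set[str] = set()
    queue = deque([failed])
    while queue:
        node = queue.popleft()
        for dependent in dependents.get(node, set()):
            if dependent not in reached:
                reached.add(dependent)
                queue.append(dependent)
    return reached


# A tightly coupled fleet: every car depends on one central routing service.
tight_fleet = {f"car_{i}": {"central_router"} for i in range(1000)}
print(len(affected_cars(tight_fleet, "central_router")))  # 1000, the whole fleet
```

Decoupling is about shrinking that blast radius.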

And with this we come to the strength of Perrow’s framework: it can be prescriptive too. It tells us that we should decouple some processes to make catastrophes less likely, or reduce the level of transformation. Make cars rely more on local networks, or create decentralised autonomy. Make every car its own autonomous platform, decoupling it from a central network. Or have different levels of autonomy so human drivers can re-enter the driving loop.
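
As a sketch of what that decoupling might look like (again my own illustration, with hypothetical names such as plan_route), each car prefers the shared planner but degrades gracefully to local autonomy, and finally to a human driver, when things fail:

```python
from typing import Callable, Optional


def plan_route(central: Optional[Callable[[], str]],
               local: Callable[[], str],
               handover_to_human: Callable[[], str]) -> str:
    """Prefer the central planner, but degrade gracefully when it fails."""
    if central is not None:
        try:
            return central()
        except ConnectionError:
            pass  # the central service is down; fall back instead of halting
    try:
        return local()
    except RuntimeError:
        # Local autonomy cannot cope either: bring the human back into the loop.
        return handover_to_human()


# Example: the central service is unreachable, so the local planner takes over.
def broken_central() -> str:
    raise ConnectionError("fleet network unreachable")


print(plan_route(broken_central, lambda: "local route", lambda: "human drives"))
```

The point is not the particular code, but the design choice it encodes: a failure in the shared service no longer takes every car down with it.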

Charles Perrow’s Normal Accidents is a MUST-READ for everyone trying to understand how technological systems can go really wrong.

If you were a Patreon supporter of this project, you would have seen this first on Friday night! To get these posts fresh, do consider contributing to the Patreon here: