The Swiss cheese model of success

Pau · Published in theuxblog.com · Jul 29, 2016 · 4 min read


The other night I was rolling around in bed, trying helplessly to fall asleep; the summer heat and the last chapter I had read from The Design of Everyday Things didn’t help. It’s difficult to fall asleep when your brain is forming new ideas; it’s a selfish organ that does not like to be interrupted.

The chapter talks about how we often miss the bigger picture when we classify an error as human.

When major accidents occur, official courts of inquiry are set up to assess the blame. More and more often the blame is attributed to “human error.” The person involved can be fined, punished, or fired. Maybe training procedures are revised. The law rests comfortably. But in my experience, human error usually is a result of poor design: it should be called system error. Humans err continually; it is an intrinsic part of our nature. System design should take this into account. Pinning the blame on the person may be a comfortable way to proceed, but why was the system ever designed so that a single act by a single person could cause calamity? Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else. — The Design of Everyday Things by Don Norman

Blaming those who err is as dangerous as praising those who succeed. When an organization achieves something remarkable, we applaud and flatter its leader. Yet most of the time, just as with errors, it is not the person who made it happen but the conditions and processes around them.

One example at Redbooth is how we achieved 100% availability of our service over the last 9 months. In 2015, our customers complained that the service was not always available, and most of those issues were due to human error. The team didn’t blame the person responsible for each mistake; most of the time it didn’t even pay off to find one! Instead, they started writing retrospectives with detailed action items to prevent the same human error from happening again. It was this team-led process that allowed them to succeed. By recognizing this instead of praising the manager (me), we easily replicated the same process and success in other areas that needed improvement.

Going back to the book, that chapter also explains the Swiss cheese model of errors: by adding layers of defense, one can decrease the chances of accidents happening. A common example is how this model is used in aviation to keep a distraction from turning into an aerial disaster: every flight uses a pre-flight checklist that the pilot and copilot run through before taking off. This extra set of steps decreases the chances of a pilot missing an important safety check. In the Swiss cheese metaphor, the slices of cheese are the layers of defense, while the holes are the potential risks or issues. Ideally, you want slices without holes, but in the real world, everything is prone to failure.
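To get a feel for why stacking imperfect layers works, here is a back-of-the-envelope sketch (my own illustration, not from the book). It assumes each layer independently catches a given error with some probability, which real layers rarely do, but it shows how quickly the odds of an error slipping all the way through shrink as you add slices:

```python
def slip_through_probability(catch_rates):
    """Chance an error penetrates every layer, assuming the layers
    fail independently (a simplifying assumption)."""
    probability = 1.0
    for catch_rate in catch_rates:
        probability *= 1.0 - catch_rate  # only the misses get through
    return probability

# Three imperfect layers (say: code review, automated tests, QA),
# each catching 90% of errors on its own:
print(slip_through_probability([0.9, 0.9, 0.9]))  # ~0.001, i.e. 0.1%
```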

However, adding those layers comes at a cost. In software engineering, for instance, it’s common to cover a product with tests and to have a dedicated quality team. The implication is that product cycles slow down, and more engineers need to be hired and coordinated. The thing is, even with those investments, some errors find their way, layer after layer, into the customer’s hands.

If the human attribution of errors has a success counterpart, does the Swiss cheese model of errors have one too? I think so, but it works the other way around. Instead of risks penetrating, you have opportunities trying to bubble up. In an organization, those “cheese slices” are usually the levels of the management hierarchy, and the “holes” are the managers’ skills and aptitudes. How many times have you felt frustrated at your workplace because a great idea ended up hitting a management wall?

I call this the Swiss cheese model of success, and it’s a great way to assess whether a given hire is going to become a bottleneck in your organization, especially as you start adding middle management. It’s also important for organizations that try to radically change one of their core competencies: it’s not enough to change or add some of the management layers. You must drill holes in the upper layers, or the results will remain the same.

One last takeaway is that keeping hierarchies as flat as possible, while a herculean task, may be the easiest way to ensure those holes let ideas bubble up. To prevent errors, you want as many thick slices of cheese without holes as possible. To promote success, the opposite: the fewer the slices and the bigger the holes, the better.

If you liked this post, don’t forget to recommend it and/or follow me on Twitter 😊
