
For Want of a Nail: Learning from High Reliability Organizations

For Want of a Nail

For want of a nail the shoe was lost.

For want of a shoe the horse was lost.

For want of a horse the rider was lost.

For want of a rider the message was lost.

For want of a message the battle was lost.

For want of a battle the kingdom was lost.

And all for the want of a horseshoe nail.

The old parable about the kingdom lost because of a thrown horseshoe reminds us that seemingly unimportant acts of omission can have disastrous and unintended consequences. The story also has its parallel in many normal accidents: the initiating event is often, taken by itself, seemingly quite trivial. Because of a system’s complexity and tight coupling, however, initial trivial events can cascade out of control to create a catastrophic outcome.

Who can you depend on? How do you know you are safe when you travel or go into the hospital for surgery? When you board an airplane, what gives you any confidence that you’ll land safely? If you live near a nuclear plant, how confident are you that there won’t be an accident with life-threatening consequences? In a world full of uncertainty and complexity, who or what has your back? The answer to these questions may come from the work and principles of High Reliability Organizations, or HROs.

In 1984, Charles Perrow, a Yale sociologist, began looking at “high-risk” organizations. He wrote a book called “Normal Accidents: Living with High-Risk Technologies.” At that time, “high risk” was a term that encompassed risks “for the operators, passengers, innocent bystanders, and for future generations.” He applied it to “enterprises [that] have catastrophic potential, the ability to take the lives of hundreds of people in one blow, or to shorten or cripple the lives of thousands or millions more.”

Two prominent organizational approaches to safety, Normal Accident Theory and High Reliability Organization theory, have focused attention on a variety of industries that deal with hazardous situations, developed concepts to explain organizational structure and culture, and debated whether accidents are inevitable in complex systems. Understanding the social psychology of people in groups and how they interact is therefore important to the success of High Reliability Organizations, or HROs.

Based on previous research by Charles Perrow (1984), Weick and Roberts (1993), and other work by Dr. Karlene Roberts in the 1990s, HROs can be defined as organizations that have succeeded in avoiding “normal accidents” in settings where accidents were expected because of risk factors and intense complexity.

Examples of High Reliability Organizations

HROs operate and manage processes with the high potential to adversely affect human life or the environment.

Examples include aircraft carriers and flight decks, nuclear power stations, airport security, healthcare systems, air travel, inmate transportation, hostage negotiation teams and wildland firefighting crews, just to name a few.

The Sociology of Risk and Accidents

As a sociologist, Charles Perrow contributed a sociological perspective to organizational analysis and organizational behavior studies, fields that had been heavily influenced by psychology. He studied decision making in centralized versus decentralized organizations and presented a sociological view of the human-machine interface, particularly for decision making under varying abilities and demands. In 1981, Perrow wrote an article called “Normal Accident at Three Mile Island.” In 1984, in “Normal Accidents,” he described how people interact in complex systems to create a whole, unitary system. It is the interactions among people in complex, tightly coupled environments that create the susceptibility to accidents, making accidents inevitable.

What are Normal Accidents?

The Normal Accident has four characteristics:

1. Signals only noticed in retrospect;

2. Multiple design and equipment failures;

3. Some type of operator error that is not considered error until the accident is understood;

4. “Negative synergy,” where the combined consequences of equipment, design, and operator errors are far greater than the consequences of each one singly.

In 1984, Perrow investigated “normal accidents.” He concluded that while all organizations would eventually have accidents because of their complexity and interdependence, some organizations were remarkably adept at avoiding them.

The Question

The question that Roberts sought to answer in her stream of research is why some organizations have far fewer failures than others.

Two Essential Characteristics of HROs

From the above question grew the definition and characteristics of HROs. At this point in its development, the research has identified some key characteristics of HROs. More specifically, HROs actively seek to know what they don’t know, design systems to make available all knowledge that relates to a problem to everyone in the organization, learn in a quick and efficient manner, aggressively avoid organizational hubris, train organizational staff to recognize and respond to system abnormalities, empower staff to act, and design redundant systems to catch problems early.

High-reliability organizations, or HROs, share two essential characteristics:

1. They constantly confront the unexpected; and

2. They operate with remarkable consistency and effectiveness.

Additionally, HROs are successful because they focus on mindfulness as a key characteristic of the model. Weick and Sutcliffe, in their 2007 text “Managing the Unexpected: Resilient Performance in the Age of Uncertainty,” define mindfulness in a high reliability organization as a culture built on a “rich awareness of discriminatory details.” With mindfulness at the heart of HROs, the focus is on preventing disruptive unexpected events while, at the same time, preventing unwanted outcomes after unexpected events have occurred.

Before I go into the five principles of High Reliability Organizations, the following videos may be helpful in understanding the principles in context.

John Nance on High Reliability Organizational Principles in Aviation

High Reliability Organization Principles in Healthcare

The Framework of Mindfulness: High Reliability Organizations in Healthcare

Five Principles of High Reliability Organizations


There are five principles of High Reliability Organizations, identified by Weick and Sutcliffe in their book “Managing the Unexpected: Resilient Performance in the Age of Uncertainty.” The Texas Tech College of Engineering Center of Excellence for High Reliability Organizations and Processes (CEHROP) describes the five principles below:

Preoccupied with failure

First, HROs are preoccupied with failure. Don’t be tricked by your success. In HROs, failures are embraced, even weak signals of failure, in order to stop further damage from occurring, to learn why the failure happened, and to know how to prevent it from happening again. HRO strategies spell out mistakes that are unlikely but possible because of the human element in these organizations. HROs look relentlessly for symptoms of malfunction, since these may be clues to additional failures elsewhere in the system. They are incredibly sensitive to their own lapses and errors, which serve as windows into their system’s vulnerability. They pick up on small deviations. And they react early and quickly to anything that doesn’t fit with their expectations. They are suspicious of quiet periods and alert to the liabilities of success, such as overconfidence.

Keith Hammonds of Fast Company provides a great example: “Navy aviators often talk about ‘leemers,’ a gut feeling that something isn’t right. A pilot feels puzzled, agitated, or anxious. Even though she doesn’t know exactly what’s wrong, she knows that she needs to abort the mission. Typically, those leemers turn out to be good intuitions: Something, in fact, is wrong. HROs create climates where people feel safe trusting their leemers. They question assumptions and report problems. They quickly review unexpected events, no matter how inconsequential. They encourage members to be wary of success, suspicious of quiet periods, and concerned about stability and lack of variety, both of which can lead to carelessness and errors.”

Reluctant to simplify

Second, HROs are reluctant to simplify. Although categories are unavoidable, they are carried lightly. HROs simplify slowly, reluctantly, and mindfully. They create more complex pictures of situations while encouraging boundary spanning, negotiation, skepticism, and differences of opinion. Because of this reluctance, details are preserved and the need for simplification is reduced.

Sensitive to operations

Third, HROs are responsive to the messy reality inside most systems. That is, they look at what the organization is actually doing, regardless of what it was supposed to do according to intentions, designs, and plans. They pay attention to those on the front line and acknowledge that an accident is often not the result of a single active error. Rather, HROs see that accidents are caused by errors lying latent in the system.

Commitment to resilience

Fourth, with a commitment to resilience, HROs are able to identify, control, and recover from errors. They correct errors before they worsen and cause more serious harm, so the system continues to operate despite failures. HROs practice worst-case scenarios and learn from failures. They know they have not experienced every possible failure, so they must remain continually wary. In 1949, the Mann Gulch fire killed 13 smokejumpers. The foreman, Wagner Dodge, improvised an escape fire on the spot by burning the brush around him and his team, creating an area the larger fire could not burn. He was resilient and kept a calm head. When a high-risk event happens, it is easy to revert to simple answers as your stress level rises; you may develop tunnel vision and miss critical cues. HROs are reluctant to accept simple answers and instead embrace complexity, the unknowable, and the unpredictable.

Deference to expertise

Fifth, in HROs expertise is not necessarily matched with the chain of command. In fact, some decisions are made on the front line. Listen to your experts, the people on the front line. HROs make an effort to learn what their frontline people know and encourage communication of expertise from all levels. “In a macho world, asking for help or admitting that you’re in over your head is frowned upon. Good HROs see it as a sign of strength to know when you’ve reached the limits of your knowledge and know enough to ask for help.” Your front line knows the work best and has a fuller picture of the strengths and weaknesses of an organization, often fuller than the established hierarchy’s.


Incorporating the principles of high reliability organizations into the homeland security ecosystem empowers everyone in that ecosystem to be accountable and resilient. The homeland security ecosystem should be reluctant to rely on simple answers and should embrace complexity, the unknowable, and the unpredictable. Of all the security environments, homeland security has a “high potential to adversely affect human life or the environment.” It has all the makings of a fertile environment for implementing the five principles of HROs.

Angi English has a Master’s in Security Studies from the Naval Postgraduate School’s Center for Homeland Defense and Security and a Master’s in Educational Psychology from Baylor University. She is a certified Remote Unmanned Aerial Systems Pilot. She is also a Licensed Professional Counselor and Licensed Marriage and Family Therapist in Texas. She lives in Austin.