Fire departments are response models, not production models

Eric Saylors
elitecommandtraining
8 min read · Jan 1, 2017

You are not meant to maximize efficiency; you are meant to be resilient. Efficiency results in fragility; resilience results in survival.

Refinery fire: a rare event that needs a resilient response model

The metaphor

Imagine your family doctor suggests eradicating your white blood cells because you have not been sick for a year. The logic seems reasonable when the doctor compares the production of your white blood cells to that of your red blood cells, which are at 98% capacity when measured by oxygen saturation. Your white blood cells, by contrast, have done minimal work over the last year when measured by time spent fighting disease. The conclusion: your red blood cells are at maximum efficiency, while your white blood cells are a waste of “excess capacity.” Prudence suggests eliminating the white blood cells in favor of more efficient red blood cells.

Fortunately for the survival of all humans, your doctor’s logic is deeply flawed. Red blood cells are obviously not like white blood cells; one is a production model, and the other is a response model. And response models cannot be measured as production models.

However, metrics meant for production models are frequently applied to response models such as the fire service. Consider IBM’s 2012 report on the San Jose Fire Department. The report claims that fires in San Jose have dropped by 45% since 1999, resulting in a “significant level of excess capacity.” It concludes that, due to this excess capacity, the fire department could increase efficiency by cutting staffing and fire stations.

Excess capacity by fire engine, from the 2012 IBM report “City of San Jose Operations Efficiency Diagnostics,” p. 73

This argument is the same as your doctor claiming excess capacity in your white blood cells since you have not contracted a cold recently. But fire departments and white blood cells are response models, and we cannot measure their “capacity” by traditional metrics of production, such as efficiency.

Efficiency

“Technical efficiency”[1] is defined as “performing or functioning in the best possible manner with the least waste of time and effort.”[2] In other words, efficiency maximizes the output of a system with minimal input (maximum productivity = maximum output / minimum input). For example, the “technical efficiency” of an automobile is miles per gallon (MPG): the output of the vehicle in miles traveled over the input of resources in gallons of fuel.

Measuring efficiency

The definition of efficiency immediately poses a problem for the fire service if one considers the equation of output over input. The input of the fire service is easy to quantify: the annual budget. But how is the output of the fire service quantified? Lives saved? Property saved? Catastrophes mitigated? A validated methodology for quantifying the fire service’s output does exist[3]; unfortunately, only the most advanced fire departments use it. Most fire departments focus only on lowering input by cutting their budgets.

Lowering input alone does not increase efficiency

Lowering the input of a system without considering the output can actually decrease efficiency. Consider the automobile example. If I offered to sell you a car that uses 40 gallons of gasoline per month, you might protest that it is not efficient enough. Would you be content if I simply lowered the input so the car uses only 20 gallons of gasoline per month?

I hope not, because I have concealed the most important part of the metric: how many miles will the car travel in a month? Suppose that in the “40 gallons a month” scenario the car travels 400 miles, for an efficiency of 10 MPG, while in the “20 gallons a month” scenario it travels only 100 miles, for an efficiency of 5 MPG. By lowering input without considering output, you have cut efficiency in half.
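
The arithmetic is worth making explicit. Here is a minimal sketch in Python; the function name and the numbers are just the hypothetical car example above, not figures from any report:

```python
# Efficiency is output over input, so both terms must be tracked;
# watching the input alone tells you nothing about efficiency.
def efficiency(output_miles: float, input_gallons: float) -> float:
    """Return miles per gallon: output per unit of input."""
    return output_miles / input_gallons

before = efficiency(output_miles=400, input_gallons=40)  # 10.0 MPG
after = efficiency(output_miles=100, input_gallons=20)   # 5.0 MPG

# The input (gallons) was cut in half, yet efficiency also fell by
# half, because the output (miles) fell even faster.
print(f"before: {before} MPG, after: {after} MPG")
```

The same trap awaits any agency that reports a smaller budget as an “efficiency gain” without measuring what the budget bought.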

While measuring efficiency in the fire service is a challenge, maximizing efficiency breeds something much more sinister: fragility.

Efficiency breeds fragility

Efficient systems are not designed to be resilient or redundant, but rather to be profitable, optimized, and low cost. And systems stretched to the maximum become fragile. In the words of the scientist Ted Lewis, “by eliminating surge capacity that allows a system to deal with any overload, optimized systems have evolved into fragile, error-prone systems. We have reaped the benefits of short-term efficiency but now are suffering from it.”[4] In other words, efficient systems do not handle shocks or unexpected events well. And when shocks come, efficient systems unravel like a thread pulled from a sweater.

Think of a truss roof: an incredibly efficient system of chords, webs, and ties designed to use minimal material for maximum effect. Every component of the truss is optimized to carry as much force as possible. But truss roofs are fragile; remove one element, and the entire roof falls like a house of cards.

The catastrophic collapse of a truss roof during a fire

In contrast, older roofs built with heavy timber contained a lot of extra material, making them just as effective but much less efficient than a truss roof. Heavy-timber roofs, however, are very resilient; remove one member and the force simply shifts to the adjacent members. A heavy-timber roof can withstand a great deal of punishment before failing, and a single failure does not cascade through the entire roof.

The science behind fragile, optimized systems

In 1987 the physicist Per Bak, with Chao Tang and Kurt Wiesenfeld, published the “sand pile” experiment, an illuminating study of the fragility of optimized systems in which they coined the theory of “self-organized criticality,” or SOC[5]. SOC describes a process in which a system, be it a city, a business process, a software program, or a sand pile, grows until it eventually collapses. Under SOC, as the overall system expands, the minor parts of the system shift like grains of sand in a sand pile to accommodate the additional load[6]. The grains continue to shift to the point of maximum capacity and optimized efficiency, reaching a critical state in which any additional stress causes a cascading failure that spreads across the entire system.
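
The sand pile dynamic is easy to see in simulation. Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile model in Python; the grid size and number of drops are illustrative choices, though the rule that a cell topples at four grains comes from the model itself:

```python
import random

# A toy Bak-Tang-Wiesenfeld sandpile. Grains are dropped one at a
# time; any cell holding four or more grains "topples," shedding one
# grain to each of its four neighbors. Grains that roll off the edge
# of the grid are lost.

SIZE = 20      # grid dimension (illustrative choice)
THRESHOLD = 4  # grains at which a cell topples

def drop_grain(grid):
    """Drop one grain at a random cell, relax the pile, and return
    the avalanche size (the total number of topple events)."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    unstable = [(r, c)]
    topples = 0
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue  # already relaxed by an earlier topple
        grid[r][c] -= THRESHOLD
        topples += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1
                unstable.append((nr, nc))
    return topples

grid = [[0] * SIZE for _ in range(SIZE)]
avalanches = [drop_grain(grid) for _ in range(50000)]

# Once the pile reaches the critical state, most drops do nothing,
# while a few identical drops trigger cascades that sweep the grid.
print("drops causing no topple:", avalanches.count(0))
print("largest avalanche:", max(avalanches))
```

Every grain is dropped the same way, yet the resulting avalanches range from nothing to grid-spanning cascades. That unpredictability is the point: a system at criticality gives no warning of how big the next failure will be.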

SOC is now a prevailing theory used to describe the collapses of financial markets, cascading power outages, political upheaval, and the spread of fires[7].

Fire moves from building to building in a connected city

And just like the sand piles in Per Bak’s experiments, cities have self-organized as they have grown, squeezing maximum utility out of every inch of space.

Cities are optimized systems subject to SOC’s massive, cascading collapses due to the spread of fire. We have known this intuitively for centuries; Per Bak just proved it mathematically for us in 1987.

What the Federal Government knows about SOC, resilience, and response models

The federal government has long known that much of the nation’s infrastructure is subject to SOC and cascading collapse due to efficient systems becoming connected and optimized over time.

The Northeast blackout of 2003 affected regions from New York to Ontario. A software bug in the alarm system of a control room in Ohio started a cascading failure of the power grid, leaving roughly fifty-five million people without power for two days[8].

After a long evolution, in 2012 the Department of Homeland Security (DHS) adopted an overall strategy for critical infrastructure protection (CIP) called “resilience-informed decision making,”[9] based on the understanding that much of our modern world is connected, efficient, fragile, and prone to collapse. DHS’s strategy is to stop the chain of destructive events and recover as soon as possible. That rapid recovery rests on a resilient “response capability, such as emergency management services, law enforcement, and firefighting capacity.” DHS expects and relies on the fire service to be resilient[10].


Response models need to be resilient

Efficient, optimized systems fail from unanticipated shocks. Resilient systems survive shocks because they are overbuilt and redundant. A response model must be resilient enough to survive the very shock that overturns the efficient system; when efficient systems fail, they rely on a fast, resilient response model to outlast the collapse and mitigate the damage. The mission of the fire service, therefore, is to be resilient and fast, not necessarily efficient. The quest to maximize efficiency in the fire service removes surge capacity, making the fire service fragile.

Like the white blood cells in a human body, the fire service is a response model, measured by a different standard. The red blood cell is a production model, constantly churning at maximum capacity, transporting oxygen throughout the body. The white blood cell is a response model, waiting for the catastrophic intrusion of a virus. Without white blood cells, exposure to the common cold would be deadly. And no measure of efficiency used to gauge the production of red blood cells can ever apply to white blood cells.

Measuring efficiency in the fire service is challenging, and pursuing maximum efficiency may be counterproductive. Like the heavy-timber roof, the fire service needs to be resilient to survive the cascading failures to which an optimized society is prone.

I am not advocating wasteful spending or abandoning efficiency altogether; I am promoting sustainability. Don’t spend your community’s money on an optimized, fragile system that unravels at the first shock. Invest in a resilient, sustainable system that will be there when your community needs it the most.

Eric Saylors teaches courses on this topic for Elite Command Training and can be followed on Twitter at @saylorssays.

https://medium.com/@esaylors/command-not-control-the-4th-generation-of-firefighting-9952322e1ee1

[1] Coelli, Timothy J., et al. An Introduction to Efficiency and Productivity Analysis. New York: Springer Science+Business Media, 2005, 2–4.

[2] Goh, Gareth. “The Difference Between Effectiveness and Efficiency Explained.” Insight Squared, 2013, 1–3.

[3] Saylors, Eric. “Quantifying a Negative: How Homeland Security Adds Value.” PhD diss., Naval Postgraduate School, Monterey, California, 2015.

[4] Lewis, Theodore Gyle. Bak’s Sand Pile: Strategies for a Catastrophic World. Agile Press, 2011.

[5] Bak, Per, Chao Tang, and Kurt Wiesenfeld. “Self-organized criticality: An explanation of the 1/f noise.” Physical Review Letters 59, no. 4 (1987): 381.

[6] Lewis, Theodore Gyle. Bak’s Sand Pile: Strategies for a Catastrophic World. Agile Press, 2011.

[7] Lewis, Theodore Gyle. Bak’s Sand Pile: Strategies for a Catastrophic World. Agile Press, 2011.

[8] Minkel, J. R. “The 2003 Northeast Blackout — Five Years Later.” Scientific American, August 13, 2008.

[9] DHS. “Incorporating Resilience into Critical Infrastructure Projects.” https://www.dhs.gov/sites/default/files/publications/NIPP-2013-Supplement-Incorporating-Resilience-into-CI-Projects-508.pdf

[10] Lewis, Ted G. Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation. John Wiley & Sons, 2014.


Eric Saylors is a third-generation firefighter, futurist, and instructor with a doctorate and a master’s degree in security studies from the Naval Postgraduate School.