Networked Infrastructure: The Cinderella of Resilience?
Is networked infrastructure the Cinderella of resilience, to borrow an expression from Stephen Graham and Simon Marvin, who see these networks as the forgotten background of urban studies?
It may seem like an absurd question, considering the critical and vital role played by these different types of networks (transport, electricity, water, gas, ICT, hydrocarbons, sanitation, etc.) in ensuring the proper functioning of cities and efficient crisis management. For example, everyone knows that we need intact roads to allow emergency vehicles to reach those in need or help people evacuate during natural disasters. We are also well aware that electric grids power every other network in place to serve populations. For this reason, one might logically assume that networks would play a crucial role in any resilience strategy that endeavors to make a system robust enough to absorb shocks and continue operating even in a weakened state.
But think again. The third installment in our series on resilience, published by La Fabrique de la Cité, shows that the connection between networks and resilience is not always so clear, primarily because it runs up against three main obstacles: actual vulnerability, which tends to increase over time; an array of existing networks that must be renovated if they are to become more resilient (as opposed to new infrastructure, which can be built according to more resilient principles); and finally, the fact that these networks were designed and built to expand and accommodate growing volumes, which does not always align with the needs of today’s world. This situation has set the stage for a new kind of crisis: one that is invisible, deleterious and spans longer periods of time.
Robust but vulnerable?
When Hurricane Irma recently devastated the islands of Saint-Martin and Saint-Barthélemy, it also underscored the importance of reestablishing an efficient telecommunications network in the immediate wake of a disaster. The French association HAND (Hackers Against Natural Disasters), founded by Gaël Musquet, former chairman of OpenStreetMap France, took the initiative to send the IT equipment needed to restore communication on the islands, enabling locals to share information about their situation on the ground and helping emergency services coordinate their operations.
France’s internal security code also reflects this need to ensure service continuity for major infrastructure. Decree no. 2007–1400 of 28 September 2007 requires network operators to “maintain priority resources in a satisfactory state”. This obligation contains preventive measures that limit the vulnerability of networks to risk, measures that serve to ensure a minimum level of service and restore normal operation in the event of a crisis, as well as complementary measures defining corrective actions to remedy deficiencies observed during the crisis. This legislation ensures that networks benefit from tight security and maintain their reputation as particularly robust in the face of stress.
Despite this, several natural disasters have recently laid bare the vulnerability of these networks to risk and the subsequent impact on cities and businesses. This is especially evident in some of the more spectacular and dramatic natural disasters of late, whose massive scale has placed severe stress on all security and safety systems, such as the 2004 Indian Ocean tsunami. Many communities still have not managed to fully restore their infrastructure after this event.
A similar situation occurred with Hurricane Sandy in October 2012. The storm left 8.5 million people without electricity, including many in the chic neighborhoods of Lower Manhattan, and caused 65 billion dollars in damage. Sandy brought many to the cruel realization that, since September 11, 2001, most efforts to secure infrastructure have focused primarily on human-caused risks rather than climate risks, and that the United States has typically prioritized crisis management over investing in prevention. Examples of this include the total absence of federal regulations on building codes, and the fact that urban development policy — one of the main factors in a region’s vulnerability — is left up to the individual states. In the case of Sandy, there was no plan in place to stormproof the area before the hurricane. But while the upstream side of things did little to boost the region’s resilience, the downstream phase — or crisis management — picked up the slack through the widely applauded efficiency of emergency and recovery efforts. These actions showed that the country had learned and implemented several lessons from the poor response to Hurricane Katrina. Information concerning the energy resources available and the recovery efforts in place was made widely available, while more than 60,000 electric grid specialists from all across the country came in to help restore power.
In January 2009, Cyclone Klaus became one of the most destructive storms to hit mainland France, offering another particularly interesting case study for urban resilience. Klaus made landfall in southwest France with sustained winds over 170 km/h. Memories of Cyclones Lothar and Martin were still fresh in many minds. Though they hit in December 1999, few had forgotten the sheer destructive force of these storms, and the notable failure of the national alert system, which made their toll even worse. France worked hard to improve its warning system in the intervening time: when Klaus came along in 2009, more than a dozen departments were placed on red alert, for the first time since the system was created in 2000, allowing emergency services to gear up for action in advance. And yet, even with such an efficient system in place, 1.7 million people still lost electricity, some for a period of six days; several towns were left with no fixed or mobile telephone service; and rail traffic was disrupted across 3,000 kilometers of track. To make matters even worse, the electricity outage caused water pumps to shut off, leaving 140,000 people and two hospitals without clean water for several days, while also shuttering several gas stations, making it difficult or impossible to power the backup generators needed to compensate for grid outages. On top of it all, a downed power line sparked a forest fire that reduced 1,000 hectares of woods to ash. All told, the storm caused 5 billion euros in damages, including 3 billion euros from the forest fire and 1.7 billion euros of insured losses among residents. Cyclone Klaus delivered three lessons: first, it showed the cascading impact of network failure. Second, that when it comes to networks, the notions of vulnerability and resilience apply to both the physical infrastructure and the associated service: when infrastructure fails, so does the service it helps to provide.
For that reason, the concept of an acceptable failure rate should play a key role in any talk of resilience. Finally, it showed how difficult it can be to coordinate all the different parties involved in managing a crisis impacting networks, as they each have their own mindset. In the case of Cyclone Klaus, the mindset of officials and prefects ran counter to the mindset of network operators, with the former hoping to restore service in priority areas first and the latter shooting for quantitative targets (restoring network service for the largest number of consumers).
Recent events have brought the vulnerability of networked infrastructure to light, but this vulnerability is only expected to increase:
- With climate change, first of all, due to the growing number of extreme climate events (hurricanes, storms, floods, fires, drought, etc.), as well as less catastrophic weather conditions that can still place heavy stress on networked infrastructure and service continuity (heat waves, snow accumulations, freezing rain, etc.). The snowfalls that hit the Île-de-France region in January 2018 delivered ample proof of this fact: they paralyzed the Paris bus system, caused massive traffic jams and left motorists stranded overnight on snowy highways. The winter storm revealed the vulnerability of the road network, which was designed to operate under “normal” conditions. Compounding matters, public perception made this vulnerability appear even greater, since the public has a low acceptability threshold for this type of phenomenon. Since the public does not view this type of incident as a “shock”, the fact that it can disrupt a service typically provided by the local infrastructure seems unimaginable…
- With the growing dependency of cities and companies on electricity and ICT, which, on one hand, makes networked infrastructure even more crucial to the proper operation of the urban system, even during times of intense stress, and, on the other hand, further reduces the acceptable rate of failure for these networks.
- With shrinking budgets and investments in preventive measures and post-crisis rebuilding efforts, which amounts to favoring the unknown costs of likely events over the known costs of targeted investments.
Strengthening infrastructure and improving recovery capacity: the winning hand for network resilience?
In this context, how can we make networked infrastructure more resilient? We can point to two major strategies that respond to the following two questions: how can we make networks more robust in the face of a specific risk? And, if networks are still impacted by a disaster in spite of these efforts, how can we promote the most rapid recovery and return to normalcy?
The first strategy, which aims to strengthen infrastructure, consists in taking preventive measures. This may seem like a relatively obvious step: to limit the effects of risk, you have to protect yourself. Yet this ostensibly simple assertion raises several important questions:
- What risks should we protect against? Our knowledge of risks remains incomplete and geographically uneven. Answering this question means expanding that knowledge, from mapping and modeling risks to collecting feedback from previous crises. This knowledge will serve as the basis for adapting networks to make them even more resilient, rather than simply maintaining their current state. It is also necessary to anticipate risks that are currently unknown but will impact cities in the future.
- At what cost? The effort to adapt existing infrastructure — and maintain it in proper operating condition — to protect against probable but potentially unknown risks faces a major hurdle: it requires massive investments that may not be the top priority for officials or companies. There are two reasons for this: first of all, despite the clear increase in risk (climate change, cyberattack, etc.), as long as people are not directly impacted by these risks, they hold out hope that they will see little to no consequences from them. Furthermore, the costs of network failure remain poorly understood in many sectors; when these costs are known, they typically only include the costs faced by operators, and not costs incurred by other parties (consumers, residents, other operators, municipalities, etc.), nor the environmental costs, which are not borne by operators in any direct way. In other words, the cost of inaction is unknown, while the cost of action is known. These cost considerations play a direct role in decisions pertaining to the scale of preventive actions.
- What type of risk management? In this regard, we can distinguish three types of risk: “local” risks (high frequency, low intensity, low impact); “intermediate” risks (moderate frequency and intensity, causing disruption that requires coordinated action across the network); and “major” risks (very low frequency, very high impact with cascading effects requiring resources from the operator and the municipality). Does the entire network need greater resilience, or only a specific portion of it? What associated risks (operational, financial, social, image, etc.) are deemed acceptable based on the selected level of risk management? Is there any consensus on the acceptable level of associated risks?
- What parties play a role in protection? Several recent incidents have shown how the failure of a single network can set off a chain reaction across other networks and urban systems. This leads to a double challenge: understanding the connections between different networks and their position in relation to one another; as well as the coordinated management of efforts between different network operators and between operators and municipalities, which is the only viable avenue for responding to the systemic nature of these crises.
A clear demonstration of the need to manage crises collectively came with the electric outages of November 4, 2006. On that day, a high-voltage line crossing the Ems river in Germany was switched off to allow a cruise ship to pass safely beneath it, which subsequently overloaded the German network in the region. In a matter of seconds, an automated safety system kicked in and triggered selective power cuts designed to keep the entire network from overloading and causing a total blackout across the continent. Ten million European households lost power for one hour. Morocco also experienced a blackout, and had to turn to its North African neighbors for aid, which led to selective power cuts across the Tunisian network. In this case, it was the interconnection and solidarity of European networks that laid the groundwork for their resilience — just as this same Europe-wide interconnection also propagated the effects of the crisis.
The second strategy for enhancing network resilience aims to restore them to an operational state as rapidly as possible following a crisis.
Given their strategic nature, major networks should be covered by a specific process to enact in case of disruption, with a clear order of priority: getting people to safety, securing access and accessibility (notably to allow for the passage of emergency vehicles), restoring networks to maximize service continuity (even in a weakened mode) and providing replacements to meet the most urgent needs (generators, bottled water, blankets, etc.).
How does the resilience paradigm change this scenario? Resilience shifts the priority towards recovery by focusing on the long term. That leads to two major consequences: (1) emergency management and the return to equilibrium become just another step in the process — a major step, to be sure — but the ultimate goal is to transform the entire system so it becomes less vulnerable; (2) emergency management is also transformed into a process designed by all parties involved in these efforts.
(1) Crisis as opportunity? When urgency becomes a risk…
What resilience highlights is the tension, if not the antagonism, that can arise between the two different timeframes involved in managing crises that impact major networks: the urgency of restoring service on one hand, the gradual return to equilibrium on the other. Might this tension spring from the ambiguity surrounding the notion of “return to equilibrium”? What type of equilibrium do we mean: the equilibrium that existed before the crisis, or something else? In other words, resilience urges us to investigate the connection between a system’s operation and the associated risk, instead of simply determining which technical improvements may reduce risk or exposure to risk. Herein lies the crux of this approach: it shifts the focus from networks to the region, from sectors to the system as a whole. Because the real challenge is to answer one question: how can we improve the resilience of the entire region?
Not only is responding to this challenge a complex matter, it can also become a source of conflict. Complex, because it means relaying information between a variety of operators and a central entity that can consolidate these lessons and feedback for two main purposes: to grasp the complex web of interdependency and the resulting domino effects, and to develop coordinated action. Conflictual, because the measures taken to respond to emergencies may not align with the goals of long-term resilience, as shown by Hurricanes Katrina and Sandy. Isabelle Maret and Thomas Cadoul’s study of New Orleans sheds light on the fundamental attachment to the land that connects residents to their city, neighborhood and home — even when it is destroyed — thus explaining their desire to resettle in the same spot even after a disaster. Attachment to place is stronger than the memory of risk. This resilience shown by residents, though it is the driving force behind a city’s rebirth, paradoxically becomes a factor that increases long-term vulnerability when the resettling process is not accompanied by an adequate policy of risk protection. For example, after Katrina, 83% of homes on the Atlantic seaboard were not adapted to protect against the major flood risk they face. After Sandy, the state of New Jersey allocated 10,000 dollars to all residents who decided to rebuild in the same spot after their homes were destroyed by the hurricane — without any subsequent measures taken to reinforce building codes or protect against risk… Damages caused by the hurricanes that continue to hit the East Coast of the United States demonstrate that a region’s resilience strategy must include long-term efforts focusing not only on the effects of the crisis, but also on its causes, even as they relate to residents.
That’s why the authors of the report “Résilience des réseaux dans le champ du MEDDE à l’égard des risques” (“Resilience of networks under the scope of MEDDE with respect to risks”) recommend the following: “Governance needs to account for the factor of time, by seizing the will to act coming from the sense of urgency and combatting the process of forgetting that comes with the passage of time, while it must also outline medium-term and long-term repairs”.
(2) Information and training. Towards a collective approach to risk management…
Any regional resilience strategy must involve local residents. However, the example of New Orleans shows that doing this means providing training in risk and promoting an understanding of the long-term challenges. This process should aim to do at least two things: to make a weakened state of operation more acceptable to the public, and to involve residents in an effective way.
Managing the urgent aspect of a crisis depends on how much dysfunction is acceptable to the public: the more the public understands the challenges at hand, by receiving information about the actions put in place, the more dysfunction it will be willing to accept, thereby relieving some of the pressure to take hasty actions that may have adverse effects in the long run. Recall the famous Somme flood rumor in Abbeville and the headlines it produced: “The Somme River powers a rumor mill in Abbeville”, “Rumors flood Abbeville and spark local ire”. In 2001, following a rainy winter that saturated the water table, the Somme and its tributaries gradually rose and flooded 2,800 homes, causing 1,100 people to evacuate at the height of the crisis. Though a state of emergency was declared on March 23, 2001, rumors quickly spread that waters from the Seine were diverted into the Somme to keep Paris from flooding, at a time when the Olympic Committee was scheduled to visit the city for its bid to host the 2008 Games. Despite relatively strong “technical” management of the crisis, including an efficient rehousing policy and increased monitoring to ensure proper application of flood prevention plans, the Somme flooding was poorly received by residents, who felt that they had been forgotten. At a time when instantaneous communication has reduced fact-checking and helped to propagate disinformation, we can easily understand the challenge and importance of good communication.
Good communication must begin as soon as possible, by providing risk training before a crisis occurs: among residents first, by explaining the phenomena that can increase risk and teaching the best responses to crises; next, among network operators and public authorities, so they can understand how consumers view these situations. Resilient management of the crisis and fallout will rely on the region’s ability to tie residents into the process, by empowering them to relay efforts, develop local solidarity networks and contribute to restoring the system’s equilibrium.
The blind spot of networks, or the rise of agrowth
-46.6% in Gdansk, -40.4% in Budapest, -15.6% in Berlin, -16.6% in Paris, -13.3% in Nantes, -4.1% in Madrid… Between 1991 and 2001, a majority of European cities saw their water consumption drop, at particularly exceptional rates in cities of the former Soviet bloc. In his contribution to the Cerisy colloquium on “Resilient Cities and Regions” organized in September 2017 by La Fabrique de la Cité, the Veolia Institute and Sabine Chardonnet Darmaillacq, Daniel Florentin highlighted another type of shock faced by networks: agrowth, meaning the absence of growth, or even degrowth. This is a much less spectacular type of shock than the brutal disasters mentioned above. Slow and deleterious, this problem can long go overlooked, as shown in Seville: it was not until other economic and social crises occurred that the impact of a constant decline in water consumption (-40% since the 1990s despite steady population growth) was at last revealed in 2013 and became a matter of public debate.
Are we not facing a paradox here? The previous article in our series showed that limiting the use of resources was a major factor in improving resilience. So why would it become a shock when applied to networks? The reason is that, while decreasing consumption promotes the resilience of the entire system by preserving hard-to-renew resources, when applied to a technical network that was built and scaled to meet a specific level of consumption, it becomes a vector of infrastructure vulnerability for two main reasons.
(1) Networks are scaled to operate at optimum capacity. Both too much and too little throughput can cause problems in operating and altering the network. For water networks in particular, underuse poses above all a health problem, as underlined by Daniel Florentin, evoking the bacteria crisis of summer 2008 in Magdeburg in eastern Germany: the combination of low consumption, water stagnating in pipes for fourteen consecutive days and high summer temperatures led to bacterial growth in the water pipes that exceeded recommended levels.
(2) The economic equilibrium of networks relies on a balance between high management costs and investments on one hand, and the income generated by resource consumption on the other. Declining consumption can upset the economic balance, as underlined by Daniel Florentin, “through a gradual seesaw effect: costs go up and income goes down.”
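This "seesaw effect" can be illustrated with a back-of-the-envelope sketch. Since a network's costs are largely fixed while its income is proportional to the volume sold, a drop in consumption pushes up the price per m3 needed to break even. The function name and all figures below are invented for this illustration; they are not drawn from the Magdeburg case or any real operator.

```python
# Illustrative sketch of the "seesaw effect" on a water network:
# costs are mostly fixed, income is volume-based, so falling consumption
# raises the price per m3 needed to break even. All figures are hypothetical.

def break_even_price(fixed_costs: float, variable_cost_per_m3: float,
                     volume_m3: float) -> float:
    """Price per m3 at which volume-based income covers total network costs."""
    total_costs = fixed_costs + variable_cost_per_m3 * volume_m3
    return total_costs / volume_m3

# Invented baseline: 40 M euros of fixed costs, 0.50 euro/m3 of variable cost.
FIXED = 40_000_000.0
VARIABLE = 0.50

# Simulate consumption falling by 20%, then 40%, from 30 M m3 per year.
for volume in (30_000_000, 24_000_000, 18_000_000):
    price = break_even_price(FIXED, VARIABLE, volume)
    print(f"{volume:>12,} m3/year -> break-even price {price:.2f} euros/m3")
```

Under these invented numbers, a 40% drop in volume raises the break-even price by roughly 50%, which is precisely the acceptability problem raised by the price-increase option discussed below.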
The challenge is at once operational (what new model for managing networks?), economic (how to increase returns?) and regional (is the classic model, based on a vast national network encompassing different regions and offering the same service at the same price, still relevant? What alternative model?).
“In fact, after witnessing this phenomenon of declining consumption, we realized we had to stop building and manage what was already there to keep everything from collapsing” (interview conducted by Daniel Florentin with an engineer from Trinkwasser Magdeburg (TWM), the intraregional water operator for the Magdeburg region, January 2013)
What options are available for responding to this phenomenon? Based on his analysis of declining water networks in the Länder of eastern Germany, Daniel Florentin has outlined several:
- Pivoting to a multi-utility model for major network operators.
- Raising the price per m3 in order to offset the lost volumes and rising network management costs. On one hand, this solution raises the crucial issue of acceptability among the local population and risks impoverishing residents who cannot afford the higher rates; on the other, it is not enough on its own to solve the long-term problem, because it does not attack the root of the problem, only some of its consequences (lost income).
- Rescaling networks to adapt to lower consumption levels. Here we run into the problem posed by the extremely high costs of adapting existing infrastructure. Not only are grant options limited for these operations, but as in the case of the water and purification networks in Germany, they are governed by the “full cost recovery” principle, which places the full cost burden on the end user. Keeping water cheap to ensure greater social and regional solidarity thus limits investment capacities, which, even when the goal is simply to maintain the network’s status quo and keep maintenance to a minimum, can create new infrastructure vulnerabilities in the long run. Solidarity vs investment? Degrowth challenges the fundamental economic model of large networks.
- Dividing and separating networks, which creates two parallel networks: a “first-class” network that is efficient, high-quality and expensive, and another standard option that would inherit the traditional network, remaining more affordable but offering a lower service quality due to limited use. That is what Simon Marvin and Stephen Graham identify in their book Splintering Urbanism, which analyzes network design in new urban areas. This trend poses a major threat to the idea of large networks as a public utility, an essential tool for organizing a region, a vector of national solidarity and a guarantor of the public interest. Instead, it prefers to adapt networks to the competing interests of various stakeholders in order to boost efficiency and transform networks into tools for differentiation (social and regional), with the associated risk of splintering (social and regional) and creating areas where infrastructure vulnerabilities compound existing social challenges.
- Creating new economies of scale by developing a geographic and pricing strategy that can increase revenue and maintain a sense of regional solidarity. This is the model adopted in Magdeburg, which led Daniel Florentin to see the city as “a sort of laboratory for managing the degrowth of networks”. This strategy relies first of all on expanding the regional coverage of the water operator, Städtische Werke Magdeburg (SWM), which absorbed the networks of neighboring regions; next, it relies on the intraregional water operator (TWM) to set up a shared governance system that favors negotiation and consensus, while enacting a solidarity price (Solidarpreis) on the regional level. The result is that Magdeburg, an area facing fewer consequences from the economic and social crisis and from degrowth, pays for other more vulnerable areas in the region, but gains a more robust network by reducing the health risks tied to underuse of the water network.
This last example is particularly interesting in that it sheds new light on the role played by network operators. SWM and TWM, though they are only city service companies and not elected bodies or administrations, have developed a regional vision that offers an alternative to competition and favors regional resilience by seeking to reduce a range of infrastructure and social vulnerabilities. As noted by Daniel Florentin, “infrastructure and network questions were long seen as the exclusive domain of technicians and the ‘hard’ sciences, as though they were socially or politically neutral topics. However, behind the primarily technical observations like declining water consumption, we find a great many social questions, tied to the challenges of adapting to a society based on more moderate use of several resources. Behind the questions of outsized infrastructure, we find political choices whose implications can generate major regional transformations.”
Waiting for Prince Charming? The challenge of maintenance
We have now come full circle: are large networks the Cinderella of resilience? Beyond the catchy phrasing, this question reveals a crucial point: just as Cinderella, of noble birth, is a pillar of her kingdom, large networks represent strategic infrastructure that is essential to keeping the urban system in good working order; just as Cinderella, when deprived of support and attention, loses her rank and capacity to act, large networks risk failing in their role as pillars of resilience if they are not held in proper esteem and correctly maintained. All the examples agree: maintenance is a crucial issue for networks. First, because a network in good condition helps repair the consequences of a disaster much faster. Next, because a network in poor condition and in need of maintenance not only makes a region more vulnerable by limiting its recovery capacity, but, worse yet, creates more risk. Recall the blackout of August 14, 2003, when nearly 50 million people lost power in Ontario, Ohio, Michigan, Pennsylvania, New York, Connecticut and New Jersey. The reason: a lack of maintenance allowed overgrown trees to take down several power lines. Finally, because a new framework for action is emerging, as shown by the water networks in eastern Germany: a new framework based on maintaining existing networks instead of expanding them.
This does not make our task any easier. Because what action can we take when faced with a double bind? On one hand, public authorities invested heavily in building these networks and therefore need to increase their returns. On the other hand, how can we introduce maintenance into our project methodologies, when construction has always been the standard? Resilience opens several interesting avenues for taking action by helping to change the standards: as demonstrated by Daniel Florentin, it makes it possible to shift from a maintenance-repair model to a maintenance-transformation model. And if the path towards this new standard had a name, might it be… innovation?
 Stephen Graham, Simon Marvin (2001) Splintering Urbanism. Networked Infrastructures, Technological Mobilities and the Urban Condition, Routledge.
 CGEDD (2013) “Vulnérabilité des réseaux d’infrastructures aux risques naturels”, 92 p. http://www.cgedd.developpement-durable.gouv.fr/IMG/pdf/008414-01__rapport_cle523312.pdf
 French Ministry of Ecology, Sustainable Development and Energy (2015) Les enjeux économiques de la résilience des réseaux. Report no. 008414–02 established by Marie-Anne BACOT, Jean-Louis DURVILLE and Laurent WINTER
 French Ministry of Ecology, Sustainable Development and Energy (2015) Résilience des réseaux dans le champ du MEDDE à l’égard des risques. Etude des conditions de retour à la normale après une situation de crise affectant les grands réseaux. Report no. 008414–03, established by Yvan AUJOLLET, Philippe BELLEC, Thierry GALIBERT, Gérard LEHOUX, Jean-Michel NATAF and Laurent WINTER
 MARET Isabelle, CADOUL Thomas (2008) “Résilience et reconstruction durable : que nous apprend La Nouvelle-Orléans ?”, Annales de géographie, 5 (no. 663), p. 104–124. DOI: 10.3917/ag.663.0104. URL: https://www.cairn.info/revue-annales-de-geographie-2008-5-page-104.htm
 French Ministry of Ecology, Sustainable Development and Energy (2015) Résilience des réseaux dans le champ du MEDDE à l’égard des risques. Etude des conditions de retour à la normale après une situation de crise affectant les grands réseaux. Report no. 008414–03, established by Yvan AUJOLLET, Philippe BELLEC, Thierry GALIBERT, Gérard LEHOUX, Jean-Michel NATAF and Laurent WINTER, p. 56
 Libération, 10 April 2001
 Le Monde, 11 April 2001
 Daniel Florentin is Assistant Professor of Environment at MINES Paris Tech, with the Institut Supérieur d’Ingénierie et de Gestion de l’Environnement (ISIGE).
 Daniel Florentin (2017) “Des réseaux qui décroissent, des solidarités qui s’accroissent ? Baisse des consommations d’eau et d’énergie et nouveau contrat social et territorial”, Métropolitiques www.metropolitiques.eu/Des-reseaux-qui-decroissent-des.html
 Stephen Graham, Simon Marvin (2001)
 Daniel Florentin (2017)
 Daniel Florentin (2015) “La vulnérabilité des objets lents : les réseaux d’eau. Les enjeux des diminutions de consommation d’eau vus à travers un exemple allemand”, Les Annales de la recherche urbaine, no. 110, City and Vulnerabilities, pp. 152–163. www.persee.fr/doc/aru_0180-930x_2015_num_110_1_3176