The collapse of sensemaking

Pedro Portela · Published in The HiveMind · 15 min read · Jan 3, 2023

Lessons from aviation to social change work

Every night, my wife asks me what I’m watching. “Oh, the usual,” I reply, slightly ashamed. She knows what that means — videos about aircraft accidents. It may sound morbid, but I’m not interested in the drama. I only watch videos made by aerospace professionals, who provide technical details about the series of events that led to the accident or incident. Not all of them are fatal crashes — some are just reports of incidents that didn’t result in any injuries or fatalities, but were serious enough to be investigated.

Many people don’t realize that the aerospace industry takes every incident, whether it’s a breach of protocol or a full-blown accident, as an opportunity to make the industry safer. That’s why the odds of being involved in an airplane incident are so low. Every mishap is thoroughly investigated, analyzed, and debated by experts until the industry is confident that it knows what went wrong and can prevent a similar series of events from occurring again.

I watch these videos, and read reports and books on the subject, for several reasons. First, I love learning about the technical details of complex machines and systems. Second, I’m fascinated by how humans create and interact with these systems, especially in abnormal circumstances. And third, I believe that every incident report contains valuable lessons about humanity and the human condition that can be applied to other fields.

Recently, I watched a video by line training pilot and YouTuber Petter Hornfeldt about the Air France AF447 flight that crashed in the Atlantic Ocean off the coast of Brazil in 2009. It really affected me. As an engineer, I understood the technical errors that led to the crash. As a human being, I grieved for the loss of life. And as someone who could empathize with the pilots, I was stunned by the sheer confusion and panic they must have felt as they tried to make sense of a situation that was rapidly spiraling out of control.

The AF447 crash occurred when the aircraft’s airspeed sensors iced over in bad weather, causing the autopilot to disconnect and the flight controls to revert to a degraded mode. The pilots had to take manual control of the aircraft at an altitude of almost 11 km, in the middle of heavy turbulence, at 2:10am. Just four minutes later, at 2:14am, the plane crashed into the ocean, killing everyone on board.

Losing all airspeed indicators at the same time is a rare occurrence, but it shouldn’t have been a death sentence. There have been instances where pilots have successfully landed planes in similar circumstances. So why did the AF447 crew fail? The answer, as it often is in these cases, is a complex combination of human and technical factors.

There are countless lessons to be learned from the AF447 crash, and from other incidents like it. Some of these lessons relate to the design and maintenance of aircraft, while others focus on the training and decision-making of pilots. But ultimately, they all come back to the human element — our strengths and limitations, our ability to adapt and overcome challenges, and our capacity for resilience and growth. By studying these incidents, we can improve not just the safety of the aerospace industry, but also our understanding of ourselves and the world we live in.

Karl Popper, the philosopher of science, once used the metaphor of clouds and clocks to describe the difference between problems that can be solved and those that can’t. Clock problems, he said, are those that can be solved through the application of the scientific method — by gathering data, formulating hypotheses, and testing those hypotheses through experimentation. Cloud problems, on the other hand, resist the scientific method, because they involve complex systems that are too difficult to fully understand and predict.

In the case of the AF447 crash, we might say that the problem of how to safely land a plane with faulty airspeed indicators is a clock problem — one that can be solved through careful analysis and experimentation. Yet, for the pilots that night, it seemed like they were handling a cloud problem — a complex, multi-faceted challenge that is difficult to fully grasp and solve. By studying incidents like AF447, we can learn more about the underlying causes of these cloud problems and develop strategies for addressing them. But we must also be realistic about the limitations of our knowledge and the inherent uncertainty that surrounds complex systems.

The collapse of sensemaking

When humans encounter a sudden, intense, and unexpected stimulus, they experience the startle response, also known as amygdala hijack or the “fight or flight” reflex. This response is a leftover from our evolutionary past, when it helped us survive threats like predators. While the triggers for the startle reflex have become more complex in the modern world, it still serves the same basic function: to protect us from danger.

There are two types of startle responses: a knee-jerk reaction to a sudden surprise, and a sustained, anxiety- or panic-inducing response to a prolonged surprise. To minimize the amount of time we spend in a state of stress and fear, we have developed ways of controlling our environment and using storytelling, mental modeling, and language to understand and make sense of it. We use simulations like theater, movies, and games to model reality, and share and collaborate on these models with others.

But it’s important to remember that models are always approximations and have limits to their applicability. On the night of the AF447 crash, first officer Bonin and relief pilot Robert were flying through storm clouds in the Intertropical Convergence Zone off the coast of Brazil. The weather was difficult, but not unusual for that time of year. The pilots discussed the possibility of flying around or above the storms, but there were no real alternatives. Bonin, the pilot flying, was a bit tense and tired. It’s not clear if he was well-rested before the flight, and even if he was, he was on duty during the early morning hours when cognitive abilities are known to be diminished. This was not the ideal time to handle an upset.

At 2:10am on the night of the AF447 crash, the pilots found themselves in a sudden and unexpected situation when ice formed around and inside the three redundant airspeed sensors, causing them to stop sending reliable information to the flight control computers. The autopilot disconnected and the flight controls switched to alternate law, removing all flight envelope protections. The combined knowledge and wisdom of the engineers and scientists contained in the computers was gone, and the pilots had seconds to pick up where the autopilot left off, consolidate their situational awareness, and start flying the plane in a way they likely had never done in their training.

The pilot flying input a series of commands that put the airplane into a steep climb, but no one is sure why. Because Airbus sidesticks are not mechanically linked, the pilot monitoring couldn’t see or feel these commands and was struggling to read and communicate the status of the aircraft to the pilot flying. In less than 60 seconds, the airplane was reaching its maximum altitude, losing speed, pitching up, and entering a deep aerodynamic stall. Reading the transcript of the cockpit voice recorder, one can feel the ominous collapse of sensemaking as all of the models failed: the models used to build the speed sensors, the mathematical models running on the flight control computers, and the mental models of the pilots. The airplane was fully flyable, but with ambiguous alerts and the pilots boxed into their own mental models, it’s possible that they never fully understood what was happening. This is a frightening situation to be in, and it highlights the importance of finding the headspace for sensemaking when operating in potentially complex environments.

Karl Weick’s work on sensemaking in organizations (Weick 1993; Weick and Sutcliffe 2015) highlights the importance of individuals and groups being able to make sense of complex and ambiguous situations. He defines sensemaking as a process that involves gathering and interpreting information, generating hypotheses and explanations, and adapting to changing circumstances. In the case of the AF447 crash, the pilots were required to gather and interpret information about the status of the aircraft, generate hypotheses about what was happening, and adapt their actions to the changing circumstances.

However, the pilots were unable to make sense of the situation, leading to confusion and ultimately a collapse of sensemaking. This can be seen in the transcript of the cockpit voice recorder, as the pilots struggle to understand what is happening and how to respond. Weick’s work suggests that this type of collapse can occur when there is a lack of relevant information, conflicting or contradictory data, or cognitive biases that prevent accurate interpretation of the information available. In this case, the pilots were dealing with ambiguous alerts from the aircraft and may have been constrained by their own mental models and cognitive biases.

Increasingly, the performance indicator for teams working in potentially complex situations is their capacity for sensemaking: their ability to come together with their individual perspectives on a given situation, weave a map of shared meaning, think creatively about a course of action, decide upon it, and act it out. I believe this is one step above good teamwork. It is a deeper, almost spiritual level of unity and harmony, equivalent to entering a flow state as a team: a way of coming together and using the combined strengths and talents of the group to achieve something greater than any one individual could achieve alone.

Dodging the bullet

There are numerous other examples out there, hidden in the archives of incident investigations, of crews that were able to “save the day” in the face of extremely unlikely yet potentially catastrophic upset events. And usually this can be explained by the creative and mindful interplay between human crews, sophisticated machinery and complex systems.

On a flight in 2018, a Malaysia Airlines Airbus A330 experienced an issue similar to AF447’s after taking off from Brisbane, Australia. Prior to the flight, all three of the aircraft’s airspeed sensors had been covered to prevent mud wasps from building nests inside them. These covers were supposed to be removed before the flight, but somehow they were left in place and the aircraft was dispatched. Once the plane was in the air, the covered pitot tubes caused the autopilot to disconnect and the flight control systems to revert to “alternate law,” leaving the pilots to fly the plane manually. After a period of confusion and miscommunication, the pilots were able to regain control of the aircraft and land it back safely, without the passengers realizing anything was amiss.

On November 4, 2010, Qantas Flight 32, an Airbus A380, took off from Singapore Changi Airport bound for Sydney, Australia. Shortly after takeoff, one of the plane’s four engines suffered an uncontained failure, causing debris to puncture the wing and fuselage. The pilot, Captain Richard de Crespigny, and his crew methodically worked through the cascading failures and were able to safely land the plane back at Changi Airport. There were no fatalities or serious injuries among the 469 passengers and crew on board.

The incident was caused by the failure of a turbine disk within the engine: the disk shattered, sending debris flying in all directions and damaging the wing and fuselage of the aircraft.

The huge Rolls-Royce Trent 900 engine was destroyed. The extent of damage was unprecedented in Airbus’s history. Two heavy chunks tore through the wing, traveling at approximately two times the speed of sound. The fan blades and chunks acted like the explosive core of a hand grenade, ripping wing panels into shrapnel that sprayed like missile fragments over the fuselage as far as the massive tail sections. One chunk also ripped through the aircraft’s belly, severing hundreds of wires. Over 600 wires were cut causing almost every aircraft system to become degraded. I think one of the aircraft’s two backbone networks failed, confusing both flight warning computers. The hydraulics, electrics, brakes, fuel, flight controls and landing gear systems were all compromised. No Airbus aircraft had ever suffered so much damage to so many systems. (de Crespigny 2012)

Although the circumstances are different, in both of these examples and others, pilots were able to enter a joint flow state and fly the airplane back to safety. I think there is something we can learn from these near misses. Something that hints at ways we can all prepare to deal with the unexpected when it, inevitably, knocks on our door.

Propositional knowledge

The pursuit of knowledge is a lifelong journey, one that can be both enriching and rewarding. In today’s world, we have access to an abundance of information on just about any topic imaginable. Whether we are facing personal challenges or global crises, there is likely a wealth of information available through articles, videos, courses, blog posts, podcasts, and documentaries. To ignore these resources is to miss out on valuable opportunities to learn and grow.

As you seek out knowledge, do so with purpose and discipline. If your work is in the social sciences or humanities, consider the value of literature and history in shaping your understanding of the world. If you work in an organization, seek out information about technologies, project management, human behavior, governance, and decision-making. Be curious and seek out knowledge that will help you build a more accurate model of the world.

Those who excel in their fields are often passionate about the systems they work within and are eager to learn every detail. Pilots, for example, may read incident reports and recommendations to stay up-to-date on the intricacies of their aircraft and be prepared for emergency situations. The crew of QF32, who successfully navigated a major incident, were likely well-versed in the details of their aircraft’s systems. It is this attention to detail and dedication to learning that allows them to excel in their profession. In the Malaysia flight, counterintuitively switching off the aircraft’s air data computers allowed the pilots to fly back to the runway using a last-resort indication of safe and unsafe speeds. It is these small, yet critical, details that can make all the difference.

Embodied knowledge

“A good pilot uses his superior knowledge to avoid situations that require his superior skill.” (Palmer and Mark)

Becoming a pilot is about more than just reading books or watching YouTube videos. A significant portion of pilot training takes place on simulators, which provide a realistic learning environment that replicates real-world work and scenarios. These simulators can range from computer programs like Microsoft Flight Simulator to full-scale, full-motion devices that mimic the motion of an aircraft.

Simulators operate using models, which are contained within an artificial environment created by the simulator designer and disconnected from real-world conditions. The more complex and elaborate the model supporting a simulator, the more realistic it will be, but it will never fully replicate reality. Pilots use flight simulators to practice procedures and expose themselves to as many different situations as possible, in the hope that this training will kick in during real-life emergencies.

The author, in a flight simulator, trying to make sense of an Airbus A320 flight deck.
The author flying in a simulated storm.

The philosophy behind simulations and their design is not limited to technical fields. Simulations and modeling can also be used to understand the collapse of societies, the spread of social behaviors, and the emergence of cooperation or segregation. In these cases, network theory may be used to represent how people coordinate, communicate, and build complex societies. While these models can be useful in understanding and replicating real-world behaviors, it is important to remember their limitations and not assume that they fully capture reality.
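To make this concrete, here is a minimal sketch of Thomas Schelling’s classic segregation model, one of the simplest simulations of the kind described above. The code and its parameters (grid size, tolerance threshold, vacancy rate) are my own illustrative choices, not anything from the incidents discussed here: agents of two types relocate whenever too few of their occupied neighbours share their type.

```python
import random

SIZE = 20        # grid side length (illustrative)
TOLERANCE = 0.3  # minimum share of like neighbours an agent accepts
VACANCY = 0.1    # fraction of empty cells

def make_grid():
    # Two agent types ("A", "B") plus vacant cells (None).
    cells = ["A", "B", None]
    weights = [(1 - VACANCY) / 2, (1 - VACANCY) / 2, VACANCY]
    return [[random.choices(cells, weights)[0] for _ in range(SIZE)]
            for _ in range(SIZE)]

def like_share(grid, r, c):
    # Share of occupied neighbours (wrapping at the edges) with the same type.
    me = grid[r][c]
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbours if n is not None]
    return sum(n == me for n in occupied) / len(occupied) if occupied else 1.0

def step(grid):
    # Every unhappy agent moves to a randomly chosen vacant cell.
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and like_share(grid, r, c) < TOLERANCE]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))

def segregation_index(grid):
    # Average share of like neighbours across all agents (~0.5 when well mixed).
    shares = [like_share(grid, r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None]
    return sum(shares) / len(shares)

grid = make_grid()
print(f"segregation before: {segregation_index(grid):.2f}")
for _ in range(50):
    step(grid)
print(f"segregation after:  {segregation_index(grid):.2f}")
```

Even though each agent tolerates being a local minority (it only demands 30% like neighbours), runs of this sketch typically drive the average share of like neighbours far higher. The model itself is a clock-like toy, yet it captures why the social outcomes it mimics behave like Popper’s clouds: no individual intends the collective pattern.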

Simulators don’t always have to be virtual. Theater and role-playing are also forms of simulation, and there is little difference between simulators and games other than their intended use. Children engage in simulations all the time through pretend play. While this concept may be familiar to those in the social change sector, it may be helpful to think of these activities as simulators in order to improve their effectiveness.

Heuristic knowledge

In my coaching practice, I have found the concept of “aviate, navigate, and communicate” to be a useful framework for guiding my clients through difficult or unexpected situations. The idea of setting priorities and focusing on the most pressing tasks at hand can be especially helpful in times of crisis or chaos.

Knowing our priorities and acting on them in times of crisis is crucial for maintaining control and making informed decisions. With clear priorities, we can navigate difficult or unexpected situations and emerge stronger and more resilient on the other side.
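As a toy illustration (my own sketch, not an aviation procedure), the mantra can be read as a strict priority rule: at every moment, attend to the highest-priority need that is not yet under control, and let everything below it wait.

```python
# A toy sketch of "aviate, navigate, communicate" as a strict priority rule.
# The task names and status flags are illustrative assumptions, not a real
# checklist or avionics interface.
PRIORITIES = ["aviate", "navigate", "communicate"]

def next_focus(status):
    """Return the highest-priority task that is not yet under control."""
    for task in PRIORITIES:
        if not status.get(task, False):
            return task
    return "monitor"  # everything handled: keep scanning for change

# Upset begins: nothing is under control, so fly the plane first.
print(next_focus({"aviate": False, "navigate": False, "communicate": False}))  # aviate
# Aircraft stabilized: only now does navigation become the focus.
print(next_focus({"aviate": True, "navigate": False, "communicate": False}))   # navigate
```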

Heuristic knowledge, or knowledge based on experience and practical rules rather than formal proof, is a valuable tool for navigating chaotic events. It is often gained through trial and error or direct experience, and it provides a practical framework for making decisions in unfamiliar or unpredictable situations. In particular, heuristics offer a set of mental shortcuts or “rules of thumb” that help us make decisions quickly and effectively. These shortcuts are grounded in past experiences and observations, and they help us identify patterns and make informed decisions without extensive analysis or data gathering.

The captain of QF32 used such a rule of thumb in the first seconds after the engine exploded: he quickly pressed the “altitude hold” button on the autopilot, effectively stopping the aircraft from climbing, reducing stress on the engines and stabilizing the workload.

The Network of Brains

The positive outcome of incidents such as the Qantas QF32 and Malaysia 134, among others, can be attributed to a crucial aspect of aviation: crew resource management (CRM). Developed in the 1970s in response to several major aviation accidents caused by poor communication and decision-making among crew members, CRM is a training program that aims to improve teamwork and communication in the pursuit of aviation safety. Initially implemented in the airline industry, CRM has since been adopted by various aviation organizations, including military, business, and general aviation. It is now widely recognized as a vital component of aviation safety and is required for all aviation professionals in many countries.

CRM is more than just teamwork; it is a culture that prioritizes the preservation of shared sensemaking, especially in emergencies. The AF447 disaster serves as a cautionary tale of the importance of CRM, as the pilots were unable to recover from the initial shock and effectively communicate and work together, resulting in the tragic loss of the aircraft. In contrast, the Qantas QF32 and Malaysia 134 flight crews were able to overcome the initial stress response and utilize standard operating procedures to successfully navigate their emergencies. The first officer of the Malaysia 134 flight even took control of the aircraft, against company regulations, due to his greater experience with that particular type of aircraft. This is a testament to how CRM enables effective teamwork and decision-making under pressure.

But the value of CRM extends beyond the aviation industry. It is crucial for the preservation of shared sensemaking in other critical fields such as ecosystem protection and peacebuilding. As we strive to improve our organizations and societies, it is vital to consider how we can utilize our best resources, skills, information, and decision-making abilities to create a society-wide form of collective resource management.

The recognition that the humans in the loop are the first and last line of defense against chaos comes with a necessity to understand and conceptualize the patterns of collective cognition, sensemaking, and decision-making that take place in a social network. In short, it highlights the importance of governance in human affairs.

Outro

I like to think of myself as having one foot in the engineering world and the other in the humanities. I was formally trained as an engineer, worked in the aerospace sector for a decade, and am now finishing another decade in the social change / peacebuilding field. What overlaps between the two worlds is the dimension of human relationships, interactions around shared meanings, and the constant effort of sensemaking.

In this essay, I aimed to show that aviation and social systems change, two seemingly unrelated fields, actually have much in common. At their core, both deal with the complex and interconnected nature of human organization and the ways in which we work together in challenging, high-stakes situations. Through more than a century of aviation’s development, we have gained valuable insights into how to structure and improve collaboration in highly technical, safety-critical environments. It stands to reason, then, that we could apply these same principles to the pursuit of peace, transforming our societies into highly reliable ones that prioritize and strive for peaceful outcomes.

Yet we have also recognized the paradox of ultra-safe systems: the safer our systems become, the less capable we are of dealing with uncertainty. Aviation addresses this paradox by investing in more advanced training and models, but crucially also in the sensemaking capability of the crew when something unexpected happens.

What lessons can those who work in social systems change take from this industry? What is the equivalent to CRM in the social change world and what can we do to make running our societies peacefully a safety-critical issue?

