We are in an age of chaos, an era that intensely, almost violently, rejects structure. It isn’t simple instability, it’s a reality that seems to actively resist efforts to understand what the hell is going on. This current moment of political mayhem, climate disasters, and global pandemic — and so much more — vividly demonstrates the need for a way of making sense of the world, the need for a new method or tool to see the shapes this age of chaos takes. The methods we have developed over the years to recognize and respond to commonplace disruptions seem increasingly, painfully inadequate when the world appears to be falling apart. It’s hard to see the big picture when everything insists on coloring outside the lines.
There has always been uncertainty and complexity in the world, and we have devised reasonably effective systems to figure out and adapt to this everyday disorder. From weighty institutions like “law” and “religion” to habituated norms and values, even to ephemeral business models and political strategies, much of what we think of as composing “civilization” is ultimately a set of cultural implements that allow us to domesticate change. If we can make disruptive processes understandable, we hope, maybe we can keep their worst implications in check.
One of the better ways we’ve had of framing the familiar (if unsettling) dynamics of change is the “VUCA” concept. VUCA is an acronym meaning Volatile, Uncertain, Complex, and Ambiguous. The term has proven to be a useful sense-making framework for the world over recent decades. It underscores the difficulty of making good decisions amid frequent, often jarring and confusing, changes in technology and culture.
The concept of “VUCA” appeared in the work of the US Army War College in the late 1980s, spread quickly through military leadership in the 1990s, and by the early 2000s had started to appear in books on business strategy. It’s a smart phrasing, illustrating the kind of world that emerged from an increasingly networked, heavily digital, post-Cold War setting. By the new century, volatility, uncertainty, complexity, and ambiguity had all become commonplace concepts among people working in strategy and planning.
The kinds of tools we’ve created to manage this level of change — futures thinking and scenarios, simulations and models, sensors and transparency — are mechanisms that allow us to think and work within a VUCA environment. These tools don’t tell us what will happen, but they enable us to understand the parameters of what could happen in a volatile (uncertain, etc.) world. They are methodologies built on the need to create a structure for the indefinite.
The concept of VUCA is clear, evocative, and increasingly obsolete. We have become so thoroughly surrounded by a world of VUCA that it seems less a way to distinguish important differences than simply a depiction of our default condition. Using “VUCA” to describe reality provides diminishing insight; declaring a situation or a system to be volatile or ambiguous tells us nothing new. To borrow a concept from chemistry, there has been a phase change in the nature of our social (and political, and cultural, and technological) reality — we’re no longer happily bubbling along, the boiling has begun.
With a new paradigm we need a new language. If we set VUCA aside as insufficient, we still need a framework that makes sense of not just the present world but its ongoing consequences as well. Such a framing would allow us to illustrate the scale of the disruptions, the chaos, underway, and enable consideration of what kinds of responses would be useful. Ideally, it would serve as a platform to explore new forms of adaptive strategies. Scenarios, models, and transparency are useful handles on a VUCA world; what might be the tools that would let us understand chaos?
As a way of getting at that question, consider BANI.
An intentional parallel to VUCA, BANI — Brittle, Anxious, Nonlinear, and Incomprehensible — is a framework to articulate the increasingly commonplace situations in which simple volatility or complexity are insufficient lenses through which to understand what’s taking place. Situations in which conditions aren’t simply unstable, they’re chaotic. In which outcomes aren’t simply hard to foresee, they’re completely unpredictable. Or, to use the particular language of these frameworks, situations where what happens isn’t simply ambiguous, it’s incomprehensible.
BANI is a way to better frame, and respond to, the current state of the world. Some of the changes we see happening to our politics, our environment, our society, and our technologies are familiar — stressful in their own way, perhaps, but of a kind that we’ve seen and dealt with before. But so many of the upheavals now underway are not familiar, they’re surprising and completely disorienting. They manifest in ways that don’t just add to the stress we experience, they multiply that stress.
Let’s drill down a bit on what each of the words in the BANI framework means.
“B” is for Brittle.
When something is brittle, it’s susceptible to sudden and catastrophic failure. Things that are brittle look strong, may even be strong, until they hit a breaking point, then everything falls apart. Brittle systems are solid until they’re not. Brittleness is illusory strength. Things that are brittle are non-resilient, sometimes even anti-resilient — they can make resilience more difficult. A brittle system in a BANI world may be signaling all along that it’s good, it’s strong, it’s able to continue, even as it’s on the precipice of collapse.
Brittle systems do not fail gracefully, they shatter. Brittleness often arises from efforts to maximize efficiency, to wring every last bit of value — money, power, food, work — from a system. Brittleness can be found in monocultures, where growing a single crop means maximum output, until a bug that only affects that one particular species or strain destroys the entire field. We see brittleness in the “resource curse,” when countries or regions are rich with a useful natural resource, so focus entirely on its extraction… and then that resource becomes functionally worthless after a change in technology. Brittleness emerges from dependence on a single, critical point of failure, and from the unwillingness — or inability — to leave any excess capacity, or slack, in the system.
Clearly, brittleness is not a new development — but in the past, the consequences of catastrophic failures (e.g., the potato famine, guano obsolescence) were more or less regionally limited. In today’s geopolitically, economically, and technologically interconnected world a catastrophic breakdown in one country can cause a ripple effect around the planet (e.g. the Greek Debt Crisis, Arab Spring). Moreover, we’re seeing brittleness manifest in new and surprising ways. Few would have seen democracy as a brittle system, until we realized how much functional democracy depends upon accountability for intentional mistruths.
How many of the fundamental systems upon which human survival depends can now be reasonably thought of as “brittle”? Energy grids? Global trade? Food? If brittleness comes from the absence of a cushion for failure, then any systems that depend upon maximal output run the risk of collapse if that output drops. Because our core systems are so frequently interconnected, it’s entirely possible that the failure of one important component can lead to a cascade of failures. In a tightly-interwoven set of systems, it’s dangerous for any one piece to fail.
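The cascade dynamic described above can be sketched as failure propagating through a dependency graph. This is a minimal illustrative simulation, not a model of any real infrastructure; the system names and links are hypothetical.

```python
# A minimal sketch of cascading failure in a dependency graph.
# The systems and the links between them are hypothetical, chosen
# only to illustrate how one failure can ripple outward.

from collections import deque

# Each system maps to the systems that depend on it.
dependents = {
    "energy grid": ["water supply", "telecom"],
    "telecom": ["finance", "logistics"],
    "water supply": [],
    "finance": ["global trade"],
    "logistics": ["food distribution"],
    "global trade": [],
    "food distribution": [],
}

def cascade(start):
    """Return every system that fails once `start` fails."""
    failed = {start}
    queue = deque([start])
    while queue:
        system = queue.popleft()
        for dep in dependents.get(system, []):
            if dep not in failed:
                failed.add(dep)
                queue.append(dep)
    return failed

# A single failed component takes six others down with it.
print(sorted(cascade("energy grid")))
```

In a loosely coupled version of the same graph, with fewer dependency links, the cascade would stop after one or two steps; tight interconnection is what turns a local failure into a systemic one.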
Unfortunately, thinking about that sort of thing is likely to induce quite a bit of anxiety.
So “A” is for anxiety-inducing or, more simply, Anxious.
Anxiety carries with it a sense of helplessness, a fear that no matter what we do, it will always be the wrong thing. In an Anxious world, every choice appears to be potentially disastrous. It’s tied closely to depression, and to fear. An anxious world is one in which we’re constantly waiting for the next shoe to drop — or, in a more modern cliché, where every day is F5 Friday, just smashing the refresh key to update the news, to see what horror shows up next. Alternatively, we may do our absolute best to avoid any and all sources of news about the world.
Anxiety can drive passivity, because we can’t make the wrong choice if we don’t choose, right? Or it can manifest as despair, that horrified realization that we missed the chance to make a critical decision, and we won’t get another opportunity. Or that awful gut feeling that there’s a very real possibility that people we depend upon will make a bad decision that will leave us all far worse off than before.
Our media environment seems perfectly designed to enhance anxiety. It stimulates us in a way that prods excitement and fear. The media presentation of information focuses on the immediate over the accurate. We are surrounded by what we might think of as malinformation, a broad category of bad knowledge that encompasses misinformation, disinformation, hoaxes, exaggerations, pseudo-science, fake news, fake fake news, and more. Malinformation is the crystallization of what triggers anxiety.
Some of us may adapt by creating defensive malinformation, poisoning the data stream with intentional falsehoods about ourselves, making things worse but at least keeping some of it under our own control. Or we adapt by embracing and elevating charismatic figures, or hating and mocking charismatic figures, and seeing every event as a sign of a conspiracy or of a counter-conspiracy. Knowing that the world has secret masters in control of all things has a remarkably calming effect for many.
Too many of us adapt by taking a quick way out. Globally, suicide rates are on the rise. We see it increasing in frequency among those who discover that the seemingly good choices they’ve made over the years were actually wrong, were dead ends, or were even evil. Hardworking, honest people who once considered themselves in control of things, discovering that, no, they aren’t… and they probably never were.
Not necessarily because someone or something else was actually in control of things, but because control was never possible to begin with.
In this spirit, “N” is for Nonlinear.
In a Nonlinear world, cause and effect are seemingly disconnected or disproportionate. Perhaps other systems interfere or obscure, or maybe there’s hidden hysteresis, enormous delays between visible cause and visible effect. In a nonlinear world, results of actions taken, or not taken, can end up being wildly out of balance. Small decisions end up with massive consequences, good or bad. Or we put forward enormous amounts of effort, pushing and pushing yet with little to see for it.
We’re in the midst of a crisis of nonlinearity with COVID-19. The scale and scope of this pandemic go far beyond everyday experience; the speed at which the infection spread over its first few months was staggering. Even though some locations have been successful at reducing the rate of infection, the increase in world-wide cases still trends towards the exponential.
The concept of “flattening the curve” is inherently a war against nonlinearity.
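That war can be made concrete with a toy projection. This is a sketch of compound growth, not an epidemiological model, and the growth rates below are hypothetical numbers chosen only to show how small changes in the rate per step produce wildly disproportionate outcomes.

```python
# A minimal sketch (not an epidemiological model) of why small changes
# in a growth rate compound into wildly different outcomes: the essence
# of nonlinearity, and of "flattening the curve."

def cases_after(days, daily_growth, initial=100):
    """Project case counts under simple compounding growth."""
    cases = initial
    for _ in range(days):
        cases *= daily_growth
    return round(cases)

# Hypothetical growth rates, for illustration only.
unchecked = cases_after(30, 1.20)   # 20% daily growth
flattened = cases_after(30, 1.05)   # 5% daily growth

print(unchecked, flattened)
```

Dropping the daily growth rate from 20% to 5% — a modest-looking change — leaves the flattened scenario more than fifty times smaller after a single month. Linear intuition badly underestimates both directions.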
Climate disruption is another nonlinear problem. We see around us, with growing intensity and frequency, real-world examples of the impacts of global warming-induced climate change… and we’re barely up one degree Celsius over pre-industrial levels.
Here’s something that not a lot of people know: what we’re seeing now is primarily the result of carbon emissions up through the 1970s and 1980s. There’s massive inertia in the global climate system, and the consequences don’t manifest immediately. That’s the “hysteretic” element to our climate — a long lag between cause and full effect.
That means that even if we’d gone all-in on the Kyoto Protocol twenty years ago, we would likely still be seeing the kinds of climate chaos now underway. And it means that we could stop putting any carbon emissions in the atmosphere right now and we’d still see additional warming for at least another generation, and continued high temperatures for centuries. The human brain simply didn’t evolve to think at this scale.
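The hysteretic lag described above can be sketched as a simple first-order system: temperature slowly chases an equilibrium set by cumulative emissions, so warming continues long after emissions stop. This is a toy illustration, not a climate model, and the time constant and sensitivity below are arbitrary numbers chosen for clarity.

```python
# A toy first-order lag model (purely illustrative, not a climate model):
# temperature relaxes slowly toward an equilibrium set by cumulative
# emissions, so warming continues long after emissions stop.

def simulate(years, emission_years, tau=40.0, sensitivity=0.01):
    """Cumulative emissions set the equilibrium; temperature chases it."""
    temp, cumulative = 0.0, 0.0
    history = []
    for year in range(years):
        if year < emission_years:
            cumulative += 1.0               # one arbitrary unit per year
        equilibrium = sensitivity * cumulative
        temp += (equilibrium - temp) / tau  # slow relaxation, lag ~ tau years
        history.append(temp)
    return history

temps = simulate(years=150, emission_years=50)

# Emissions stop at year 50, yet temperature keeps climbing for the
# remaining century of the simulation.
print(temps[49] < temps[99] < temps[149])  # prints True
```

The point of the sketch is the shape, not the numbers: cause (emissions) and full effect (temperature) are separated by decades, which is exactly the kind of delay human intuition handles worst.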
COVID-19 and the planet’s climate aren’t the only examples. Nonlinearity, especially in the form of disproportionate cause and effect, is clearly visible in the world of politics, especially international politics. How much did the Russian hack on the U.S. elections of 2016 cost, compared to the impact it had on the world? Or, more broadly, we can understand terrorism as nonlinear warfare, in terms of the money and effort required to undertake it versus the money and effort spent to spot it, prevent it, and/or avenge it.
We see it in economics, from the rapid spread of financialization and the creation of novel financial instruments to hyperkinetic algorithmic trading systems. Demands for incessant, ever-increasing growth are ultimately a demand for nonlinearity.
Most importantly, nonlinearity is ubiquitous in biological systems. The growth and collapse of populations, the effectiveness of vaccination, swarm behavior, and, as noted, the spread of pandemics — all of these have a strongly nonlinear aspect. From outside, they’re fascinating to watch; from within, they’re staggering to experience, as we are now discovering.
And sometimes, they’re impossible to understand. So “I” is for Incomprehensible.
We witness events and decisions that seem illogical or senseless, whether because the origins are too long ago, or too unspeakable, or just too absurd. “Why did they do that?” “How did that happen?” We try to find answers but the answers don’t make sense. Moreover, additional information is no guarantee of improved understanding. More data — even big data — can be counter-productive, overwhelming our ability to understand the world, making it hard to distinguish noise from signal. Incomprehensibility is, in effect, the end state of “information overload.”
One way that it manifests is with systems and processes that appear to be broken, but still work, or are non-functional without any apparent logic or reason. It’s a programmer cliché to encounter software that only operates when a particular non-functional, seemingly unrelated line remains in the code. Take it out, the program crashes or doesn’t compile. Leave it in — even though it doesn’t seem to do anything — and the program works. Why? Incomprehensible.
Incomprehensibility seems to be intrinsic to the kind of machine learning/artificial intelligence systems we’re starting to build. As our AIs become more complicated, learning more and doing more, it becomes harder to understand precisely how they make their decisions. Programmers know that there is a web of logic at work, but find it difficult to figure out precisely how that web is shaped. We can’t just ignore it; regulations, like those in the European Union, increasingly require that users of algorithmic systems be able to explain how and why these systems came to their conclusions.
This isn’t just a technology riddle. As AI software becomes more tightly woven into our daily lives, we have to pay close attention to the ways in which complex algorithms can lead to racist, sexist, and other biased outcomes. Code that learns from us can learn more than the intended lessons and rules.
Furthermore, how do we understand systems where complex behaviors execute almost flawlessly, while simple functions randomly fail? Why might an autonomous, self-driving system that can cross the country by itself also smash into a wall while simply backing out of a garage? Why might a learning system tasked with generating realistic human faces occasionally produce something utterly monstrous? You can say that these kinds of things happen with people, too — but we already knew that human brains are very much in the realm of the incomprehensible.
But that statement suggests an important point: incomprehensible now doesn’t mean incomprehensible forever. There are certainly dynamics now shrouded in mystery that we will eventually figure out. It may, however, mean that the 1,400 or so grams of incomprehensible meat in our skulls will need to cooperate with a similarly incomprehensible chunk of silicon.
“The End is Near.”
A sign-holding cartoon figure in robes and beard seems less amusing these days. It’s easy to make fun of apocalyptic thinking when such a possibility seems remote. When we are confronted by the immensity of the climate disaster or a global pandemic — or insert your preferred end-of-the-world scenario here — a sidewalk prophet of doom feels more like a confirmation than a provocation.
A sizable share of those of us who work in the field of imagining the future often struggle with what we might call an “eschatological urge” — a difficulty in seeing our world in anything other than an apocalyptic frame. It’s not because we want it this way, but because other framings seem inadequate or false. The danger of this urge is that it can easily become a trigger for surrender, a slipstream into despair. Such a danger isn’t limited to futurists; for so many around the world, things are too strange, too out of control, too immense, and too fragile to even begin to imagine appropriate responses.
It doesn’t have to be that way. The BANI framework offers a lens through which to see and structure what’s happening in the world. At least at a surface level, the components of the acronym might even hint at opportunities for response: brittleness could be met with resilience and slack; anxiety could be eased by empathy and mindfulness; nonlinearity calls for context and flexibility; incomprehensibility asks for transparency and intuition. These may well be more reactions than solutions, but they suggest the possibility that responses can be found.
Maybe it’s enough that BANI gives a name to the gnawing dread so many of us feel right now, that it acknowledges that it’s not just us, not just this place, not just this blip of time. BANI makes the statement that what we’re seeing isn’t a temporary aberration, it’s a new phase. We’ve gone from water to steam.
Something massive and potentially overwhelming is happening. All of our systems, from global webs of trade and information to the personal connections we have with our friends, families, and colleagues, all of these systems are changing, will have to change. Fundamentally. Thoroughly. Painfully, at times. It’s something that may need a new language to describe. It’s something that will definitely require a new way of thinking to explore.
Jamais Cascio, Distinguished Fellow, Institute for the Future