The Best Book for Understanding Your World

Thinking In Systems

By Donella Meadows

Rating: 10/10

Best Line #1: Our knowledge is amazing; our ignorance even more so.

Best Line #2: We don’t talk about what we see; we see only what we can talk about.

A Note for Readers

Before we begin, I would encourage anyone who hasn’t learned the basic principles of systems thinking to visit the following articles highlighted in Monday’s post. This is especially crucial for new readers because these articles provide wonderful, critical illustrations of stocks, flows, loops, and the connections between them:

Tools of a Systems Thinker

Systems Thinking Scales

Problem Solving Desperately Needs Systems Thinking

Introduction

I can’t overstate the importance of this discipline. For all the attention given to economics, biology, and engineering, systems thinking is the epistemological glue that binds them all together into a useful whole. It is also the best expression of the humanities that I’ve ever come across. Literature, philosophy, law and politics? They’re great but they’re also narrow and distinct. What can tie the insights of each discipline? Systems thinking.

If someone asked you to explain why your city is thriving, you would probably jump to a simple ontological idea of cause and effect. Seattle is doing great because Amazon is thriving. The condition of Seattle is the effect; Amazon is the cause.

This causal relationship is probably more true than false, but someone could dispute your claim entirely. Someone would say (a) Seattle is not thriving, and (b) Amazon is killing it. Someone else would say it isn’t Amazon at all but the supportive secondary market that surrounds it. Someone else would have another theory altogether.

The trouble comes when we believe we have to select a winner in this big, meandering debate. Far too often we hear the views and read the data from all sides arguing around this sort of question and we feel like we must decide the One True Answer from the field.

Systems thinking frees us from this tendency. It says that everyone and their theories can be right to some degree. Not because we want everyone to get along. No, because they can literally be correct about a certain causal relationship. They would only be wrong if they thought that one relationship is all that there is.

So why is Seattle thriving? It’s because of Amazon, the secondary markets, port capacity, the talent pool, amenities, cultural significance, transportation infrastructure, schools, and much, much more.

And if someone thinks Seattle is not thriving, well, you can probably point to the same items already listed as the explanation for why. Because these components are deeply connected to one another in corrective and/or reinforcing relationships that net out the state of the city. There is never a single answer because there isn’t a single connection or causal relationship. Such things are called “complex”. But before we lose ourselves in the complexity, let’s just consider them systems. As our author explains:

A system is a set of things interconnected in such a way that they produce their own pattern of behavior over time.

I want to emphasize the phrase “their own pattern” because the behavior that systems produce over time is emergent: natural yet unpredictable, understandable in the aftermath but seldom, if ever, predicted beforehand. These systems can be tweaked through certain leverage points, which we’ll discuss soon, but what’s amazing about all systems is that they are fueled by and built from information.
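To make that definition concrete, here is a minimal sketch of my own (not from the book): one stock, two flows, and two feedback loops, with invented numbers. The structure alone is enough to produce a characteristic pattern over time.

```python
# A population stock with a reinforcing loop (births) and a balancing loop
# (deaths that strengthen as the stock nears a carrying capacity).
# All numbers are invented for illustration.

def simulate(years=100, stock=10.0, rate=0.08, capacity=1000.0):
    history = []
    for _ in range(years):
        births = rate * stock                       # reinforcing: more stock, more inflow
        deaths = rate * stock * (stock / capacity)  # balancing: outflow rises with crowding
        stock += births - deaths                    # the net flow updates the stock
        history.append(stock)
    return history

traj = simulate()
print(f"year 25: {traj[24]:.0f}, year 50: {traj[49]:.0f}, year 100: {traj[-1]:.0f}")
# slow start, steep middle, plateau: the system's own pattern of behavior
```

Nothing outside the loop dictates that S-shaped curve; it emerges from the interconnections themselves.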

Even tangible assets are just forms of information, representations of value that change over time.

I don’t mean to get too deep here, but the point is that simple systems evolve into complex behaviors that generate more simple systems that evolve into more complex behaviors. If there were a unified field theory, I think it would come from systems thinking.
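A classic illustration of that point (mine, not Meadows’): the logistic map, a one-line rule whose behavior shifts from simple to effectively unpredictable as a single parameter grows.

```python
# The logistic map: x(t+1) = r * x(t) * (1 - x(t)). One rule, one parameter.
def trajectory(r, x=0.5, steps=60):
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

print([round(v, 3) for v in trajectory(2.8)[-4:]])  # settles to a fixed point
print([round(v, 3) for v in trajectory(3.5)[-4:]])  # cycles among four values
print([round(v, 3) for v in trajectory(3.9)[-4:]])  # chaotic: complex behavior
```

Same rule, three qualitatively different behaviors. Simple structure, complex outcome.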

Airbnb is a fine example. It is worth billions today because the simple system of renting an air mattress was capable of evolving into a complex system of new markets for short-term rentals on a global scale. The simple process of selling the space and time of an air mattress grew into what we have today by operating on the kernel of a few simple rules that beget a few more rules that beget a mature (overripe?) system. This has always happened. Digital technology just allows it to happen much faster than it used to when fixed assets, like railroads, had to be physically developed prior to market maturity.

There’s never been a better time to study emergence and systems than today. As an aside, please read this beautiful article on the topic to learn more.

And finally, before leaping into the heart of the book, consider the other articles written on the topic this week, all derived and inspired from Meadows’ fantastic gift to the world:

Systems Thinking — The Best Lens We Have

Your Behavior Only. Nothing Else Matters

How Much Longer? Systems and Necessary Delays

A Bad System Beats A Good Person Every Time

Systems All The Way Down

One thing that this book helps me appreciate is the beauty of simplicity and a smaller scale. Consider these terrific words from the author:

Large organizations of all kinds lose their resilience simply because the feedback mechanisms by which they sense and respond to their environment have to travel through too many layers of delay and distortion.

This isn’t a critique of large organizations; it’s just an observation of their limitations. We know it’s hard to steer ocean liners relative to dinghies. We know a startup is more nimble than Procter & Gamble. But I think the real tradeoff of large organizations is the inability to truly sense a change in the environment. It’s not about the speed of change; it’s about the ability to see when change is necessary. Yesterday’s post was all about the way Sony, the dominant large player, couldn’t properly seize on the changes happening in their environment (i.e., the rise of digital music and the emergence of the MP3 player market).
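You can watch that delay-and-distortion problem in a toy model. A hedged sketch with made-up numbers: a balancing loop that corrects a stock toward a target, but acts on information several periods old. The same correction strength that settles cleanly on fresh data swings wildly once the signal travels through layers of delay.

```python
# A balancing loop acting on stale information. The only difference between
# the two runs below is how old the data is when the correction is applied.

def run(delay, target=100.0, stock=40.0, strength=0.5, steps=30):
    history = [stock] * (delay + 1)   # seed the delayed perception
    for _ in range(steps):
        perceived = history[-(delay + 1)]         # what the org "sees": old data
        stock += strength * (target - perceived)  # correct toward the target
        history.append(stock)
    return history[-5:]

print("fresh data:", [round(s) for s in run(delay=0)])  # settles near the target
print("stale data:", [round(s) for s in run(delay=4)])  # overshoots, swings wider
```

The dinghy and the ocean liner run the same loop; the liner just perceives later.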

This is a company worth billions. With some of the best talent in the world. If they wanted to do something, they would do it. But feedback loops were buried under too many layers and old paradigms were too calcified because, frankly, those paradigms had made them successful.

I think what’s important is that the large organization, such as Sony, cannot withstand the constant pressure of the larger system it is a part of. The environment/economy is its own system and it brings a level of uncertainty and risk that Sony can’t totally subdue. They may be an ocean liner but they’re still in an ocean.

As Meadows puts it:

What is the point of the game? To grow, to increase market share, to bring the world more and more under the control of corporations so that its operation becomes ever more shielded from uncertainty.

Except no system can be shielded from uncertainty. And the layers within the large organization prevent those within it from understanding and appreciating that. This is further explained in the great Jared Diamond book, Collapse. Fundamentally, this points to the simple reality that the corrective mechanisms of the larger system (be it the competitive forces of Germanic tribes overtaking Rome or the ecological responses that decimated the population on Easter Island) always hold sway.

As Meadows writes:

The goal of keeping the market competitive has to trump the goal of each individual corporation to eliminate its competitors, just as in ecosystems, the goal of keeping populations in balance and evolving has to trump the goal of each population to reproduce without limit.

That’s deep, right? Well, be that as it may, the critical component of this section is to establish that systems are housed within other systems. It’s systems all the way down. Each system is vulnerable to the actions of other systems above and below (i.e. the supporting environment). No system can grow its way out of this dynamic. In fact, growth can actually make the given system worse. A monopoly of the sort Meadows mentions above can still decay and fade if it destroys the very resources it exploits, producing a horrible market failure. This is the natural corrective action of the broader system/environment. It’s not fun! The reason for antitrust laws and trust-busting efforts is to avoid such failures, which are as painful to others as they are to the monopoly itself.
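Here’s a hedged caricature of that dynamic, with invented numbers: a firm whose reinforcing growth loop feeds on a resource that regenerates more slowly than the firm consumes it. Overshoot, then collapse, is the larger system enforcing its limits.

```python
# A firm grows by harvesting a slowly regenerating resource. The growth loop
# is reinforcing; the resource's depletion is the environment's correction.
# All rates are invented for illustration.

def run(steps=80, firm=1.0, resource=100.0):
    peak = 0.0
    for _ in range(steps):
        harvest = min(0.1 * firm * resource, resource)  # bigger firm, bigger take
        firm += 0.05 * harvest - 0.05 * firm            # growth fed by harvest, minus upkeep
        resource += 0.02 * resource * (1 - resource / 100.0) - harvest
        resource = max(resource, 0.0)
        peak = max(peak, firm)
    return firm, resource, peak

firm, resource, peak = run()
print(f"firm peaks at {peak:.1f}, ends at {firm:.1f}; resource left: {resource:.1f}")
# the harder the firm grows, the faster it erodes the base of its growth
```

When the environment chooses the limit, it chooses it late and harshly.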

Consider the following from Meadows:

If city governments, company managers, etc., don’t choose and enforce their own limits to keep growth within the capacity of the supporting environment, then the environment will choose and enforce limits.

The point? When the environment (i.e. the larger system) chooses, things get ugly.

Prediction Is Overrated. History Is Underrated.

The manager/regulator in me thus cries out for a prediction. Where do we see the next failure? Where is there the next threat to our system? The next technological disruption (Sony and the iPod), the next capable competitor (the Goths against Rome), the next ecological scourge (climate change)? Let’s find out. Let’s make a prediction.

It’s deeply seductive to build predictions once we understand the system’s components and their relationships with the ecosystem. Build me a reliable, valid causal model (i.e. system diagram) and I’ll become an amateur Nostradamus in no time. But there’s a problem with such tendencies that Meadows explains quite well:

Behavior-based econometric models are pretty good at predicting near-term performance of the economy, quite bad at predicting the longer-term performance, and terrible at telling one how to improve the performance of the economy. And that’s one reason why systems surprise us. We are too fascinated by the events they generate. We pay too little attention to their history. And we are insufficiently skilled at seeing in their history clues to the structures from which behavior and events flow.

History tells me that prediction is overrated. Even in the Age of Big Data and our expanding capacity to process awesome regressions, Taleb warns us that we are too fooled by randomness and Rumsfeld tells us there are still the unknown unknowns we cannot foresee. Such are the behaviors of a system. They fascinate us, as Meadows says, right to the point that we use their history to predict again — wrongfully — in a way that fascinates us once more, thereby repeating the cycle.

Better instead to consider the approach Ray Dalio has championed in his book Principles. Build models and principles based on prior experience and proven history. Make decisions based on what principles fit the defined situation. Define the situation based on the characteristics and patterns that are similar to what has been seen elsewhere in history.

History doesn’t repeat, as the line often attributed to Twain goes, but it always rhymes.

This approach is mirrored by one of today’s great systems thinkers, Yuval Noah Harari.

This is a prime example of how systems thinking is the best method of utilizing our knowledge from the humanities and science. The humanities help us understand the way in which we interact with environments; science helps us understand the environment itself. Systems thinking underlies both as it captures the behaviors within each and gives us a formal way of demonstrating their relationships.

Okay Systems Thinker: What Do I Do When There’s A Problem?

So imagine you encounter a real problem. As a systems thinker, the most important thing to consider is the very nature of the problem. Is it something expected? Unexpected? If it was expected, systems thinking would suggest that the expected problem isn’t really a problem at all. Instead, it’s a correction, much like the frequent-but-small recessions that naturally emerge to clear out bad debt and overvaluation in the marketplace.

Consider this line from Meadows:

Aid and encourage the forces and structures that help the system run itself. Notice how many of those forces and structures are at the bottom of the hierarchy. Don’t be an unthinking intervenor and destroy the system’s own self-maintenance capacities.

If you’re feeling sick with a head cold, you don’t seek radiation therapy. It’s not surprising, first of all, to get a head cold. Second, it’s a minor thing compared to the broader health of the system (i.e., your body). You probably don’t even go to the doctor. You get some chicken noodle soup and allow your body to correct things.

But think about all the times we define a problem as being “huge” based on whatever we happen to desire at the moment. I have seen friends consumed by the “problem” of not being a millionaire. Their goal, clearly, is to have a million dollars. But even with a million dollars, we know there is still a level of insatiability and dissatisfaction that affects people when money is their goal. The million dollars seldom fixes anything because, well, that’s the wrong measure and a poorly defined goal.

To return to Meadows:

If the goal is defined badly, if it doesn’t measure what it’s supposed to measure, if it doesn’t reflect the real welfare of the system, then the system can’t possibly produce a desirable result.

So what does a systems thinker do when there’s a problem? More times than not, they do nothing. They allow the system to correct the problem. And if that isn’t enough, if an intervention is still necessary, the systems thinker first considers the goal. Is the goal properly defined?

Faulty goals make me think of recessions again. I’m reminded of a great book about the economy aptly titled “What’s the Economy For, Anyway?” Through the economist’s lens, we continuously drive our policies towards productivity. Is there ever enough? And when the recession hits or the bear market takes over, is it instant despair for all humanity? What, again, is the point of the economic system?

There’s an answer and it must be balanced against the purpose of other systems (social, political, physical) so that the right goal is known and the problem is better defined.

In short, you can’t have a well-defined problem without first having a well-defined goal. If either is less than clear, futility will ensue.

How To Intervene (If You Must)

So with all that said, imagine that you have a clear model of a particular system, a terrific and proper goal to guide its performance, and a real problem defined (i.e. the gap between real and ideal performance). To keep it simple, we’ll move off the topic of recessions and into the idea of a simpler system: our morning routine. We have certain actions and activities that make up the system, certain conditions which we must overcome (e.g. sleepiness), and a goal to achieve (e.g. feel productive, refreshed, and ready for the day).

But there’s a problem. Your system isn’t successful. Try as you might, your mornings are still terrible. To consider the problem and identify solutions, you can work through a fantastic list of “leverage points” within the system that can help you correct the issue.

Meadows provides the following leverage points, numbered here from least effective (#12) to most effective (#1). This is a deeply valuable tool that I use quite often; a short sketch after the list contrasts how two of these points play out.

Leverage Points From Least To Most Effective:

  12. Numbers: constants and parameters such as subsidies, taxes, and standards
  11. Buffers: the sizes of stabilizing stocks relative to their flows
  10. Stock-and-flow structures: physical systems and their nodes of intersection
  9. Delays: the lengths of time relative to the rates of system changes
  8. Balancing feedback loops: the strength of the feedbacks relative to the impacts they correct
  7. Reinforcing feedback loops: the strength of the gain of driving loops
  6. Information flows: the structure of who does and does not have access to information
  5. Rules: incentives, punishments, constraints
  4. Self-organization: the power to add, change, or evolve system structure
  3. Goals: the purpose or function of the system
  2. Paradigms: the mindset out of which the system (its goals, structure, rules, delays) arises
  1. Transcending paradigms
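To make the ranking tangible, here is a small sketch of my own (not from the book) contrasting a low-leverage intervention with a higher-leverage one on the same kind of toy system used earlier: a stock corrected toward a target using stale information. Tweaking a number (Item #12) changes the degree of the behavior; changing an information flow (Item #6) changes its kind.

```python
# The same balancing loop as before: a stock corrected toward a target,
# acting on data that is `delay` periods old.

def run(delay, strength, target=100.0, stock=40.0, steps=40):
    history = [stock] * (delay + 1)
    for _ in range(steps):
        perceived = history[-(delay + 1)]
        stock += strength * (target - perceived)
        history.append(stock)
    return [round(s) for s in history[-3:]]

print("baseline (stale data, gain 0.5):", run(delay=4, strength=0.5))
# Item #12, a number: halve the gain. The swings damp down, but the
# oscillation-producing structure is untouched.
print("weaker gain (stale data, 0.25):", run(delay=4, strength=0.25))
# Item #6, an information flow: act on current data instead.
# The behavior changes in kind, not just in degree.
print("fresh data (no delay, 0.5):    ", run(delay=0, strength=0.5))
```

This is why Meadows ranks parameters near the bottom: most of the time they shift behavior without changing the structure that generates it.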

This is also illustrated by a simpler framework provided by the fabulous Academy for Systems Change: the iceberg model.

As a systems thinker, you can best tackle your problem of bad morning routines by examining actions from every level of the iceberg and the more granular categories of the 12-point list. Perhaps your issue is one of sequence. You need to brush your teeth first, then do your exercise. That would fall under Item #10, the stock-and-flow structure.

Or maybe you need to reconsider the rules, Item #5. Perhaps you need to relax your constraints and give yourself more time to conduct your routine, lest you rush and just make yourself more tired.

Or perhaps the problem is with your paradigm. That’s Item #2. Maybe morning routines are overrated. Maybe routines, as a whole, are ridiculous. A change in the paradigm leads you to abandon the whole goal. This is not a bad thing if done consciously!

And finally, Item #1 in the Meadows list, which reflects the bottom portion of the ASC’s iceberg illustration, is the stuff of mind-boggling depth. You can, much like our Buddhist friends, acquire a transcending paradigm that eliminates desire completely. Now the system isn’t even a thing for you to think about anymore. When Meadows writes about these transcending paradigms, even she starts to question the worth of, well, everything. It gets deep.

Thoughtful depths aside, systems thinking allows us to jump beyond the typical cause-effect, problem-solution reactionary thinking that mires us all down and traps us within the very system we’re trying to improve. The most original thinkers and the most self-authored people I’ve encountered have an ability to think about things through each one of these steps, assessing which of the 12 leverage points make sense for the particular issue they encounter — to the point of questioning whether the issue is even valid.

With that in mind, I want to offer a few more passages from Meadows’ book to further this idea:

Watching what really happens, instead of listening to people’s theories of what happens, can explode many careless causal hypotheses.

Listen to any discussion and watch people leap to solutions, usually solutions in “predict, control, or impose your will” mode, without having paid any attention to what the system is doing and why it’s doing it.

The intervention can become a system trap. A corrective feedback process within the system is doing a poor (or even so-so) job of maintaining the state of the system. A well-meaning and efficient intervenor watches the struggle and steps in to take some of the load. The intervenor quickly brings the system to the state everybody wants it to be in. Then the original problem reappears since nothing has been done to solve it at its root cause. So the intervenor applies more of the solution, disguising the real state of the system again, and thereby failing to act on the problem. That makes it necessary to use still more solution.
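That last trap is one you can almost watch happen in a toy model. A hedged sketch, all numbers invented: the system’s own corrective capacity atrophies in proportion to how long the intervenor carries the load.

```python
# A stock with a so-so corrective loop of its own. When an intervenor takes
# part of the load, the unexercised native capacity quietly erodes.

def run(intervene, steps=25, state=50.0, target=100.0, capacity=0.4):
    for _ in range(steps):
        gap = target - state
        own_fix = capacity * gap               # the system's own so-so correction
        aid = 0.5 * gap if intervene else 0.0  # the intervenor's share of the load
        state += own_fix + aid
        if intervene:
            capacity = max(0.05, capacity - 0.02)  # unexercised capacity erodes
    return state, capacity

for label, helped in (("left alone", False), ("intervened", True)):
    state, capacity = run(intervene=helped)
    print(f"{label}: state {state:.0f}, remaining native capacity {capacity:.2f}")
```

Both runs reach the target, but only one of them could still reach it without help. The more the solution is applied, the more it is needed.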

I can’t think of a better way to conclude. So to offer a few final words, this is yet another book that is worth far more than its price. This is a book for which I can only give a very basic overview. We’ve barely scratched the surface. And without resorting to hyperbole, I can’t think of a better book for us all to read at a time like today. With massive disruption on all fronts, and systems changing rapidly in their function, behavior, and visibility, we must become more aware. This is the book to open our eyes. No other paradigm has affected my thinking as much as this.

Here’s a link to buy the book at Amazon.

Mental Models and Principles

  • A system is a set of things interconnected in such a way that they produce their own pattern of behavior over time.
  • A system must consist of three kinds of things: elements, interconnections, and a function or purpose.
  • It is easier to learn about a system’s elements than about its interconnections.
  • Purpose is deduced from behavior, not from rhetoric or stated goals.
  • An important function of almost every system is to ensure its own perpetuation.
  • Changing relationships usually changes system behavior.
  • We tend to focus more on inflows than outflows.
  • A stock takes time to change because flows take time to flow.
  • Everything we do as individuals, industry, or society is done in the context of an information-feedback system.
  • Three questions to judge the effectiveness of a model: are the driving factors likely to unfold this way? If they did, would the system react this way? What is driving the driving factors?
  • Delays are pervasive in systems and they are strong determinants of behavior.
  • Nonrenewable resources are stock-limited. Renewable resources are flow-limited.
  • Placing a system in a straitjacket of constancy can cause fragility to evolve. — C.S. Holling
  • Resilience is not the same thing as being static or constant over time.
  • The capacity of a system to make its own structure more complex is called self-organization.
  • Self-organization is often sacrificed for purposes of short-term productivity and stability.
  • When a subsystem’s goals dominate at the expense of the total system’s goals, the resulting behavior is called suboptimization.
  • Nonlinearity means that the act of playing the game has a way of changing the rules.
  • It’s a great art to remember that boundaries are of our own making and that they can and should be reconsidered for each new discussion, problem, or purpose.
  • At any given time, the input that is most important to a system is the one that is most limiting.
  • Policy resistance comes from the bounded rationalities of the actors in a system, each with his or her or its own goals.
  • If there is a delay in your system that can be changed, changing it can have big effects.
  • Missing information flows is one of the most common causes of system malfunction.
  • Most of what goes wrong in systems goes wrong because of biased, late, or missing information.