Systems Thinking in Architectural Practice: Part I

Sevki Topcu
4 min read · Dec 4, 2022


I came across the term “Bounded Rationality” in Donella Meadows’ groundbreaking book Thinking in Systems. The term was originally coined by Nobel laureate Herbert Simon, and in part it challenges the concept of homo economicus: the idea that humans are perfectly rational, narrowly self-interested agents. In other words, any individual participating in the economy will always act in their own best interests, and through those choices society will benefit collectively.

According to Simon, rationality is bounded because there are limits to our thinking capacity, available information, and time. Bounded rationality means that people make reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system. Here are some examples of this phenomenon from Thinking in Systems:

  • Farmers produce surpluses of wheat, butter, or cheese, and prices plummet,
  • Fishermen overfish and destroy their own livelihood,
  • Corporations collectively make investment decisions that cause business-cycle downturns.
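These dynamics are easy to reproduce in a toy model. The sketch below is my own illustration, not from the book, and every number in it is made up: fishermen decide next season's catch from purely local information (“did my last haul meet expectations?”), never from the size of the shared stock. The locally reasonable rule still collapses the fishery.

```python
# Toy fishery model (illustrative only; all parameters are invented).
def simulate(seasons=20, boats=10, stock=1000.0, growth=0.2):
    catch_per_boat = 15.0
    history = []
    for _ in range(seasons):
        demand = boats * catch_per_boat
        caught = min(demand, stock)          # can't catch fish that aren't there
        stock = max(0.0, (stock - caught) * (1 + growth))  # survivors reproduce
        history.append(stock)
        if caught >= demand:
            # Locally "reasonable" rule: last season went well, so fish a bit harder.
            catch_per_boat *= 1.1
    return history

stocks = simulate()
print(round(stocks[0]), round(stocks[-1]))  # healthy at first, collapsed by the end
```

No boat ever sees the whole system; each one acts sensibly on its own slice of information, and the stock still goes to zero.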

So, if people are not always rational and they don’t make decisions by optimizing for their best interests, how do they make decisions? According to Simon, people tend to make decisions by satisficing (a blend of “satisfying” and “sufficing”) rather than optimizing. As a result, most of our decisions are “good enough”. We do our best with the information easily accessible to us to make a decision before moving on to the next one. This theory has important implications for organizational structure and decision-making.
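The contrast can be sketched as two search strategies over the same options. This is my own illustration with made-up numbers (hypothetical “bids” scored by a stand-in utility function), not anything from Simon or Meadows:

```python
import random

def optimize(options, utility):
    # Optimizing: evaluate every option and take the best -- assumes full information.
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    # Satisficing: accept the first option that clears the aspiration level.
    for option in options:
        if utility(option) >= aspiration:
            return option
    return max(options, key=utility)  # nothing was good enough; settle for the best seen

random.seed(0)
bids = [random.uniform(0, 100) for _ in range(1000)]  # invented quality scores
score = lambda b: b  # stand-in utility: higher is better

best = optimize(bids, score)                          # scans all 1000 options
good_enough = satisfice(bids, score, aspiration=80)   # usually stops far earlier
print(round(best, 1), round(good_enough, 1))
```

The satisficer rarely finds the very best option, but it spends a fraction of the search effort, which is exactly the trade-off bounded agents face.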

Perhaps one of the most striking claims that Meadows makes is that taking out one individual from a position of bounded rationality and putting in another person is not likely to make much difference. So long as the structure of a system is kept unchanged, individuals have a relatively small impact on the overall outcome. Meadows uses role-playing to teach this point: “We teach this point by playing games in which students are put into situations in which they experience the realistic, partial information streams seen by various actors in real systems. As simulated fishermen, they overfish. As ministers of simulated developing nations, they favor the needs of their industries over the needs of their people. As the upper class, they feather their own nests; as the lower class, they become apathetic or rebellious.”

Once you start observing the world through this lens, it’s not hard to draw parallels. The notion that our decision-making is bounded and often falls short of the optimal made me reflect on the role of practicing architects within the larger ecosystem of the real estate and construction industries. For instance, I’ve heard many architects criticize the work that a contractor does on a specific job, and vice versa. If we were to assign the architect the role and responsibilities of the contractor and provide her with the same knowledge and information, odds are she would make very similar decisions within the limits of her new bounded rationality, and the outcome would be much the same.

I think there is a certain danger of becoming paralyzed by the idea of bounded rationality. It can instill a cynical worldview in which the decisions individuals make have relatively little impact on the overall outcome: “Why should I even bother if my actions aren’t going to change the behavior of the system?” To move beyond the bounded rationalities of our individual positions, we must zoom out and look at the entire system. Meadows points us in the direction of what she calls leverage points: places to intervene in a system so that we can “change the structure of systems to produce more of what we want and less of that which is undesirable.” In increasing order of effectiveness, below are the 12 leverage points outlined in the book:

12. Numbers: Constants and parameters such as subsidies, taxes, standards
11. Buffers: The sizes of stabilizing stocks relative to their flows
10. Stock-and-Flow Structures: Physical systems and their nodes of intersection
9. Delays: The lengths of time relative to the rates of system change
8. Balancing Feedback Loops: The strength of the feedback relative to the impacts they are trying to correct
7. Reinforcing Feedback Loops: The strength of the gain of driving loops
6. Information Flows: The structure of who does and does not have access to information
5. Rules: Incentives, punishments, constraints
4. Self-Organization: The power to add, change, or evolve system structure
3. Goals: The purpose or function of the system
2. Paradigms: The mindset out of which the system — its goals, structure, rules, delays, and parameters — arises
1. Transcending Paradigms: The power to remain unattached to any single paradigm

Leverage Points: Places to intervene in a system

In Part 2 of this blog series, I’ll dive into these leverage points and explore how they could translate to systems thinking in architectural practice. Architectural practice as a system is highly interconnected, with many stakeholders and collaborators. Each participant in the building of our physical environment, whether policymaker, designer, developer, builder, or inhabitant, is limited in their decision-making. By understanding the underlying system in which we operate, I hope we can be more intentional and impactful in our practice of architecture.
