Complex Systems — Part 1: Old beliefs are failing in a complex world

Mark C. Ballandies
Apr 5, 2023 · 7 min read

--

Rising sea levels, social unrest, banking crises, military conflicts — the world seems to be facing more and more interconnected challenges to which our globalized society is less and less efficiently responding:

The resilience of our world is declining.

In particular, we are no longer able to manage our world such that desired living conditions are reached (e.g., lower CO2 levels in the atmosphere). I think this is mainly due to the outdated ways in which we make decisions and control their implementation in our changing society. Let me explain.

The Control Challenge

Let’s start with an example: German society made the normative decision to reduce CO2 emissions and therefore introduced two top-down measures: first, subsidizing bio-gas, and second, pushing the use of bio-fuels through regulation.

However, this good intention to reduce CO2 emissions led to side effects that were not anticipated at design time, such as the creation of large monocultures in Germany, which nowadays threaten the local environment’s biodiversity, soil fertility, and groundwater quality.
This side effect was further exacerbated by a cascading effect: because it is now more economically lucrative to grow energy crops in Germany, feed production is outsourced to other regions of the world, increasing land consumption for agriculture there. Among other things, this has contributed to the deforestation of the Amazon.
So we observe a counterintuitive feedback effect: the initial desire to reduce CO2 led, in the end, to an increase of CO2 in the atmosphere (since the Amazon is traditionally one of the largest carbon sinks).

Cause and Effect in Networked Systems: The initial reduction of CO2 results, via side and cascading effects, in a feedback loop that contradicts the initial intention: CO2 is released into the environment.

These and other observations suggest that centralist control approaches to improving society have failed.

In response to this conclusion, a “new” idea is currently emerging: centralized control of desired system states through machine learning/AI. However, such ideas face several problems. Besides practical challenges (e.g., different algorithms often do not converge, contain biases, or are sensitive to the data sets/hardware used), there is also a philosophical challenge: can the past (on which an AI/machine learning algorithm is trained) predict the future?

Furthermore, there is a normative challenge when one attempts to control for a desired goal, in particular when applying such machine learning/AI algorithms:
How do we know what the right goal (function) is, for which one optimizes/controls and on which an AI algorithm is then trained (e.g., reduced levels of CO2)? And how do we weight different goal functions, such as reduction of CO2 vs. equal standards of living vs. (maximized) happiness or freedom, to name just a few?

The Normative Challenge

Let’s assume for the moment that we knew how to implement a desired system state. The question remains what this desired state should actually be (e.g., less CO2 in the atmosphere), or what trade-offs we are willing to accept (e.g., between security and freedom).
What is the goal (function) we want to optimize/control?

Traditionally, we left the determination of this goal function to more or less centralized entities in society:

  1. In the past, the church determined the correct goal function through the interpretation of its holy book, the Bible, via its priest caste. This resulted in social goal functions that are, in hindsight, highly controversial, e.g., same-sex marriage was banned.
  2. The church was then replaced over time by the state (more or less democratic depending on the place). It became the bearer of knowledge about right and wrong and summarized this knowledge in the law, which is interpreted by its jurists. The state, through its institutions, now has the knowledge to define the right goal function for society, e.g. what is legal consumption (e.g. alcohol) and what is illegal (e.g. cannabis).
  3. Nowadays, it seems that the state is slowly being replaced by corporations, especially larger IT companies. These organizations now hold the knowledge of the proper goal function, implemented via clever algorithms written and interpreted by a caste of programmers who know, for example, what hate speech, censorship, and free speech are.

These different approaches share the assumption that a centralized entity can know what the right goal functions (for you and society) look like, and that you, the individual, cannot figure them out. Nevertheless, as the previous examples illustrate, this centralized view of decision-making often results in goal functions that cause suffering, e.g., on an individual level when you are not allowed to be with the person you love, but also on a societal level when one considers, for instance, the growing prison population due to the war on drugs.

One might now (again) argue that in the future IT companies will be in a position to determine and weight these goal functions with AI/machine learning in a fair and unbiased way, e.g., via unsupervised learning. However, in addition to the practical and philosophical challenges mentioned in the previous section, there is another challenge that is combinatorial in nature. This challenge is related to those already mentioned and actually has a deeper cause that fundamentally undermines the belief in a central authority (such as an AI) that knows the correct goal function or controls its implementation.

The cause: Our world is complex

Our world is a complex system that can be represented as a highly interconnected graph, as shown in the figure above. It is actually a network of networks. And this network is characterized by the fact that it is no longer the nodes that determine the overall behavior of the system, but the connections between them. Or, to put it another way: complex systems like our world are more than the sum of their parts.

This results in system behaviors that are often perceived as unintuitive: for instance, cause and effect do not seem to work as we were used to in a less complex world. This is illustrated by the butterfly effect: a small local change in the system can result, via cascading effects, in large impacts on the overall system, e.g., the flapping of a butterfly’s wings can lead to a tornado or, as in our example above, the introduction of bio-fuels results in the cutting down of the rainforest. But the opposite is also possible: large impacts on a system can occur while nothing changes at the aggregate level.
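The butterfly effect can be made concrete with a textbook toy model rather than the bio-fuel example: the logistic map, a one-line chaotic system. The sketch below (my own illustration, not from the article) iterates two trajectories whose starting points differ by one part in ten billion; the tiny difference is amplified at every step until the trajectories bear no resemblance to each other.

```python
# Butterfly effect in miniature: the logistic map x -> r*x*(1-x),
# which behaves chaotically for r = 4.
def logistic_map(x0, steps, r=4.0):
    """Return the trajectory [x0, x1, ..., x_steps] of the logistic map."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

# Two initial conditions differing by 1e-10 -- the "flap of a wing".
a = logistic_map(0.2, 50)
b = logistic_map(0.2 + 1e-10, 50)

# The gap roughly doubles each step, so by step ~40 the two
# trajectories are effectively unrelated.
for n in (0, 10, 30, 50):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.2e}")
```

No amount of extra measurement precision removes this sensitivity; it only delays the divergence by a few steps, which is exactly why long-range prediction in such systems fails.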

Not only is our world complex, but its complexity increases with every passing day. Every day, links are added between the nodes, resulting in a combinatorial growth in complexity. This growth is actually faster than exponential.
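The claim that link-driven complexity outgrows exponential functions can be checked with a back-of-the-envelope count (a sketch of my own, assuming we measure complexity by the number of possible interaction networks): n nodes admit C(n, 2) possible links, so there are 2^C(n, 2) possible networks, versus "only" 2^n possible on/off states of the nodes themselves.

```python
from math import comb

# Compare exponential growth in the nodes (2**n node states) with
# combinatorial growth in the links: C(n, 2) possible links yield
# 2**C(n, 2) possible interaction networks.
for n in (5, 10, 20, 40):
    links = comb(n, 2)  # n * (n - 1) / 2 possible pairwise links
    print(f"n = {n:2d}: 2^{n} node states vs 2^{links} possible networks")
```

Because the exponent itself grows quadratically (n(n-1)/2 rather than n), adding a handful of nodes multiplies the space of possible networks by an astronomical factor, which is the sense in which the growth is "stronger than exponential."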

“In complex dynamical systems with many interacting components, even the perfect knowledge of all individual component properties does not necessarily allow one to predict what happens if components interact. In fact, interactions may cause new, “emergent” system properties.” — Dirk Helbing

Complicating matters further, the interactions between the individual parts of a complex system can lead to new, emergent system properties that have feedback effects on the individual parts and further increase complexity.

Growth of systemic complexity in comparison to existing processing power and data volume, illustrating the shortcoming of using computing power to control and decide in complex systems. Image taken from Helbing, D. et al. (2019). Will Democracy Survive Big Data and Artificial Intelligence?

Because neither processing power nor data volumes are increasing at the same pace, a computational/algorithmic/AI solution will not solve the normative problem of identifying the right goal functions or their trade-offs: since compromises must be made on the data sets used and on the complexity that is approximated/modeled, the systematic biases that can result in individual and societal suffering, as illustrated before, will persist in such solutions.

The takeaway is that, due to emergent properties, the combinatorial growth in complexity, and the practical and philosophical challenges mentioned above, no individual node in our complex world, not even an AI, can know how things will be in the future or what a beneficial future state could look like (normative challenge), nor, due to effects such as the butterfly effect, what steps would be needed to implement it (control challenge).

This might sound depressing if one clings to the old belief that centralized control and decision-making are the only mechanisms to manage (complex) systems. But luckily, we have more tools available than these.

The Solution

The solution requires a radical shift in thinking, from top-down control and decision-making to bottom-up coordination and cooperation, and it consists of several tools. In my opinion, they are all based on one very important component:

You.

Hence, the question is: are you willing to learn about your power to change the world for the better? If yes, then stay tuned for the next part of this mini-series:

Complex Systems — Part 2: Managing complexity with bottom-up solutions


Mark C. Ballandies

PostDoc@ETH Zurich; co-founder@onocoy&WiHi, Lecturer@FHV. Views shared are my own. Always interested in academic, philosophical and hands-on exchanges.