A person navigating a foggy mountain cliff, with uncertainty about what comes next

Understanding Uncertainty: From simple models to complex systems

Bioform Labs
10 min read · Jul 6, 2023

Part One: Bioform models can help uncover unexpected causal insights.

Introduction

Welcome to our three-part series on understanding and modeling the causal structure of complex systems. In this series, we’ll explore Bioform Labs’ unique approach to systems modeling, taking you on a journey from the simplicity of deterministic models to the intricate, uncertain systems we encounter in the real world. We’ll demonstrate how our Bioform models can empower organizations with a 21st-century toolkit to enhance performance, decision-making, and resilience.

Our ambition at Bioform Labs is not just to navigate uncertainty, but to unravel and understand the causal structure and dynamics of any system we encounter. Whether you’re a potential customer, a partner, or simply interested in the field, this series will provide you with a deeper understanding of our work and its potential applications.

Uncertainty: A Challenge and A Catalyst

In the ever-evolving world of systems modeling, one constant remains: uncertainty. It’s the lurking variable, the unknown factor that makes predicting outcomes a daunting task. Yet, it’s in this realm of uncertainty that innovation flourishes, and new tools and methodologies emerge.

Uncertainty can be seen as both a challenge and a catalyst for innovation. It hampers those who can’t harness it, yet fuels innovation for those who can. At Bioform Labs, we’ve taken a unique approach to this challenge. Unlike traditional machine learning methods that rely on correlations and assumptions, our models infer the underlying causal structure of systems. By “causal structure,” we mean the underlying mechanisms that drive the behavior of a system — the cause-and-effect relationships that determine how the system responds to different inputs and conditions.

Understanding this causal structure is crucial: it allows us to resolve uncertainty by predicting how the system will behave under different scenarios, and by identifying leverage points where we can intervene to influence the system’s behavior. In this series, we’ll demonstrate how our approach not only adapts to and resolves uncertainty but thrives within it, capturing the causal dynamics of systems.

Bioform Toolkit and Methodology

We’ll start our journey by sharing the toolbox we use to turn uncertainty into understanding. We’ve developed a novel approach to systems modeling that marries biological principles with computational techniques, rooted in the concept of Active Inference. Like a living organism interacting with its environment, this approach is designed to minimize surprise by continuously refining our models to better predict and comprehend what we encounter. This lets us tackle uncertainty in a way that leads to more robust and accurate models.

Creating a Bioform model involves distinct steps:

  1. Define the system: What is the system we’re interested in? What are its key components and processes? What uncertainty do we want to resolve? What data do we have about the system? A system could be a company, a local economy, a community, a power plant facility, an ecosystem, or something else.
  2. Generate a hypothesis: Based on our understanding of the system, how does the system function and operate? What do we think will happen under different conditions? What are the key variables and relationships that drive the system’s behavior? This hypothesis serves as the foundation for our model.
  3. Design the model: How can we represent the system and its dynamics in a model? What are the key functions and variables, and how are they connected? Our goal is to resolve uncertainty about the system by mirroring the system’s dynamics.
  4. Evaluate accuracy: How well does our model predict the observed data? Where does it fit well, and where does it need refinement? Using the trained model, we emulate the system we’re investigating to see if we can accurately recreate data similar to the data we gathered.
  5. Refine and iterate: What changes could we make that might enhance the model’s performance? Based on the evaluation, we identify potential improvements to the model.
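To make the five steps above concrete, here is a minimal sketch in Python. This is our own illustration, not Bioform Labs’ actual toolkit or API: the toy “system” is a noisy linear process, the hypothesis is a linear law, and the names (`fit`, `mse`) are ours.

```python
import random

random.seed(0)

# Step 1 -- define the system: observations from a process we want to
# understand (here, a hidden linear law y = 3x + 2 plus measurement noise).
data = [(x, 3.0 * x + 2.0 + random.gauss(0, 0.1)) for x in range(50)]

# Steps 2 & 3 -- generate a hypothesis and design the model: we posit
# y = a*x + b and fit the parameters by ordinary least squares.
def fit(data):
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Step 4 -- evaluate accuracy: mean squared error between the model's
# predictions and the observed data.
def mse(data, a, b):
    return sum((y - (a * x + b)) ** 2 for x, y in data) / len(data)

a, b = fit(data)
error = mse(data, a, b)

# Step 5 -- refine and iterate: if `error` stayed above an acceptance
# threshold, we would revise the hypothesis or gather more data and repeat.
```

The recovered parameters land close to the true law (a ≈ 3, b ≈ 2), and the residual error approaches the noise floor, which is the signal in step 4 that the hypothesis matches the system.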

The hallmark of our methodology is its generality. Our models, fit to observed data, can be applied across systems of varying complexity, ranging from simple mechanical systems like pendulums to complex economic markets. They are built to evolve and adapt, mirroring the systems they model, which allows us to manage uncertainty dynamically. More importantly, our approach aims to understand the causal dynamics underpinning the systems we model, offering deeper insight into their behavior.

To bolster this process, we’ve built a robust toolkit comprising model design and development, data analysis, visualization, and emulation utilities. These tools, when combined with our models and methodology, transform intricate system dynamics into actionable insights, enabling us to navigate uncertainty effectively.

Thus armed with our unique methodology, toolkit, and models, we are prepared to journey from deterministic models to the realm of complex, uncertain systems.

So, let’s begin with a model as old as time itself — the pendulum.

The Pendulum — A Model of Simplicity and Certainty

The pendulum, a classic example of a deterministic system, serves as an ideal starting point for our exploration. It’s a straightforward example of the cause-and-effect dynamics that our models aim to capture.

It’s simple, predictable, and its movements can be accurately described using Newton’s laws of motion. In its idealized form, where factors like air resistance and friction are ignored, the pendulum swings back and forth in a reliable, repetitive pattern. It’s the epitome of a system that can be fully known, fully understood, and perfectly predicted.

Figure 1. Our simulated, idealized pendulum with a length of 1 m, mass of 1 kg, and gravitational acceleration of 9.81 m/s²

Yet, even within this straightforward system, the subtle dance of cause and effect is at play. Our task is to capture these causal dynamics. For instance, the pendulum’s current position and velocity (cause) determine its position in the next moment (effect). This cause-and-effect relationship is consistent and predictable, making it possible to accurately predict the pendulum’s future positions based on its current state.

For this exercise, we simulated the movements and positions of an idealized pendulum for ten seconds (Figure 1). We then took the simulated data and fed it into our model, along with a hypothesis of how a pendulum works.
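As a rough illustration of what such a simulation involves (this is our sketch, not Bioform Labs’ actual simulator), an idealized pendulum like the one in Figure 1 can be generated in a few lines. The cause-and-effect step described above is explicit in the integration loop: the current state determines the next.

```python
import math

# Idealized pendulum: length 1 m, mass 1 kg, g = 9.81 m/s^2, no friction
# or air resistance, integrated for 10 seconds with semi-implicit Euler.
G, L = 9.81, 1.0            # gravitational acceleration (m/s^2), length (m)
DT, T_TOTAL = 0.001, 10.0   # time step and total duration (s)

def simulate(theta0, omega0=0.0):
    """Return a list of (t, theta) samples for the idealized pendulum."""
    theta, omega = theta0, omega0
    samples = []
    for i in range(round(T_TOTAL / DT)):
        samples.append((i * DT, theta))
        omega += -(G / L) * math.sin(theta) * DT  # current state (cause)...
        theta += omega * DT                       # ...determines next (effect)
    return samples

trajectory = simulate(theta0=0.2)  # released 0.2 rad from vertical
```

For small swings, the resulting trajectory oscillates with a period close to the textbook value 2π√(L/g) ≈ 2.0 s, which is the kind of regular, repeatable data we fed into the model.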

Despite knowing nothing about pendulums, the laws of motion, or the specific characteristics or gravitational context of this pendulum, our model was able to predict the pendulum’s position over time with remarkable accuracy (Figures 2 and 3). The model captures not just the pendulum’s movements, but the underlying causal dynamics that drive them.

Figure 2. The actual horizontal position of the pendulum
Figure 3. The predicted horizontal position of the pendulum, a near-perfect match with the actual position shown in Figure 2 (red line)

So, how does it do this?

Bioform models infer the underlying causal structure of any system for which we have data. This is a significant departure from traditional machine learning and analytics methods that often posit a structure based on assumptions or correlations. Instead of learning just a static mapping from inputs to outputs, our model infers the underlying dynamics of the system. It learns how the state of the system changes over time, enabling it to predict future states based on the current state.

In the case of the pendulum, our model isn’t just learning the relationship between time and the pendulum’s position. It’s learning the underlying physics that govern the pendulum’s swing, enabling it to predict the pendulum’s future positions based on its current position and velocity.
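Bioform models are built on Active Inference, which is beyond a short snippet, but the core idea, recovering a law of motion from observed states rather than positing it, can be shown with a deliberately simplified stand-in. Here we pretend we only observed a pendulum’s states, posit a dynamics template θ″ = −c·sin(θ), and let least squares infer the coefficient; the true value is g/L = 9.81.

```python
import math

G, L, DT = 9.81, 1.0, 0.001

# 1. Observe: simulate states (theta, omega) we pretend were measured.
theta, omega = 0.5, 0.0
states = []
for _ in range(10_000):
    states.append((theta, omega))
    omega += -(G / L) * math.sin(theta) * DT
    theta += omega * DT

# 2. Infer: estimate accelerations from velocity differences, then fit
#    theta'' = -c * sin(theta) by one-parameter least squares.
num = den = 0.0
for (th0, om0), (_, om1) in zip(states, states[1:]):
    accel = (om1 - om0) / DT   # observed angular acceleration
    x = -math.sin(th0)         # regressor implied by the assumed law
    num += x * accel
    den += x * x
c_hat = num / den              # inferred coefficient, close to g/L = 9.81
```

The fit recovers the governing coefficient from data alone, without being told Newton’s laws. The difference from our actual approach is that here the functional form was assumed; the point of inferring causal structure is to discover that form as well.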

Bioform Models: Inferring Causal Structures

This ability to infer rather than posit the structure is a crucial distinction. It means that bioform models are not limited by our current understanding or assumptions about the system or correlations in the data. They are free to discover new, previously unknown causal relationships.

Think of it this way: If a statistical model is like a person trying to understand the movement of a shadow cast on a window, a generative model like ours is like someone trying to understand the object casting the shadow. The statistical model focuses on the shadow itself, while our generative bioform model seeks to understand (and emulate) the process behind the shadow, which unlocks a whole new world of possibilities.

This ability to infer the causal dynamics is what sets bioform models apart from more traditional machine learning approaches. Instead of simply identifying patterns or correlations in data, our models delve deeper, striving to understand the underlying dynamics that generate these patterns. This leads to more robust, accurate predictions, especially when dealing with systems that evolve over time.

“Machines’ lack of understanding of causal relations is perhaps the biggest roadblock to giving them human-level intelligence.”
- Judea Pearl (AI pioneer, Turing Award winner, author of “The Book of Why”)

While developing and calibrating our toolkit, we also tested bioform models on other deterministic systems including pendulums in different gravitational conditions (like on Mars vs. Earth), spring-mass systems, engines, electrical circuits, and more.

However, we didn’t start Bioform Labs to model simple, highly predictable systems like pendulums and springs. We aim to tackle the complexity and uncertainty of real-world systems, and our ability to infer causal structures from data is a powerful tool in this endeavor.

Embracing the Chaos

As we venture from the simplicity of the pendulum, we encounter systems that increase in complexity and uncertainty. The further we journey, the more we come to realize that the world around us is less like a predictable pendulum and more like an ecosystem — ever-changing, interconnected, and complex.

In this reality, our conventional tools, approaches and analytics, which were largely designed under the assumption of deterministic and predictable systems, begin to falter. They are not equipped to handle the complexity and inherent unpredictability of these systems.

Take, for instance, the financial markets. Traditional models often struggle to accurately predict market movements because markets are influenced by a multitude of interconnected factors, from economic indicators to geopolitical events to investor sentiment and psychology. These models often assume linear relationships, independent variables, and isolated simple systems, but in reality the relationships are often non-linear, the variables are interdependent, and the systems are neither isolated nor simple. When a major event occurs, such as political upheaval or a global pandemic, it can trigger a cascade of effects across the market, creating feedback loops and emergent behaviors that are difficult to predict with traditional models.

Traditional models often falter in the face of complexity and uncertainty, which can lead to destabilizing overreactions and effects in markets, companies, and other environments. (Source: WSJ)

Additionally, consider the task of managing a forest to prevent wildfires. A simple deterministic model might suggest that removing all dead trees would reduce the risk of fire. However, this ignores the complex interactions within the ecosystem. Dead trees provide habitat for certain species and nutrients for the soil, and their removal could disrupt these relationships and lead to unintended consequences, such as an increase in pests or a decrease in soil fertility. Moreover, this could prevent regrowth and permanently damage the ecosystem, which in the long term could actually lead to more fire risk.

Finally, consider how companies set strategies to achieve their goals. Every goal — whether to increase ARR, grow users, or achieve profitability — usually has a set of value drivers. Yet very few companies, if any, have a causal understanding of the relationship between these value drivers and their goals. For instance, a company might invest heavily in certain marketing activities with the assumption that this will lead to increased sales. However, without a clear understanding of the causal relationship between marketing activities and sales (“Are our marketing activities actually causing sales, or is it something else?”), many strategies are misguided.

In fact, most CEOs and leaders, whether they know it or not, are operating as if they have evidence of a causal relationship between value drivers and goals. But in reality, these relationships are often based on untested or under-tested assumptions. This often leads to ineffective strategies and missed opportunities. (And sometimes it’s even worse: this study showed that most organizations are nothing more than the aggregation of uncoordinated, ad hoc decisions of people dispersed throughout the organization).

In all these examples, the complexity and unpredictability of the systems make it difficult to manage and steward them effectively with conventional tools and approaches. This is why we need new models and methodologies that can embrace and navigate this complexity.

This is where Bioform Labs steps in.

Our aim is not to force uncertainty into a deterministic box but to embrace it, understand it, and interact with it in a more meaningful way. We acknowledge that we cannot know everything with absolute certainty, but we can strive to understand the underlying dynamics, the causal structure that governs these systems. Through this understanding, we can predict, learn from, and influence these systems more effectively.

Bioform models, rooted in Active Inference, allow us to do just that. They don’t just learn patterns, they learn the system’s dynamics. They evolve with the system, adapting their understanding as new data emerges. This means they’re not just responding to change — they’re anticipating it.

Part of what’s unique about our approach is that we’re not just learning from the past, but we’re actively shaping our understanding of the future. By continuously refining our models and our actions based on what we learn, we can guide these uncertain systems toward desired outcomes.

As we continue to explore these landscapes, we are confident that our approach will lead to more effective strategies and decision-making, better predictions, and deeper insights.

In our next piece, we’ll dive deeper into this journey, where we’ll tackle the complexities of modeling systems that do not swing as predictably as our pendulum.

Continue to Part Two: Navigating Complexity: Unraveling the causal structure of real-world systems

Sign-up for Early Access

We’re excited to have you along on the journey, and we invite you to be part of it.

If you’re interested in navigating the complexities of our uncertain world with us, we invite you to sign up for early access to our platform at: https://www.bioformlabs.org

If you’re interested in whether bioform models can help address a specific challenge or need, like uncovering unexpected causal relationships to drive growth, let’s have a discussion. Reach out at joshua@bioformlabs.org or cory@bioformlabs.org.

Onwards and upwards 🙏

Bioform Labs: Building a toolkit for the 21st century