Systems Thinking 101

Aaron Brown
5 min read · Aug 26, 2022

In my first post I described myself as a software systems thinker on a climate journey. Before going too deep into that journey, I thought it’d be useful to say more about what I mean by systems thinking, and share some of the techniques and tools I’ve learned in my two decades of experience applying systems thinking to software problems (and their underlying people and organizational problems!). These define the lens that I’ll bring to the climate system in my subsequent posts. If you’re already familiar with systems thinking, you may want to skip this and jump to my next post where I start unpacking the climate system.

Let’s start with what “systems thinking” means. There are various formal definitions; Wikipedia says:

“Systems thinking is a way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts.”

To me, this captures some of the essence — notably the prerequisite of complexity, the importance of a holistic perspective on that complexity, and the primacy of understanding relationships. But when I think of “systems thinking”, there’s more to it. First, it’s more than just “making sense” — it’s also the ability to create change, often by using the sense made of the system to reveal ways to cut through complexity and achieve non-obvious outcomes.

Second, in my experience systems thinking always involves creating a deep understanding of not just relationships, but the underlying values, motivations, incentives, fears, and reward systems that drive the actors in the system and the relationships they form.

Third, systems thinking involves a constant peeling back of the onion, looking at motivations and behaviors as surface-level manifestations of deeper forces, and continuing to explore those until you hit the bedrock of first principles.

And finally, my version of systems thinking includes a generative component, an ability to form strategy that draws on all the layers of understanding to bring the system’s forces and actors to bear on each other to achieve outcomes beyond what can be done by direct, individual action.

There’s a common toolbox of techniques I find myself using over and over as I apply this more-robust version of systems thinking to new problems:

  • Mapping out what each actor and entity in the system values, their main concerns, and the reward structures in which they operate. This mapping helps delineate the constraints that define their choices and behaviors.
  • Mapping out the interactions between parts and players in the system, exposing hidden dependencies, simmering conflicts, and critical points where progress can be stalled or unblocked. (A minimal sketch of these first two mapping techniques follows this list.)
  • Creating and using metrics as a means of getting different actors aligned on what matters (trying to drive consensus on metrics often exposes hidden disagreements). Metrics are also a critical tool for cutting through bluster to expose the truth of progress (or lack thereof), creating accountability and rationalizing decision-making.
  • Establishing gates or control points (often tied to metrics) as guardrails to discourage behavior that acts against the desired system direction.
  • Identifying and creating missing capabilities at critical junctions in the system, removing constraints and making previously difficult behaviors easier and more rewarding. In the software world, these often take the form of new software tools or features, but they can also be operational programs or services that encourage system actors to do the right thing.
  • Crafting strategies based on all the above tools (maps, metrics, tools/programs, control points, and more) to guide and influence existing system forces into alignment, allowing the system to move itself into a new state at scale, achieving the desired change.
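
To make the first two mapping techniques a bit more tangible for software folks, here is a minimal sketch of a value/incentive map and a dependency map written down as plain data. The actors, values, and friction points are hypothetical placeholders (loosely foreshadowing the example below), not output from any real tool.

```python
# A minimal, hypothetical sketch of a value/incentive map plus a dependency map,
# written as plain data. Names and friction points are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Actor:
    name: str
    values: list[str]        # what this actor genuinely cares about
    rewarded_for: list[str]  # what its reward structure actually reinforces


@dataclass
class Dependency:
    upstream: str    # actor whose output or cooperation is needed
    downstream: str  # actor who depends on it
    friction: str    # where progress tends to stall


actors = [
    Actor("App team", values=["user growth", "feature velocity"],
          rewarded_for=["shipping features"]),
    Actor("Platform team", values=["device health", "battery life"],
          rewarded_for=["system-wide quality"]),
    Actor("Release team", values=["stability"],
          rewarded_for=["blocking risky launches"]),
]

dependencies = [
    Dependency("Platform team", "App team",
               friction="no shared definition of a resource budget"),
    Dependency("Release team", "App team",
               friction="launch gates not tied to shared metrics"),
]

# Misalignments jump out once the map is written down explicitly:
# things an actor values but is not rewarded for are the likely fault lines.
for actor in actors:
    gaps = [v for v in actor.values if v not in actor.rewarded_for]
    if gaps:
        print(f"{actor.name} values {gaps} but isn't rewarded for them")

for dep in dependencies:
    print(f"{dep.downstream} depends on {dep.upstream}: {dep.friction}")
```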

Let me make all of this more concrete with an example from my past career. One of the challenges we faced a few years ago at Google was the decline in user-perceived quality of our home-grown (“first party”) mobile applications. As they grew more complex and rich in function, these apps consumed ever more system resources (battery, memory, CPU). Viewed in isolation, any individual app’s resource consumption didn’t seem to matter; in fact, each app was incentivized to use more resources, since doing so let it add differentiating features and often run faster and do more. But when all those individually growing apps were installed on a phone, in aggregate they consumed far too much of the phone’s limited resources, slowing down the entire user experience, hurting battery life, and setting a poor example for the rest of the mobile ecosystem.

Tasked with solving this problem, I quickly realized it required a systems-thinking solution. There were many actors involved, from individual app developers to the underlying Android OS platform team to teams responsible for release processes and quality-control gates; each had different (and misaligned) incentives. The seemingly simple solution of mandating that every app reduce its resource consumption had been tried and had failed; the incentives did not line up for app developers to withhold features and functionality to serve a “commons” outcome, and moreover it was unclear how to balance resource allocation across apps with very different functions and business models. (Not to mention, there was no organizational “authority” who could credibly issue such a mandate.)

After applying the tools of values/incentives and dependency mapping, and peeling back the onion as far as it could go, it became clear that the solution would need to involve two aspects: first, bringing everyone’s disparate incentives into alignment through some sort of common currency; and second, finding a way to harness and adjust various teams’ intrinsic motivations such that they naturally aligned with the overall system outcome, without the need for top-down control.

In practice, we achieved this through a series of steps: we built a data science team that created a set of standard resource-health metrics that applied across all apps as well as the device-wide platform; we built models connecting those metrics to the business outcomes teams cared about (app adoption, perceived user quality, phone battery life, etc.), enabling them to understand and weigh the externalities of the choices they were making; we provided easy-to-use tools that helped apps avoid unnecessary resource bloat early in the development process; and we harnessed the release teams’ desire to maintain quality control by connecting their “release gates” to the common metrics.
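
To give a flavor of what connecting release gates to common metrics can look like, here is a hypothetical sketch. The metric names, budgets, and gate logic below are my own invention for illustration, not the actual system we built.

```python
# Hypothetical sketch of a release gate tied to shared resource-health metrics.
# The metric names and budgets below are invented for illustration.
RESOURCE_BUDGETS = {
    "battery_mah_per_day": 25.0,   # assumed per-app daily battery budget
    "resident_memory_mb": 150.0,   # assumed per-app memory budget
    "cold_start_cpu_ms": 400.0,    # assumed CPU budget for app startup
}


def release_gate(app_metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a candidate release.

    A release is blocked when any shared resource-health metric exceeds
    its budget, holding every team to the same device-wide "currency".
    """
    violations = [
        f"{name}: {app_metrics.get(name, 0.0):.1f} exceeds budget {budget:.1f}"
        for name, budget in RESOURCE_BUDGETS.items()
        if app_metrics.get(name, 0.0) > budget
    ]
    return (not violations, violations)


allowed, violations = release_gate({
    "battery_mah_per_day": 31.2,
    "resident_memory_mb": 120.0,
    "cold_start_cpu_ms": 380.0,
})
print("release allowed" if allowed else f"release blocked: {violations}")
```

The real gates drew on far richer models, but even a simple budget check like this turns an abstract “commons” problem into a concrete, per-release conversation.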

I’m eliding a ton of detail here, but in the end, every app team knew where it stood in terms of its impact on the larger phone ecosystem, it had the tools and incentives to make better choices (because that impact on its business metrics was now visible), and we had aligned everyone’s intrinsic motivations with the common outcome. Once this systems-driven strategy was established, we saw no further regressions in app quality, and overall device health started improving — all without any micromanagement of the teams.

This is just one example and painted with a broad brush, but hopefully it illustrates the way a systems-based approach can crack complex problems, how managing and redirecting existing forces in a system can enable bigger outcomes than direct changes, and how some of the tools like value/incentive mapping and metrics can play a key role in these approaches.

Ok, that was quite a detour from the topic of climate, but it lays the groundwork for everything I’m planning to write about going forward. In my next post, I’ll bring us back into the deep end of climate, and start articulating the system that I see underpinning our response to climate change.

Aaron Brown

Systems thinker & long-time product management leader focused on creating change in complex systems. Pivoting to Climate. All opinions are my own.