AI as a Systems Change Thought Partner

Jewlya Lynn
AI & Systems Change
Mar 10, 2024

I’ve been listening to and learning from the massive collection of new AI blogs, and I’m thankful for all the thinking happening. However, I’m also seeing a gap — who else is using AI as a systems change thought partner? What can we learn together about the ways AI can show up when we work systemically?

For those of us working on systemic change in complex, dynamic systems, there is no substitute for human knowledge — the lived experience of being in, and seeking to change, systems from global security to human rights to education to a single neighborhood’s multifaceted needs. Yet lived experience has boundaries: a collection of people with relevant experience and varied types of expertise can discover and understand many parts of a system, while still lacking the time and space to expand their knowledge far beyond their day-to-day exposures.

It is this expansive role that I have discovered AI can fill. When we engage in learning about what has happened in the system (the past), what is happening right now in a dynamic system (the turbulent present), and what is emerging that might influence the system over time (forecasting futures), there is knowledge in the public domain that we often lack at our systems change tables.

What happens if we invite AI to the table as another stakeholder — someone who brings knowledge that we value, but also brings bias, limitations on the knowledge, and an underlying point of view that isn’t always clearly articulated, not unlike all other stakeholders at our table?

If we treat AI as another stakeholder, we might be able to ask and explore questions well beyond the knowledge of our current partners and bring new information we can make sense of and interpret, even as we acknowledge its limitations.

I’ve been experimenting with this approach in multiple settings, which I will share in a series of blogs over the next few months. The settings have ranged from slavery issues in the global seafood industry, to the early childhood system in the United States, to training GPTs to help with methodological questions when seeking to surface learning in complex, dynamic settings. The use cases have included:

  • Establishing a base of knowledge to support my own engagement, particularly in highly dynamic settings with complicated and often misunderstood histories and current context;
  • Testing the assumptions underlying a systems map, recognizing the map was developed with a given set of stakeholders at a specific point in time;
  • Testing a strategy’s continued relevance as designed by exploring whether assessments made about systemic needs have held true over time;
  • Seeking examples from around the world of other ways of solving a common problem, helping a group of stakeholders to spark new ideas for their own work;
  • With partners, digging deep into a specific issue that has challenged the systems change work, in part to challenge our own assumptions and biases about the nature of the problem and how it can be addressed;
  • Actively engaging the AI during a systems sensing dialogue, rather than using it as an input gathered before the dialogue or as an assessment tool afterwards; and
  • Engaging AI analytical platforms in analyzing and making sense of data about systems utilizing a mix of publicly available information and, in one case, confidentially sourced information.

Across these types of uses, the AI tools have included ChatGPT, Perplexity, Copilot, and Atlas.ti among others I’m exploring right now. Each one seems to have its strengths and weaknesses related to systems change work. I’m also learning about Apurva, an AI partner created with a more intentional equity orientation — making the collective wisdom of those on the ground more visible and usable.

You may notice that most of these uses do not involve inputting confidential information into any AI platform; only the last use case includes confidential information. In the remaining use cases, we (my partners at the table and I) did not name the organizations or initiatives involved, did not indicate who was participating, and did not copy and paste anything from internal documents, disclose strategic priorities, or otherwise share private information. Rather, by engaging in a conversation with an AI partner, we brought another perspective into the dialogues, opening our thinking and surfacing new ideas. Nor did we treat this perspective as “true” or more legitimate than other perspectives.

In the blog series to follow, I will share case studies of these different uses, along with other examples, including the types of prompts used, where and how deeper information was surfaced, and how we dealt with issues like bias, hallucinations, and the AI’s tendency to please.

I intend this blog series as an opening to a discussion about how to use AI to support systemic change work, not as an opportunity to provide instructions. I know I’m early in this journey. Are you using AI for similar purposes? I’d love to hear more about your experiences!


Jewlya Lynn is a facilitator, advisor, and researcher who works with leaders dedicated to making a difference in the world by solving complex problems.