How to track and report on progress when working with complex problems

UNDP Strategic Innovation
Mar 14, 2023


By Søren Vester Haldrup

UNDP has set up an M&E Sandbox to nurture and learn from new ways of doing monitoring and evaluation (M&E) that are coherent with the complex nature of the challenges facing the world today. In the Sandbox we prioritize action and practical insights ‘from the front lines’ about how to do M&E differently, rather than abstract theory and general principles (though those are important too!). In particular, we seek to explore how M&E needs to change when we work on complex problems in uncertain and rapidly changing contexts, moving beyond the more ‘linear’, control-focused, and projectized M&E that tends to dominate today.

We convene a series of participatory sessions as part of the M&E Sandbox. In each session we collectively explore a theme in depth, inviting practitioners to speak about their experience testing new ways of doing M&E that help them navigate complexity. You can read digests and watch recordings of our previous Sandbox sessions here and here and consult our overview piece on innovative M&E initiatives and resources.

In our most recent Sandbox session we explored the challenge of progress tracking and reporting. This blog post provides a summary of the discussion and includes the recording, an overview of questions and answers from the discussion, as well as the many resources shared during the session.

But first, let’s consider why progress tracking and reporting is challenging: meaningful transformational (systems) change is a long-term process, it is fraught with uncertainty, and we rarely know up front how best to support change. This makes it hard to gauge whether we are making progress. For instance, we may not want to look at the implementation of activities as an indicator of progress because we will be adapting these activities on a regular basis. Similarly, traditional quantitative KPIs (with a baseline, value, and target) may not be that helpful either. So how do we know if we are on track?

Recently we explored these challenges with panelists from a range of organizations that are all rethinking how to capture and report on progress when working to tackle complex systems challenges. The panelists were: Kecia Bertermann from Luminate, Veronica Olazabal from BHP Foundation, Leslie Wingender from Humanity United, Tatiana Mosquera Angulo from Ideas for Peace Foundation, and Mamadou N’Daw from UNDP. The session also drew many valuable contributions from the more than 200 participants who tuned in from around the world.

The session brought out a number of important themes (further unpacked in the summary below):

  • It is possible to capture progress in many different ways.
  • Reporting can be conversation based and learning focused.
  • Changing how we track and report on progress requires trust and shifts in power.

From KPIs to a hypothesis-based learning approach

Kecia kicked us off with an outline of how Luminate is rethinking how it tracks performance and progress. At an organizational level, Kecia noted, KPIs are not a useful tool for Luminate to capture progress and generate learning. Instead, Luminate has shifted to a hypothesis-based learning approach where they use a simple framework (see below), comprising a few core ‘building blocks’, intended to ensure that their evidence collection is focused and informs reflection and sensemaking. This approach includes collection of both confirming and disconfirming evidence, and it helps them navigate uncertainty, drive towards action, and unveil biases. However, adopting and mainstreaming this approach across the organization has not been without challenges. It entails a very different way of thinking and requires a mindset shift among their M&E team as well as the teams that issue grants.

Luminate’s learning framework

Moving from a culture of compliance to a culture of learning

Veronica described how BHP Foundation has overhauled its approach to monitoring, evaluation and reporting to incentivize learning, adaptation, and a focus on impact. This includes a shift from retrospective and unstructured program evaluations towards a deeper, “forward looking” approach structured around testing the most critical and uncertain issues, assumptions, and hypotheses about impact, scale, and sustainability. They use these program evaluations to ‘connect the dots’ across the foundation’s portfolios on what is and isn’t working; this happens 2–3 times a year in so-called learning huddles. When it comes to capturing progress and impact, they try to measure what matters most, focusing on five dimensions of impact.

Five dimensions of impact

Lastly, BHP has updated its partner reporting requirements to align with the above: shifting orientation from compliance to learning, reducing the frequency of reporting, and moving away from KPI reporting on all activities towards a limited set of milestones and deliverables that incentivize adaptation. In other words, reporting becomes a conversation about what we are learning and how we adapt based on measuring what matters most, rather than about whether or not we have hit our KPI targets.

BHP’s evolving practice of reporting

Tracking change through stories

Next up, Leslie from Humanity United (HU) explained how they, as an organization, have evolved their efforts to learn and track progress at a portfolio level with their partners/grantees. They initially adopted an outcome harvesting approach to understand whether their peacebuilding portfolio was making progress. However, they quickly decided to instead talk about stories — in part to avoid some of the challenges associated with using the word ‘outcome’ (for some people it carries a very particular meaning and triggers a certain set of assumptions and incentives). In their story-based approach they therefore allowed greater variety in what progress looks like — asking people to write stories about anything that individuals, groups, institutions, or parts of institutions do differently. This exercise helped them gain a much deeper understanding of how change happens in the contexts where HU works — as well as how the system may sometimes ‘push back against change’.

Allowing for greater variety in the types of change we are looking for

In order to really enable and incentivize people (both HU grant-making officers and grantees) to contribute open and honest insights, Leslie’s team had to make explicit that the stories would not be linked to funding decisions. Having run through this process once and built trust among grantees about how the information is used, HU has now begun to engage grantees more directly in generating stories.

Rewiring the relationship between ‘funder’ and ‘grantee’

Tatiana from Fundación Ideas para la Paz was up next. Tatiana provided a fresh perspective on the challenges and needs facing people and organizations working directly to effect social change. She also shed light on the benefits and learning curve associated with HU’s grantmaking process and reporting requirements. As an HU grantee, Fundación Ideas para la Paz initially looked at HU’s new story-focused reporting process with suspicion. In practice, Tatiana noted, funders are often not interested in learning even though they may say so on paper. Furthermore, funders often do not allow for much flexibility, and they tend to maintain an unequal power relationship with their grantees: “You always feel that you are the underdog, and you have to be…like…quiet and you have to pretend to not have any errors.” With HU, however, it was different. The emphasis on stories (instead of KPIs) and a more equal relationship between funder and grantee slowly created a very different type of relationship — one much more conducive to surfacing genuine learning (including mistakes), encouraging adaptation, and allowing for a more holistic way of tracking progress.

Building the trust required to go on a genuine learning journey

Rethinking Results Based Management in UNDP

Before moving into a wider discussion, Mamadou provided a quick overview of how UNDP is trying to rethink Results Based Management (RBM) at the corporate level to encourage more focus on higher-level impact (captured, for instance, through narratives) rather than aggregation of output-level KPIs. The M&E Sandbox is of course part of this corporate process of rethinking M&E, so keep an eye out for updates later this year.

Key takeaways

For me, this session was very thought provoking. The rich discussion is hard to summarize, but a few points stand out:

  • We can capture progress in different ways: progress tracking and reporting is so much more than KPIs. For instance, stories are a great way of capturing higher-level change and they can be produced collectively as a learning exercise.
  • Reporting can be conversation based and learning-focused: reporting doesn’t have to be about the submission of quantitative values against a set of pre-specified KPIs to unlock the next tranche of funding. It can be regular touch points where relevant parties meet, reflect, and learn together, and where they use these insights to make decisions and adapt.
  • Doing it differently requires trust and shifts in power: there is often mistrust and an unequal power relationship between funders (or managers or headquarters), on the one hand, and those working directly to tackle real-world problems, on the other. We need trust building and much more equal relationships between funders and those receiving funding to encourage progress tracking and reporting in ways that enable genuine reflection and learning.

If this post has sparked your interest, I recommend that you watch the full recording of the event and browse the Q&A and list of resources below:

Questions and answers

Question: Do we not need grantee reports for institutional memory and to make sure we document how programmes are progressing?

Answer: (Kecia) Grantee reports are incredibly important for documentation and sensemaking. We include grantee reports and learning conversations with grantees along with other evidence sources (such as evaluations and secondary data).

— — —

Question: What structural changes might organisations allow in order to integrate this flexibility (systems are often rather rigid)? And if learning is also management, to what extent should MEL people interact with management and operational people?

Answer: (Kecia) Every organization likely approaches this differently; at Luminate we’ve found that taking this approach helps foster our efforts toward becoming a learning organization. For example, we’re inviting management and operational people to participate in our evidence collection and sensemaking, especially when our learning questions have implications across the team.

— — —

Question: Can you reflect on the appropriateness of ToCs in complexity? One of the key things I see them doing is surfacing known assumptions.

Answer: (Mamadou) In development interventions, the ToC in essence deals with social change, which can be a messy, complex affair rather than a predictable linear process. In such circumstances, we have to be adaptive, iterative and non-linear. Learning from what works and what doesn’t work will be key to the success of the ToC design.

— — —

Question: Do you have a specific example that you can share about how you are collecting and presenting evidence using your hypothesis-centered framework?

Answer: (Kecia) The evidence that we’re collecting really depends on the learning question. We have a lot of conversations when we’re building our evidence collection plans about the perspectives which are important to represent in the data, as well as the specific types of data that would help us answer the question. So depending on the question, the evidence could be a mix of conversations with our partners (grantees), social media reactions, conversations with government stakeholders, citizen surveys, evolving positions of key stakeholders, timeline analysis, evaluations… The evidence itself is collected by our staff/teams, our partners, and in some cases we commission a third party to help us collect the relevant data. We then use all of this data in our learning conversations as we think about the “so what/now what”.

— — —

Question: It would be helpful to understand the resource inputs required. How much does each of the presenting organizations spend on M&E and how many full-time staff do they have managing the process?

Answer: (Veronica) At the BHP Foundation, there is an internal and external lift. We budget around 6–7% at the program level. Separately, each of our partners has dedicated resources for mid/final evaluations. The amount really depends on whether we bring along an MLE partner earlier in a more developmental (i.e. developmental evaluation) type of way or whether we stick to formative/summative evaluation.

— — —

Question: Leslie, thanks. Would love to hear more about how you’re conceptualizing and applying the ‘sensemaking’ aspect. Who is involved with that / what does that look like?

Answer: (Leslie) We are currently developing key learning questions related to our areas of focus. We are doing this with our partners. As part of our strategy refresh process we are engaging mixed groups of partners, which has been amazing and is laying the foundation for a process of sensemaking.

— — —

Question: At Ecorys, we are doing a lot under the UK’s CSSF through various MEL contracts to help FCDO capture progress and lessons via change stories. I’d be interested to know more about how to supplement these with other evidence sources to triangulate and develop a more robust evidence base.

Answer: (Leslie) I like the word “story” to pull people into writing. I also found our definition of an outcome story to be a basis for how we can triangulate: “an outcome is anything that others — individuals, groups, institutions, parts of institutions — do differently. These actions can be large or small. Significance, not size, matters. Context matters.” We asked: “What happened? Why is this significant?” This information, along with the date, the partner, and our HU contribution, allowed us to dive deeper into each story.

— — —

Question: At what level/unit does Luminate create learning frameworks? I’m trying to get a sense of how big and heterogeneous vs. homogeneous those portfolios are. For example, is there a single learning framework for 25 grants working on issue X in a single geography? This question also applies to BHP Foundation. I find that for large, global multi-sectoral strategies (e.g., agriculture), theories of change, assumptions, and learning questions can sometimes get a bit unwieldy, so I’d love to hear how other foundations right-size the level of the theory of change and learning.

Answers: (Kecia) We create learning frameworks for each of our internal teams. For example, our Latin America funding team has a learning framework which includes five learning questions that test their hypotheses across their portfolio. (Veronica) Great question. I will also share that we do this at the program level as we connect investment to theory of change to outcomes and then hypotheses/learning questions. I will do a voice-over on this real quick during the discussion.

— — —

Question: Some of the approaches described would seem to work best for complicated challenges, rather than complex ones. Would you agree? How do you respond to more complex challenges, where hypotheses might be a long way off? E.g. where a ‘probe-sense-respond’ strategy might be necessary.

Answer: (Kecia) We’ve found in practice that many of the challenges we face have both complex and complicated elements and tend not to sit neatly in a specific category. We have actually found that thinking in terms of hypotheses often helps us articulate the ways in which we want to probe and test our approach to a system.

Additional resources

Here’s a list of resources shared during and after the session:

If you would like to join the M&E Sandbox and receive invites for upcoming events, please reach out to contact.sandbox@undp.org.

A bit more about the speakers:

Kecia Bertermann, Director of Learning and Impact at Luminate. Kecia was previously Director of Digital Research and Learning at Girl Effect. Prior to joining Girl Effect, Kecia was Senior Monitoring, Learning and Results Manager at Nike Foundation.

Veronica Olazabal, Chief Impact and Evaluation Officer at BHP Foundation and former President (Board of Directors) at the American Evaluation Association. Veronica is also Adjunct Associate Professor at Columbia University and has previously worked at the Rockefeller and Mastercard Foundations.

Leslie Wingender, Director, Peacebuilding at Humanity United. Leslie has also worked for Mercy Corps, Catholic Relief Services, and Partners for Democratic Change.

Tatiana Mosquera Angulo, Coordinator at the Centro de Formación y Pedagogía (Training and Pedagogy Center) at the Fundación Ideas para la Paz (Ideas for Peace Foundation) in Colombia. Tatiana has also worked for UNESCO and with the Ministry of Education in Colombia.

Mamadou N’Daw, Policy Adviser and Team Leader at the UNDP Bureau for Policy and Programme Support, where he works on results-based management and impact measurement.

