Adaptive Management Across Project Cycles: Looking at Coherence in Time

Florencia Guerzovich
13 min read · Aug 8, 2023

Post 3 of 3

This is a series about Monitoring, Evaluation and Learning (MEL): specifically, whether sets of interventions/portfolios are adding more together than each one would produce on its own. In post 1, I pointed to coherence, the new OECD-DAC evaluation criterion, as a way to bridge the ambition of bringing bigger change with the MEL world. In post 2, I shared 3 of 4 practical lessons I’ve learned in experimenting with MEL systems and exercises that focus explicitly on interactions of interventions/portfolios.

In this post, I bring Paul Pierson’s groundbreaking argument for social science to MEL. Paraphrasing: most contemporary MEL takes a “snapshot” view of interventions and portfolios, distorting their effects and meaning by ripping them from their temporal context. Instead, we should place in time those interventions/portfolios that aspire to add more than the sum of their parts, constructing MEL systems that look at “moving pictures” rather than taking snapshots. This can vastly enrich our understanding of complex systemic dynamics and greatly improve the theories and methods we use to explain whether combined effects are greater than individual effects under conditions of uncertainty. As David Jacobstein reminded me, this is related to Political Economy Analysis/Thinking and Working Politically efforts to put work in context.

But time is abstract. Here, I present practical ways in which we may do better at situating present work as part of a broader story of how change happens within systems. Usually, we look at short-term project periods (the present) as a silo, hoping to make a bit of linear progress, and most of us understand projects as simply running in parallel.

What does adaptive management across project cycles look like?

When we realize that, in many cases, the value of those short-term projects is to be part of a process that builds on and adapts the past and adds layers for future work, the outlook changes. The outcomes that matter for the movie may be different from those that are meaningful in the snapshot.

Do you focus on the pink in the picture below, leaving the purple building blocks and actions in practice invisible? Or do you include the purple as part of your MEL of the pink and change your perspective accordingly?

Two moves can help us do that better. First, the OECD-DAC invites us to think creatively about time when it suggests evaluators focus on conditions for actual results and/or prospective results, rather than only assessing interventions and portfolios based on today’s observables. Second, I follow practitioners who pragmatically defy the zero-sum debate between short-term and long-term approaches by weaving the two together, often behind the scenes.

Lesson Four: Take Time Seriously

Time is central to MELing the combined effects of multiple parts, and there are many ways to take it into account when designing a MEL approach. For example, while the lessons I am sharing here come from ex-post and real-time exercises, adaptations were necessary for each approach. Some of those adaptations would require a separate post; others should become clearer as I discuss other ways in which we need to take time into account.

A. The present and the future

Complex, synergistic effects take time to mature and to achieve the resilience to survive. This means that while we are conducting real-time evaluations we are unlikely to observe them. Even in final, ex-post evaluations we may not observe that the whole is adding more than the sum of the parts, as effects may take longer to appear than the time frame of the evaluation allows. Alix Wadeson and I illustrate this point in a forthcoming paper with the case of the TAME project in Mongolia. TAME sought to support parent-teacher associations (PTAs), school authorities, government authorities, civil society groups working in different sectors, and development partners to solve problems in the education system, collectively adding more than each would on their own. The World Bank’s Implementation Completion Report noted that additive effects seemed stalled at project close — probably biased by its focus on short-term reporting and accountability. Yet when the independent evaluation (with a learning goal) was delayed and carried out months after the project’s closure, it found positive interactions at work. The 31 Parent-Teacher Associations established by TAME were still functioning and playing their roles, continuing to find ways to collectively solve problems at school level and rekindling the possibility of other synergies with other actors in the education system.

Recent guidance from USAID-funded Feed the Future goes further. It argues that no one knows in advance how to change complex systems: “Rather than viewing systems change assessments as a backward-looking exercise to be conducted at the end of a program to justify impact… (one should ask) ‘How might we assess systems change more frequently and effectively, generating feedback our teams need to better understand and catalyze change? Can this help us leave a more impactful legacy?’” That present and forward lens, which acknowledges that in the face of uncertainty systems change is about discovery and learning, is integral to its MEL.

While little discussed, the OECD-DAC also refers to this challenge. It proposes to think creatively about how to deal with it in relation to sustainability (i.e. the criterion that is explicitly about time). It suggests that evaluators embrace complexity and uncertainty by focusing on conditions for actual sustainability and/or prospective sustainability, rather than only assessing interventions and portfolios based on observables at the time of data collection. I have found it quite useful to take a similar approach when assessing coherence and whether projects and portfolios are on the right track (or making the best possible bets) to jointly add more value than they would on their own, adapting the OECD-DAC’s language:

a. Conditions for actual coherence: Examine if and how opportunities to support the continuation of positive additive effects from the intervention/portfolio have been identified, anticipated or planned for (or already exist), as well as any barriers that may hinder the continuation of those effects.

For instance, a focus on conditions for coherence of a portfolio might be a good reason to look into what a project team is doing to integrate interventions in the health and governance sectors, as well as whether the health and governance units in the donor agency funding them are doing their part to create an enabling environment for those additive effects, with an eye to the future.

In my work across many organizations, and in the assumptions of many others, there is one set of actors whose work seems particularly critical to MELing additive effects. The Skoll Foundation calls them orchestrators (language I’ve used in my work with Pact and the Open Society Foundations); the Bridgespan Group calls them catalysts; the Wenger-Trayners call them systems conveners (language that resonates in my conversations with social accountability practitioners at PSAM and beyond); and there are those who prefer talking about gardeners. These are different types of behind-the-scenes or backbone actors that help others overcome the collective action problems which hinder additive effects, and amplify their efforts. They do so by looking at the landscape in which they operate, spotting unrealized potential across a field of stakeholders and sectors, and driving connected, networked decision-making and action to increase the chances that interdependencies, interactions, and synergies across different actors have additive effects.

MEL systems often focus on the people who are the most visible protagonists of an intervention: the implementation team, the target stakeholders, the beneficiaries. But in collective impact efforts, backbone organizations, such as portfolio managers and all those mentioned above, perform critical but invisible functions to enable overall coherence. These include convening, brokering, or performing a multitude of strategic and operational functions to increase the chances that others do not miss opportunities to interact, coordinate or collaborate to create positive effects (or to mitigate the risk that they undermine each other’s work). For a MEL system that seeks to shed light on positive interactions, omitting their backbone role is a weak point.

And yet, I’ve found that putting these actors in the spotlight can create a normative dilemma for organizations committed to bottom-up or locally-led approaches, which seek to put local stakeholders at the helm of their own agendas, solutions, and MEL. At the heart of this dilemma is the (wrong) perception that focusing on a backbone organization’s role is zero-sum with the main actors in the story. It is possible to make these actors visible while making clear that they stand in the background of implementation. It is imperative to do so if we expect M&E to inform their ongoing learning, adaptive management, and decision-making. Going back to the movies: have you ever seen a good one that has a protagonist but no secondary characters, sidekicks, mentors, or what have you? Backbone actors matter not only for causality’s sake but also for good storytelling.

b. Prospective coherence (i.e. the future potential for coherence, given factors in the operating environment that could favor it): This assesses how likely it is that any planned or current positive additive effects of the intervention/portfolio will continue, usually assuming that current conditions hold.

For example, my colleagues Sol Gattoni, Dave Algoso, and I found, in two projects commissioned by the Open Society Foundations, that “(Social) Learning and collaboration are two sides of the same coin, because working with others provides the best opportunities to learn (together) and also creates space to apply that (joint) learning”. The value of social learning as a plausible basis for additive effects is why, in many cases, I find the Wenger-Trayners’ value creation framework a useful tool to capture aspects of the pathway to additive effects in the short, medium and even the long term. The opportunity to learn with others in the short term can make us feel good, empowered and energized, and can provide ideas. When we learn with others we also develop relationships, trust, shared histories and mental models, all of which make it easier to pick up the phone and try to find synergies for future interventions down the line.

I have turned this insight into a relational rubric to MEL prospective coherence and have been testing and refining it by doing — unfortunately I cannot share results for now but may share the tool sooner rather than later.
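To make the general idea concrete while the actual tool remains unpublished, here is a purely illustrative sketch in Python of what a minimal relational rubric could look like. The criteria, level labels, and thresholds below are hypothetical placeholders of my own, not the real rubric: the point is only that relational conditions (learning together, trust, joint action) can be scored on ordered levels and rolled up into a rough prospective-coherence judgment.

```python
# Hypothetical sketch of a relational rubric for prospective coherence.
# Criteria names, level labels, and thresholds are illustrative assumptions,
# not the author's actual (unshared) tool.
from dataclasses import dataclass

# Ordered levels, loosely echoing the value-creation idea of
# immediate -> potential -> applied value.
LEVELS = ["absent", "emerging", "established"]

@dataclass
class Criterion:
    name: str
    question: str  # the evidence question an assessor answers

# Illustrative relational criteria (placeholders).
CRITERIA = [
    Criterion("shared_learning", "Do actors learn together across interventions?"),
    Criterion("relationships", "Are trust and shared mental models developing?"),
    Criterion("joint_action", "Is joint learning applied to coordinate future work?"),
]

def assess(scores: dict[str, str]) -> str:
    """Roll rubric scores up into a rough prospective-coherence judgment."""
    for name, level in scores.items():
        if level not in LEVELS:
            raise ValueError(f"unknown level {level!r} for {name!r}")
    # Convert each criterion's level to its rank and average the ranks.
    ranks = [LEVELS.index(scores[c.name]) for c in CRITERIA]
    avg = sum(ranks) / len(ranks)
    if avg >= 1.5:
        return "strong prospects for coherence"
    if avg >= 0.75:
        return "emerging prospects; conditions worth nurturing"
    return "weak prospects under current conditions"
```

For instance, scoring shared learning as “established” but relationships and joint action as only “emerging” would land a portfolio in the middle band, signaling conditions worth nurturing rather than a verdict on impact.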

B. The past

In many interventions, MEL begins after implementation has begun, or at least once money has been disbursed. The challenge is that, in many cases, the conditions for synergistic effects, and their likelihood, have been partly defined well before the intervention starts, by stakeholders who may not be part of the implementation team. This creates an important blindspot for the assessment. One such example is the set of parameters that donors define when scoping and designing strategies and portfolios, fixing many of the ambitions and conditions for additive effects well before any specific intervention is implemented. This is why it is important that we begin to see donors such as Luminate, the Open Society Foundations’ former Fiscal Governance Program, or the Global Partnership for Social Accountability take their own role as part of the mix more seriously.

Another example is what Tom Aston and I are calling layering of projects. Layering is adaptive management across project cycles. It entails building on a previous project without replicating it or continuing a linear trajectory. Instead, new projects consider the new baseline, the new operational and contextual conditions and lessons from the past, and iterate on goals and strategies, much like this series of sequential World Bank projects in the Dominican Republic.

We have noted that, in layering, many donors and implementing organizations “use short-term projects to play incremental, long-term games”, as a Pact colleague put it. Short-term projects whose short-term goals may look like they are not adding up on their own can look different when we consider that they are laying foundations for new targets in a future cycle, and/or that, along with a project that began and ended in an earlier cycle, they are producing more than any individual project could have done on its own. In the case of Pact-Zimbabwe, I had to connect the dots between the final evaluation of one project and the inception, mid-term review and final evaluation of the subsequent ones to reconstruct how layering was orchestrated.

In a recent webinar, I heard new examples that I’d love to dig deeper into. Jenifer Swift-Morgan from Chemonics highlighted the value of this kind of dynamic when reflecting on how their partner, the Municipal Development Institute, under USAID/Ukraine Democratic Governance East, managed the effects of the Russian invasion of the country. In the same webinar, Le Hong Minh, Management Board Member of the Start-Up Vietnam Foundation (SVF), working under the USAID/Vietnam Strengthening Provincial Capacity Activity, shared how they look for other donors that will allow them to do what we are calling layering. She described a time-consuming journey of finding new funders and then bringing the different funders together, while “translating” the different approaches of the different partners. All of these are often invisible yet catalyzing activities that require building the orchestrator’s capacity to carry them out.

I suspect this pragmatic and politically savvy defiance of the short-versus-long-term zero-sum framing is much more common than we think! In these cases, many seemingly siloed, sequential interventions are woven together into a single process. They should be assessed as part of a single story. When MEL fails to make these connections explicit, we miss a key opportunity to tell each project’s, and all projects’, contribution to additive effects. One way World Vision addressed this challenge was by investing in a retrospective evaluation of its long-term portfolio of work in a sector and country, showing how later projects built on earlier, more modest investments. Learning reviews can also help, but we should be looking for ways to build this into our ongoing reflection and monitoring, as well as our storytelling.

Towards additional practical guidance

As I mentioned earlier, applying these insights to different MEL exercises (e.g. learning reviews, ex-post evaluations, real-time monitoring) requires more specific adaptations and guidance, and different exercises require different ways of taking time seriously. Tom Aston and I are writing more about MELing layering, which partly overlaps with, but is different from, what it takes to build a relational rubric that focuses on conditions for or prospective coherence.

Causal assessments about interactions need to address concerns with endogeneity (i.e. when the factors that are supposed to affect a particular outcome themselves depend on that outcome). This is a technical issue that social scientists researching interactions between national, sub-national and international effects, among other complex processes, have long found ways to address in their research designs by taking time seriously (e.g. here, here). There are a few possible ways forward. One can leverage the sequences in theories of change more forcefully, including interacting them with contextual configurations. One can purposely sequence MEL so as to thoughtfully criss-cross levels (e.g. regional, national, and international), a methodological insight which I have not seen consistently applied in a growing body of multi-level research and evaluation for philanthropy and development.

Those and other tricks of the trade have informed the bricolage I mentioned in post 2 of the series. We don’t need to reinvent the wheel, but do those insights need to be translated in a separate piece for MEL practitioners? Do they need to be considered in other capacity building and training efforts, so that we move beyond searching for what Estelle Raimondo calls the “miracle (MEL) consultant” and Andrea Azevedo calls the unicorn consultant? For now, it seems important to focus on what we may gain by taking time seriously when MELing whether the whole is more than the sum of the parts, before we get into the nitty-gritty. We have to walk before we can run.

Recapping the 3 posts: Right-fitting MEL systems for synergistic effects

The trend towards seeking bigger change, including systems change, means that we increasingly see organizations concerned with whether the whole adds more than the sum of its parts. The same organizations still face pressures that push towards ill-fitted MEL (e.g. the need to also report on the results of individual interventions, or stakeholders who require oversight and evaluations that punish systemic approaches), which makes for an even more difficult situation. The OECD-DAC’s addition of coherence to its evaluation criteria ensures that this concern will increasingly make it into MEL systems and terms of reference. But if you are not there yet, a pre-lesson is this: if your intervention/portfolio/organization seeks additive effects, then you have to focus energy and resources on right-fitting MEL processes and systems that ask the question, prioritize relational outcomes, and focus on the movies rather than the snapshots. The blindspot will not correct itself.

The four practical lessons I outlined provide an entry point to begin reflecting explicitly on what is different about MEL for additive effects. These lessons can help add new insights and course-correct a particular MEL system. They can also help us spot patterns across portfolios and accumulate knowledge. This list, for example, is the product of working with a range of teams whose experiences complement my own, adding important insights to my own experimentation. I then wove lessons across MEL projects and organizations, making a point of reflecting on which insights traveled well or not, and why. One result of this exercise is an evolving relational rubric to monitor, evaluate and learn in real time how projects, programs and organizations are doing in paving the way to ensure that the whole adds more than the sum of the parts. I hope to share more soon.

PS: As a consultant, I am often not able to publish the results of each exercise, but I can anonymize some work, share the methodological insights, and invite others to join the conversation. If this resonated with you, please reach out via LinkedIn. If you appreciated the work that went into writing up these emerging insights, please consider buying me a coffee (which can free up time to continue analyzing and sharing insights across experiences). I am also open to exploring collaborations with others keen to solve this and other complex MEL challenges.
