Three out of Four Practical Lessons About Right-Fitting MEL for Additive Effects

Part 2 of a 3-part blog post series

Florencia Guerzovich
9 min read · Aug 1, 2023

In the first post of this series, I argued that we can do better to ensure MEL systems explicitly contribute to the growing set of expectations and claims that interventions, portfolios, and systems-work will add up to more than the sum of their parts. I also introduced the OECD-DAC's new evaluation criterion, coherence, which provides an important entry point to address this challenge (from a different starting point, colleagues working at the UNDP Strategic Innovation Unit seem to have reached a similar conclusion).

This post presents 3 of the 4 lessons I’ve learned in experimenting with MEL systems and exercises that focus explicitly and concretely on coherence and the different types of interactions within interventions, portfolios, or systems.

1. Prioritize: Which Parts Do You Think Should Be Adding Up Now?

Many systems-change gurus and toolkits tell us that we need to be savvy and realistic by right-fitting our approach to context, mission, capacities, comparative advantage, resources, and so on. One size does not fit all.

Alternative Perspectives on Additive Effects

The same goes for MEL systems, which can rarely be transformed and resourced to look into all the parts that need to come together to produce synergistic effects. Any large organization in development, such as a donor, large implementing partner, international non-governmental organization, or consortium, contains a range of distinct opportunities to bring together parts that add up to more than their sum. The entry points in the figure above are often interrelated: for example, the way parts of an organization interact affects the approach they may take to designing and supporting a portfolio, which in turn may affect how its implementers relate to each other and to others in their sphere of influence and/or the broader system. The figure presents a non-exhaustive list of parts on which one might focus MEL.

A critical choice when incorporating a focus on additive effects into a MEL system that previously ignored them is to prioritize which combination of parts it will look into by framing the boundaries of the exercise: what is in and what is out, considering what might be the most meaningful analysis for key stakeholders at a point in time. Sometimes this will mean focusing our attention on whether implementing partners are creating positive synergies with each other (or mitigating negative interactions and missed opportunities). Other times, it is more important to first gauge whether the different units in a donor organization that support those partners are themselves adding up to more than the sum of their parts by creating the conditions for positive synergies within portfolios, across portfolios, or between portfolios and external interventions. Coincidentally (or not), Milica Begovic, Giulio Quaggiotto, and Mariela Atanassova from the UNDP’s Strategic Innovation Unit found that organizations experimenting with portfolio approaches tend to sit at a point along a continuum from internal to external coherence in their work (here I am linking their “rough (emergent) framework” to thinking and experience in MEL work).

Back to MEL: if you are able to take time seriously, you may be able to sequence the inquiry, looking first at synergies among one set of parts (e.g. internal coherence within a donor agency or among program components) and, at a later stage, focusing on another set of parts (e.g. coherence among implementing partners of a program), while assessing the causal connection between the two groupings. If you focus on interactions at the societal level without considering whether other interactions are setting the conditions for bigger positive synergies, your analytical framework may be setting the stage for a negative assessment by design. More on sequences, conditions for coherence, and time in blog post 3 of the series.

When partners have an overall narrative (or theory) of how some subset of parts, or all parts, are supposed to make 1 + 1 = 3, it is easier to prioritize and, more generally, to MEL. Examples include this one (h/t Soren Vester Haldrup). In practice, often they don’t. MEL practitioners need to find a way around this by thinking about how they structure the inquiry; more on this next.

2. Articulate Ideas About How & Under What Conditions 1 + 1 = 3 Happens — while operationalizing and legitimizing qualitative thinking

Understanding interconnections — the cause-and-effect relations between different elements of a system (including portfolios and interventions) — is a foundation of systems thinking. The challenge of MELing additive effects is one of complex causality, meaning that it “requires unpacking the assumptions in the black box of what happens when strategies are unleashed” and one intervention or portfolio meets (or fails to meet) another. As such, one of the main challenges is that we need to shift “from quantitative metrics to a qualitative assessment … the integrity (of the system, alignment between parts, synergies, distribution, resilience, etc.)”.

We have to question relationships and challenge the hope that interactions will always add up, as results are likely to be mixed. When, where, and how can different actors and partners be more than the sum of their parts? Under what conditions might adding up create more trade-offs and risks than it is worth? And when might it do harm or trigger backlash? In a nutshell, it’s about identifying patterns of diversity rather than aiming for universal truths or giving up on impact and learning by assuming that no pattern is possible, as Toby Lowe seems to do (cf. Tom Aston’s arguments and, more generally, systems theory on the importance of patterns to navigate emergence).

Source: Humanity United in this M&E Sandbox Webinar

This approach and rationale are not different from what others have written about the importance of making assumptions, causal pathways, and/or theories of change central in MEL, as well as thinking about transferability and comparative analysis as we do so. Proponents of realist evaluation, for example, have long argued that the key question should be “What works, for whom, in what respects, to what extent, in what contexts, and how?”

Assumptions turned slogans and jargon (“a new intervention will turbocharge the work”) need to be explicit and visible to key stakeholders, with different perspectives considered, probed (especially where there might be groupthink and other biases), and contextualized, so they can be tested, supported with evidence, and refined.

But this is not easy. The challenge, in short, is that we are working in a MEL world that is knee-deep in a culture of quantitative inference (with tons of guidance assuming this worldview) that is unfit for answering the questions we have. We need to operationalize and legitimize a culture of qualitative inference (and stop talking past each other with colleagues steeped in a quantitative one).

As Soren Haldrup reminded me, another reason why it’s difficult is that many times we do not have a good enough theory of change — because we don’t know enough about a complex problem or how change would happen. In that case, one is “paving the road while driving”. This is why in the next section I discuss using inductive-deductive approaches as a concrete, proven way in which many social scientists square this circle.

3. Bricolage Heterogeneous Inputs to Make the Most of What We Have — especially in the Global South

Source: Adapted from

How do you build a MEL approach to interaction effects? Recently, Tom Aston and Marina Apgar made the case for being more intentional about bricolage, or recombining heterogeneous, complexity-aware, and qualitative methods. I second this approach: I have combined, for example, insights from process tracing and contribution analysis in a learning review focusing on the additive effects produced by an organization’s collaborative initiative. I mixed in rubrics (examples hopefully coming soon) with case-selection insights from the comparative method, among others, to design a methodology to MEL whether the whole was adding more than the sum of its parts in a series of complex projects.
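To make the rubric idea a bit more concrete, here is a minimal, entirely hypothetical sketch of what a coherence rubric applied across cases might look like. The levels, criteria names, and cases below are my own illustrative assumptions, not the rubric used in the review; in practice the levels and the coding of evidence would be negotiated with stakeholders and grounded in qualitative analysis.

```python
# Hypothetical sketch: a minimal rubric for judging whether a set of
# interventions is adding up to more than the sum of its parts.
# Levels and criteria are illustrative assumptions, not a published rubric.

RUBRIC_LEVELS = {
    1: "Parts work in isolation; no evidence of interaction",
    2: "Parts are aware of each other; ad hoc coordination",
    3: "Parts deliberately align; some shared outcomes",
    4: "Parts reinforce each other; outcomes exceed the sum of parts",
}

def score_case(evidence: dict) -> int:
    """Assign a rubric level to a case from coded qualitative evidence.

    `evidence` maps criteria (booleans coded by the evaluator from
    interviews, documents, etc.) to the evaluator's judgment.
    """
    if evidence.get("synergistic_outcomes"):
        return 4
    if evidence.get("deliberate_alignment"):
        return 3
    if evidence.get("ad_hoc_coordination"):
        return 2
    return 1

# Comparing two made-up cases in a portfolio, comparative-method style
cases = {
    "case_A": {"ad_hoc_coordination": True},
    "case_B": {"deliberate_alignment": True, "synergistic_outcomes": True},
}
scores = {name: score_case(ev) for name, ev in cases.items()}
print(scores)  # {'case_A': 2, 'case_B': 4}
```

The point of a sketch like this is not quantification for its own sake: the rubric makes the qualitative judgment explicit and contestable, and scoring several cases against the same levels is what enables the comparative case selection mentioned above.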

But I’d take Tom and Marina’s proposition further: for MELing interaction effects we need to be pragmatic about how we recombine and make the most not only of the methods we have, but also of the bodies of theory, voices, and perspectives we have.

In the spaces where I’ve worked, assumptions are rarely clear, shared by different stakeholders, or backed up by evidence. Many times, actors working in the same process do not have a shared, or even partly overlapping, narrative for why they are putting all that time into “coordination” (or the elusive quest for harmonization). This is why I have found that an inductive-deductive approach is the best way to make the most of the tacit and emergent knowledge and the range of theories at play. I’ve found this approach helpful for addressing gaps, and some dialogues of the deaf between and within theory and emergent practice, as well.

In terms of theories, systems thinking can be helpful, but it is more so when combined with theories that are specific to a given problem, sector, or geography. Sometimes, combining systems thinking with insights from other approaches is quite instrumental. I already mentioned collective action theory. I also used the Wenger-Trayners’ value creation model. It was great to see Bev Wenger-Trayner spot, in my first post of this series, how the framework anticipated key aspects of coherence through the combination of “Orienting value” (your intervention having an orientation to the wider landscape) and “Strategic value” (improving the quality of conversations with stakeholders in the landscape). I found the work of my colleagues at Grupo Politeia at the University of the State of Santa Catarina (Brazil) helpful for using insights from the co-production literature, especially as we refine it through systems convening for problem-solving, learning, and capacity building. Political economy analysis, among other approaches, also helps. After all, isn’t systems thinking about working in interdisciplinary and cross-sectoral ways?

I find that bricolage among methods, theories, and voices gives me more mileage, especially as we go about figuring out what the key leverage points and tools may be for building knowledge about, and going about, MELing interaction effects.

To be sure, there is an art and craft to connecting the most relevant dots within and across each set of inputs (theory, methods, and voices). Bricolage and inductive-deductive approaches entail tailoring (a measure of which, I think, is inherent to evaluating interactions). But technically, we don’t need to start imagining how to do it from scratch, because there is practice in other fields we can draw on. These blog posts also reflect that it may be possible to accumulate knowledge across different MEL systems and exercises and work towards some shared principles and questions that can begin filling the current blind spot in MEL systems and guidance about how we bridge the widespread assumptions in programming and the OECD-DAC criteria.

Politically, though, bricolaging across theories, methods, and voices may require greater courage, imagination, and resourcing to rethink how we systems-convene around MEL, complexity, and systems thinking.

And the 4th lesson?

For the fourth lesson — taking time seriously in MEL — you’ll have to read the last post of the series. A sneak peek? I think the OECD-DAC’s little-discussed recommendation to be creative about sustainability (i.e. the criterion that is explicitly about time) is a useful building block for revisiting our approach to MELing whether the whole is adding more than the sum of its parts.