Getting intentional about M&E: choosing suitable approaches for adaptive programmes
Does the choice of monitoring and evaluation (M&E) approaches and tools matter for adaptive programmes?
In short, yes: monitoring, evaluation and learning (MEL) and adaptive management (AM) are intertwined. While programme monitoring data and evaluation results are not the only sources of evidence that programmes use for learning and iteration, they are often amongst the most important ones — or at least they should be.
Selecting what type of information to collect and analyse — and how — is critical for any type of programme. However, what AM especially focuses on is intentionally building in opportunities for structured and collective reflection, ongoing and real-time learning, course correction and decision-making in order to improve effectiveness. This means that adaptive programmes require intentional M&E design from project inception that is geared towards both learning and accountability, not just the latter (which is unfortunately still often the case in many programmes).
Each and every M&E tool or approach has the potential to support learning and adaptation — but in different ways and at different stages of a programme. For example, some tools can support strategic planning and diagnosis, especially during design and inception, while others can help analyse causal relationships at specific points in a programme.
So then, how do you choose the most appropriate tools and approaches for (adaptive) programmes?
I wish I could say that this is straightforward but, unfortunately, it is not. First of all, there is no shortage of potential MEL tools and approaches, or of toolkits and guidance notes on how to use them (see BetterEvaluation, the Goldilocks Toolkit and Bond’s Evaluation Methods Tool). The sheer number of tools and guidance notes can be overwhelming.
On top of this, there is no one ‘right’ choice — it’s more a question of usefulness, appropriateness and whether certain requirements are in place. The choice of method(s) depends on:
1. Evidence and data needs. What are the programme’s learning priorities, evaluation questions and accountability requirements? What kind of data is needed to make evidence-informed adaptations?
2. Programme attributes. What type of programme is it? How long does it run for? Do programme attributes align with the technical requirements of the specific approach?
3. Resources available. What resources (human and financial) are available for data collection, analysis, learning and decision-making?
The key is to tailor the approach for its intended purpose. Ultimately, the choice of MEL method(s) requires judgement and, sadly, no toolkit or guidance note can make a decision for a programme.
However, it is helpful to get an idea of what is already out there and what others have tried. Thus, for a recently published GLAM Working Paper, we selected a small set of tools and approaches that either have been used in adaptive programmes or that we think could be used more widely.
We examined how these could be useful for adaptive programmes characterised by complexity — for example, programmes that:
1. are innovative (e.g. pilot programmes with limited evidence of what works),
2. have uncertain or contested change pathways, and
3. operate in fragile or unstable settings (e.g. post-conflict).
Does the ‘right’ choice guarantee adaptation?
Of course not. Whether learning and adaptation happen depends on factors other than the choice of M&E method(s), including:
· How approaches are applied in practice and tailored for the intended purposes,
· How data collection and analysis are designed to support reflection and learning, and
· How this analysis and learning supports operational and/or strategic decision-making in programmes.
Whether, and to what degree, this happens depends in turn on several other factors within the enabling environment, many of which are discussed in a previous GLAM paper. Thus, testing and building the body of evidence on which approaches (or combination of approaches) can be especially useful for different types of adaptive programmes is important but not enough. We also need greater understanding of how factors in the enabling environment facilitate or hinder evaluative thinking, evidence-informed decision-making and ongoing programme iteration.
Author: Tiina Pasanen is a Research Fellow at the Overseas Development Institute (ODI), and specialises in monitoring, evaluation and learning (MEL) methods and practices.