How do we use M&E as a vehicle for learning?

UNDP Strategic Innovation
6 min read · Jun 21, 2022


By Søren Vester Haldrup

UNDP has set up an M&E Sandbox to nurture and learn from new ways of doing monitoring and evaluation (M&E) that are coherent with the complex nature of the challenges facing the world today. The Sandbox was originally intended as an internal space for experimentation to support M&E innovations in UNDP, but we have decided to open up the space for others to join. This earlier blog post discusses the rationale and focus of the Sandbox.

The initiative has received an overwhelming amount of interest, and last week we held our first open reflection session (with only a few technical hiccups). In this post I provide a short summary of the session, including a link to the recording.

The first session focused on learning. Specifically, it sought to explore this question: how can we get better at learning when engaging with complex development challenges?

We had three great speakers to help us unpack the question: Munyema Hasan from the Open Government Partnership; Toby Lowe from the Centre for Public Impact; and Thomas Aston — an M&E expert (see profiles at the end of this post along with a variety of links/resources that other Sandbox members shared during the session).

Human learning systems — using learning as a management strategy

Toby kicked us off with a talk about Human Learning Systems and how to use learning as a management strategy for achieving outcomes. He walked us through the use of connected learning cycles (between individuals, teams, organizations, and regions) and showcased, with an example from Gateshead Council in the UK, how learning relationships create opportunities to co-design more bespoke public services. Toby also explained some of the nuts and bolts of running learning cycles and made the case for using M&E as a vehicle to develop people's capacity to continuously design and run experiments that improve public services.

Connected learning cycles

Developmental evaluation in the Open Government Partnership

Munyema walked us through the Open Government Partnership's use of developmental evaluation, not only as an approach to accountability towards donors but as a vehicle for continuous organizational learning. The OGP's experience holds important lessons for anyone working to navigate and learn in the context of complexity. One key point is that we need to be able to tolerate a certain amount of discomfort with the process of learning. For instance, if we want to learn and adapt in a timely manner, we need to balance our desire for 'definitive findings' against 'good enough' real-time information. This can be difficult if our organizations, risk calculations, and decision-making structures are obsessed with rigorous, conclusive evidence. Furthermore, we need to find ways of making sense of various forms of evidence (from many different sources) when working on complex issues, and we should rethink the role of evaluators as critical friends (assisting us on a learning journey) rather than as external auditors.

Learning as an output — PERL Nigeria

Tom spoke about the UK-funded Partnership to Engage, Reform and Learn (PERL) programme in Nigeria. This GBP 100 million programme had learning explicitly built in from its inception and introduced a specific learning indicator in an effort to create stronger incentives for learning. The programme was based on a payment-by-results model and linked payments to the learning indicator (with a 20% weight). The initiative illustrates some of the dynamics, (sometimes unanticipated) incentives, and challenges associated with introducing a stronger learning focus, and the difficulty of trying to bridge accountability and learning mechanisms, especially when the achievement of results is linked to payments. Key points from Tom's presentation: learning is about trust rather than control; we need to focus on how to support staff to embrace a 'positive error culture' rather than simply report on successes; and we should emphasize double- and triple-loop learning rather than only single-loop learning.

PERL Flagship Report from ODI
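To make the payment-by-results mechanics concrete, here is a minimal sketch of how a weighted payment calculation might look. Only the 20% weight on the learning indicator comes from Tom's presentation; the indicator names, achievement scores, tranche size, and the simple linear formula are all illustrative assumptions, not PERL's actual scoring rules.

```python
# Illustrative sketch of a payment-by-results calculation where a
# learning indicator carries a 20% weight. Indicator names, scores,
# and the linear weighting are hypothetical; only the 20% figure is
# taken from the PERL example discussed above.

# Each indicator: (weight, achievement score between 0.0 and 1.0)
indicators = {
    "reform_results": (0.80, 0.90),  # hypothetical delivery indicators
    "learning":       (0.20, 0.65),  # the learning indicator (20% weight)
}

max_payment = 1_000_000  # hypothetical payment tranche (GBP)

# Payment is the weighted sum of achievement scores times the tranche.
score = sum(weight * achieved for weight, achieved in indicators.values())
payment = score * max_payment

print(f"Composite score: {score:.2f}")  # 0.80*0.90 + 0.20*0.65 = 0.85
print(f"Payment: GBP {payment:,.0f}")   # GBP 850,000
```

Even in this toy version, the tension Tom described is visible: once learning is scored and priced, the incentive is to report 'learnings' that maximize the score rather than to surface errors, which is exactly why bridging accountability and learning mechanisms is so hard.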

A few closing reflections

For me, this session was immensely useful in a number of respects. First, it reminded me that we are not alone — many organizations are trying hard to get better at learning and we can learn a lot from each other.

Second, it highlighted that learning is perhaps best seen as a continuous process of cycles (existing at various levels: individuals, teams, organization-wide, etc.) rather than as a point-in-time output. In each cycle we implement and experiment, generate (good enough) real-time data, reflect and 'sensemake', and redesign or adapt our efforts. The idea of learning loops is not new, but seeing practical examples of their application is instructive.

Third, the session underlined that we still have a lot to figure out when it comes to balancing and bridging learning and accountability (both of which are important). It is useful to think about how we can make ourselves accountable for learning. However, transplanting ‘learning outputs’ into traditional accountability structures (such as a results framework to which financial incentives are attached) may have a variety of (desirable and less desirable) effects. This poses the question of how we can do this differently (if you have any good ideas or examples please get in touch!).

Lastly, the session leaves me with a question: why do examples such as those presented by Toby, Munyema and Tom tend to be the exception rather than the norm? And how can we work together to tip the balance?

If this post has sparked your interest, I recommend that you watch the full recording of the event here:

If you would like to join the M&E Sandbox and receive invites for upcoming events, please reach out to contact.sandbox@undp.org.

A bit more about the speakers:

Munyema Hasan: Munyema is the Deputy Director at the Open Government Partnership, where she leads the Learning and Innovation program. She is responsible for designing advanced learning approaches and strategically delivering learning programs, and she leads the measurement and evaluation of OGP's contributions and impact.

Toby Lowe: Toby is a Visiting Professor in Public Management with the Centre for Public Impact. He has spent 15 years across the public and voluntary sectors in the UK, in both policy and delivery roles. He is on secondment to CPI from Newcastle Business School, where he has been working alongside public and voluntary sector organizations to develop an alternative paradigm for public management.

Thomas Aston: Tom is an independent consultant specializing in participatory, theory-based, and configurational methods for M&E and learning. Tom has a PhD in Development Planning from UCL and has previously worked with organizations such as CARE International, the Overseas Development Institute, and Oxfam.

Here’s a list of resources shared during the session:
