Evaluation and Innovation in a complex world

UNDP Strategic Innovation
4 min read · Dec 7, 2021


By Oscar A Garcia, Director of UNDP’s Independent Evaluation Office

Multilateral development agencies are in a continuous state of reform, needing to remain relevant and effective in addressing development challenges of increasing magnitude and complexity. In response, organizations such as UNDP have been establishing innovation functions to explore alternatives and experiment outside traditional operational frameworks. The vision of the UNDP Strategic Plan[1] is to achieve sustainable development by eradicating poverty, accelerating structural transformations and building resilience to crises and shocks. For this, an evaluation system that is innovative and transformational is critical. Hence, we are now turning away from “linear approaches” to problem-solving, instead integrating “systems thinking” into our programming and evaluation. In doing so, we hope to generate the innovation and meaningful transformations needed for the complex development challenges of our time.

Evaluation sits at the crossroads between the oversight function and research, generating evidence with norms and standards to inform decision-making. The practice of evaluation is inherent to innovation, and vice versa. Evaluation leads to reflections about what works, what does not and how to improve performance.

Discussions on the relationship between innovation and evaluation are not new. Their interplay has produced substantial methodological modernization and diversity, with a vast array of new approaches and methods to meet evolving needs. Evaluators now recognize that traditional evaluation methods have not always been adequate for supporting innovation learning[2]. However, institutional adoption of new methods and approaches in multilateral development cooperation has remained limited. It is therefore useful to consider a few key principles when framing monitoring and evaluation (M&E) systems that can support innovation and problem-solving in complex development settings.

Evaluating for transformation. Balance is key when considering the diverse evaluation and data collection methods and approaches designed to inform adaptation. These methods and approaches usually generate feedback loops and insights for transformational change. Stakeholder participation is an important source of data which can mitigate observational biases. Culturally sensitive participatory approaches can shed light on the complexity and power dynamics embedded in social change. Programme leadership should commission evaluators that are able to use the most appropriate of these diverse methodological approaches.

Evaluators need to further integrate a systems approach in support of innovation. This is evident through the evolution of developmental evaluations and other transformational approaches. In a recent IDEAS publication on ‘Evaluation for Transformational Change’[3], Osvaldo Feinstein called for adopting a dynamic evaluation approach, where evaluations contribute to transformative learning or triple-loop learning, leading to changes in the collective understanding of complex problems. Evaluations supporting transformational change need to be more forward-looking and require managers and practitioners to consider evaluation as part of the larger system being addressed by the programme intervention.

The practice of monitoring and evaluation has been part of a drive for greater accountability and transparency over the use of resources. This point has been raised when calling for a reconsideration of traditional Results Based Management (RBM) approaches. Lately, development practitioners have advocated for a greater focus on adaptive management to promote learning. While RBM theories do not preclude adaptive management and do seek to promote iterative learning, their implementation in development cooperation has shown shortcomings.

Transformative evaluation goes beyond the false dichotomy between learning and accountability. It is critical for independent evaluation to balance its contribution to collective learning and change, with accountability over the use of public resources, considering the strategic context and the authorizing environment[4].

Enhancing the evaluative value of monitoring systems for transformation. Infusing evaluation standards and approaches into monitoring practices helps enhance the monitoring function, while supporting timely learning for adaptation. Strengthening the linkage between monitoring and evaluation — improving the depth of monitoring activities — can help spread the investment required to measure and learn from complex systems. By routinely incorporating evaluative activities into the monitoring of interventions, we strengthen the ability of evaluations to address more fully the questions around impact and trade-offs, and to inform key decision-making that supports transformation.

In this regard, developmental evaluation seeks to embed evaluators throughout the implementation cycle of interventions. This approach encourages closer feedback loops with decision-makers and supports timely learning and adaptation. There is an inherent high level of uncertainty when addressing systems transformation and piloting innovative approaches. To navigate this, managers and evaluators can use new technologies and real-time evidence from these enhanced monitoring systems to shorten feedback loops. This could then help focus evaluations towards more in-depth contribution and impact analysis.

Evaluating large complex systems relies on an “ecosystem of evidence”. The systems approach called for by the SDGs requires significant investment in M&E to better understand the performance, value and trade-offs of interventions, and to support adaptation and scaling up. Evaluation needs to build on existing, reliable, and credible data sources that go beyond organizational boundaries, take different forms, and are produced by other actors. This sort of ecosystem of evidence for research, monitoring, and evaluation activities can yield broader effectiveness and efficiency in data collection and analysis when supported by information and communications technologies.

Embracing change. For UNDP, the path forward is already being forged. The UNDP’s independent evaluation function has been bolstered by a strong evaluation policy, adequate resources, and a mandate of high-quality evaluation for improved decision-making and intervention effectiveness. A systems-thinking and realist approach is part of UNDP’s new evaluation strategy, with an emphasis on innovating participatory approaches to transformational evaluation.

The increasing complexities of our world require M&E systems that evolve in parallel. At the UNDP Independent Evaluation Office we are on a quest to constantly upgrade systems to meet the challenges of our times. We invite you to join us on this journey.

[1] https://www.undp.org/publications/undp-strategic-plan-2022-2025 — UNDP Strategic Plan 2022–2025 | United Nations Development Programme

[2] https://www.researchgate.net/publication/238433161_How_to_-_and_How_Not_to_-_Evaluate_Innovation — Burt Perrin, “How to — and How Not to — Evaluate Innovation”, Evaluation, Sage Publications, January 2002

[3] Rob D. van den Berg, Cristina Magro, and Silvia Salinas Mulder (eds). 2019. Evaluation for Transformational Change: Opportunities and Challenges for the Sustainable Development Goals. IDEAS, Exeter, UK

[4] https://zendaofir.com/guest-post-the-evaluation-governance-implications-of-transformative-evaluation/ — Robert Picciotto, “The evaluation governance implications of transformative evaluation”, February 8, 2021

