Measuring Impact for Development Mutants: An Ongoing Conversation
A Google search for the “Fourth Industrial Revolution” produces millions of results in what seems like the blink of an eye. These results include books, articles, academic papers, information about conferences and workshops… the works. As one author puts it, this revolution brings with it technological breakthroughs in artificial intelligence, robotics, the internet of things, nanotechnology, biotechnology, energy storage, and more.
These new technologies are evolving at a rapid pace and will have a strong impact on our societies. For governance systems, they pose two sets of challenges: 1) governments have to keep up with very fast advances in technology and knowledge; and 2) these changes occur simultaneously in multiple areas of the economy and society.
To deal with the uncertainty that these changes are already bringing, experimentation is gaining traction in government agencies around the world as a way to test public policy solutions. As a result, new organisations are emerging that are actively involved in funding and testing technology-led public policy experimentation.
Over the past few months, Arnaldo Pellini, Research Associate at the Overseas Development Institute and founder of Capability, has been working with Pulse Lab Jakarta (PLJ), a data innovation lab established in 2012 as a joint initiative between United Nations Global Pulse and the Ministry of National Development and Planning (Bappenas), to explore different methodological approaches to measuring impact for this particular type of organisation. With the goal of systematising this work in a discussion paper, he has had several discussions on the topic along the way with Diastika Rahwidiati, our deputy head of office. Here we share some snippets from their conversation:
On Defining the Lab
Arnaldo: We have written together about Pulse Lab Jakarta (PLJ) in a book chapter, and now here we are writing about the Lab for a discussion paper. Each time, I have found it very interesting to reflect on the nature of PLJ and whether it falls into some sort of organisational category. It is not a project or programme in the traditional sense. It is not an NGO. It is not a think tank. It is not a company. To me, it defies these categories because it belongs to a new wave of organisations whose emergence is closely linked to the digital revolution. The digital revolution demands different ways of working to solve problems, and new organisations are emerging that can work in those ways: policy hubs, policy labs and data innovation labs. What do you think?
Diastika: I think some of the things that Brian Robertson pointed out in his Holacracy book resonate with us as a lab. In a world where changes happen rapidly and knowledge of those changes is at our fingertips, the conventional “predict and control” model that powers many development organisations and initiatives is woefully ill-equipped to deal with this kind of environment. The whole purpose of setting up PLJ was to continuously experiment with new development insights that can be gained from different sources of data. I feel that this approach of continuous experimentation is not just about how we work, but is encoded into who we are as an organisation: we try things; we ditch things quickly if they don’t work; we learn from our successes and our failures; and we integrate these lessons into how we operate.
In this context, I also like Giulio Quaggiotto’s definition of development mutants: new players on the peripheries of the international development sector that freely borrow across different disciplines and recombine elements, because they are “unfettered by legacy”. Going back to the topic that triggered our conversation, though, this sense that we are more like organisms that continue to evolve than a development initiative with pre-defined end-of-programme outcomes creates an interesting tension in how we define and evaluate the impact of an entity like PLJ.
Arnaldo: Another interesting point about PLJ is the fact that the Lab is under the umbrella of UNDP and receives its core funding from a bilateral donor. Both UNDP and the funder have subscribed to various international agreements on aid effectiveness and apply a results-based framework that defines certain accountabilities for achieving development outcomes. On the other hand, as you mentioned earlier, the work of PLJ involves designing prototypes and testing possible solutions to data and information problems. Some succeed, some do not. I think it is difficult for this way of working to fit into a results-based framework. Duncan Green, in his book How Change Happens, has written that a process of testing, failing, learning, re-testing, and (maybe) succeeding is a nightmare for current donor systems. In my opinion, data innovation and advances in technology will increasingly require these adaptive and experimental approaches. In other words, accountability systems will need to change and adapt to the development of tomorrow.
Diastika: Love Duncan Green. I very much agree with you, Arnaldo. Lest we forget, PLJ is a joint initiative of the United Nations and the Government of Indonesia. So on top of the accountability requirements of the UN and our donors, we also need to meet those of the Government of Indonesia. It is easy to get lost in the myriad of forms, tables and briefs from three different bureaucracies and perhaps complain once in a while about having to meet these, but as Green advocated in his book, it’s also good to take a deep breath and reflect a bit. What is the basic ask here?
My feeling is that the basic ask from all three institutions is whether the resources given to PLJ are being put to good use. The next step would be to discuss with each organisation what “being put to good use” would look like for them, and what evidence they would require to support this claim. For several months now, PLJ has been discussing with the Government of Indonesia a new set of operating guidelines that would allow us to balance the need for accountability with our inherent need for flexibility and freedom to experiment. We’re at a point now where almost everyone involved feels comfortable with these guidelines, but I think one of the biggest lessons for us was that the collaborative (and at times, super-intense) process of developing the operating guidelines was itself invaluable in building mutual trust. Through this process, we clarified assumptions, expectations, ways of working, and most importantly, the needs that drive accountability processes.
So, while I wholeheartedly agree with you that accountability systems will need to change and adapt to the development of tomorrow, I think we also need to get off our ‘innovation’ high horse a bit and recognise that our partners and funders have information needs that must be met today. I think helping them meet those needs would go a long way towards creating a conducive climate for conversations about putting in place accountability measures that are as adaptive as the initiatives they are meant to safeguard.
To measure or not to measure?
Arnaldo: This conversation made me think about Pablo Yanguas’s latest book, Why We Lie about Aid. In it, he mentions Andrew Natsios several times, who served as Administrator of the U.S. Agency for International Development from 2001 to January 2006, and who, in a paper written for the Center for Global Development, wrote: “Those development programmes that are most precisely and easily measured are the least transformational, and those programmes that are most transformational are the least measured.” The conversation we are having about the meaning of the impact of PLJ’s work is part of a wider conversation about whether the measurement of development initiatives in general needs reform, and whether the drive towards measurability and certainty we have today for all sorts of programmes is actually helpful. I think it isn’t.
Diastika: PLJ has written about its current approach to measuring impact, but I think this is a challenge that the Lab will continue to explore and grapple with. My feeling is that maybe some of the questions we are asking about measurement will need to change. For instance, rather than trying to demonstrate how our work has contributed to certain predetermined outcome areas, should we instead reframe this as exploring how we will know whether, as a lab, PLJ is effectively expressing its purpose?
In his new book, Principles-Focused Evaluation, Michael Quinn Patton posits some really interesting ideas. To evaluate the implications of innovations and adaptation in complex systems, Patton recommends focusing on principles as a distinct evaluand. Patton’s premise is that principles provide guidance to decisions, choices and actions — and we can evaluate whether they work, whether they are adhered to, and whether they are achieving what we want them to. PLJ probably needs to explore this further as a team, but some of the concepts in the book might be useful in answering the question of whether PLJ is effectively expressing its purpose as a lab.
This conversation between Arnaldo and Diastika is part of a broader ongoing discussion about how we measure impact as a data innovation lab. Based on reflections alongside our partners and stakeholders over the years, we have come to describe our impact through the lens of operational, methodological and ecosystemic contributions (read more here) rather than categorising our work as direct impact in the common sense of quantitative measurements and predefined rubrics. Our Stories of Change reflections on two of our data analytics tools that have been adopted by the Government of Indonesia are also part of the discussion.
Exciting times lie ahead as we continue to transform as a Lab and experiment in new areas with the needs of our direct stakeholders and the wider public in mind. The conversation on impact goes on; please jump in if you have new insights to share. We’d love to hear from you!
Pulse Lab Jakarta is grateful for the generous support from the Government of Australia.