Dan Posner on Bottom-Up Accountability in the Uganda Health Sector
CEGA speaks with Affiliate Dan Posner, Professor of International Development in the Department of Political Science at UCLA, about a replication that he and co-authors conducted of a randomized field experiment involving community-based monitoring of public primary health care providers in Uganda.
In 2009, Martina Björkman and Jakob Svensson’s “Power to the People” (P2P) paper presented a randomized field experiment on community-based monitoring of public primary health care providers in Uganda. They found that in villages where localized nongovernmental organizations encouraged communities to be more involved with health service provision, there were large increases in utilization and improved health outcomes including reduced child mortality and increased child weight.
In 2019, Dan Posner, Pia Raffler, and Doug Parkerson conducted an evaluation of a scale-up of the same P2P intervention in Uganda, finding no evidence that citizen monitoring was in fact the channel through which the intervention affected treatment quality. We spoke with coauthor and CEGA affiliate Dan Posner about these findings and the challenges of replicating field experiments in development economics.
CEGA: You recently finished up a major RCT project on bottom-up accountability in the Uganda health sector. Can you briefly tell us how this came about and what you were looking to measure?
DP: Our implementing partner, GOAL Uganda, read Martina Björkman and Jakob Svensson’s 2009 “Power to the People” (P2P) paper and was taken with the findings. When GOAL approached DFID about funding a scaled-up rollout of the P2P intervention, DFID was enthusiastic but asked that GOAL include a rigorous impact evaluation, so GOAL approached Doug Parkerson at IPA about running it. Doug then reached out to Pia and me. We had both been working on similar issues of citizen mobilization and bottom-up accountability and saw it as a terrific opportunity.
We went into the project expecting to find treatment effects similar to those reported in the original P2P paper. Like many in the development world, we were excited by the paper’s findings but wanted to get a better handle on which part of the complex intervention was really doing the work. In designing our evaluation, we therefore put a big emphasis on setting ourselves up to identify the channels through which information provision affected health outcomes. We collected lots of data on intermediate outcomes that captured these channels. We also employed a factorial design to break up the multifaceted P2P intervention into two of its components: 1) the delivery of information to citizens and health providers and the mobilization of these two groups in light of that information, and 2) the holding of interface meetings between citizens and health providers in which they worked together on a joint social contract spelling out specific steps they each could take to improve health services and outcomes. We also collected a lot of data on the characteristics of the health centers and communities in which we worked to better understand the conditions under which the intervention operated more or less strongly.
CEGA: Your replication found different results than the original study. What are the implications of this for the program?
DP: In our paper, we devote a lot of space to discussing why we believe our results differed from those reported in the P2P study. We conclude that the most likely reason lies in the big differences in baseline conditions across the two projects. Health conditions and treatment quality improved markedly in the ten years that separated the two studies (P2P’s baseline data was collected in 2004, whereas ours was collected in 2014). It may be that the positive treatment effects found in P2P were simply more difficult to achieve once baseline levels had risen beyond a certain point. Indeed, when we restrict our sample to health centers whose baseline child mortality rates were within one standard deviation of those reported at baseline in P2P, we do find treatment effects on child mortality (although not on utilization, treatment quality or other health outcomes).
Also consistent with this line of argument, Christensen et al. (2018) find treatment effects on utilization and child mortality (but not treatment quality or other health outcomes) in a P2P-style intervention in Sierra Leone, a place where child mortality rates were even higher than in Uganda at the time of the original P2P study. One possibility is that these interventions work by getting more people to utilize the formal health sector. The implication seems to be that information-based, bottom-up accountability interventions become less effective as baseline health conditions, utilization, and the quality of health service provision improve.
CEGA: How does this experience influence your views on replication?
DP: Replications of influential research findings have been touted in recent years as a key means of addressing problems of research credibility and knowledge accumulation in the social sciences. While we wholeheartedly agree with this position in principle, one of the key takeaways of our project is the difficulty of “replicating” a field experiment. Although we modeled our intervention as closely as possible on P2P, there were in the end too many differences in the study populations, baseline conditions, the details of program implementation, and the way we operationalized variables and modeled treatment effects for our study to be considered a “replication” in the usually understood sense of the term. Our view is that our results should more properly be viewed as what Michael Clemens calls a “robustness test,” which provides an additional data point rather than a test of whether the prior study should be believed.
CEGA: What should other researchers take away from this?
DP: One lesson, as I’ve stressed, is that replication of field experiments is hard — maybe even impossible. Another, more substantive, lesson is that changing bottom-line health outcomes, and mobilizing citizens to apply bottom-up pressure to effect such changes, is extremely difficult, for a number of reasons. Citizens often have limited leverage over health care providers, low effort by health workers may not be the binding constraint, and collective action problems are hard to overcome. As much as we would like to believe that we can “harness transparency and citizen engagement” to improve service delivery, this may not be the most powerful strategy for improving the quality of healthcare and other services in developing country settings.