Intervening intensely: how to estimate dosage in accountability interventions
Many experimental evaluations in transparency, participation and accountability research were always doomed to fail, because the interventions they tested were hopelessly weak. That was one conclusion of a vitally useful 2019 blog post by American University’s Jonathan Fox.
He wrote that “We should not be surprised when ‘low dose’ interventions lead to uninspiring impacts. Let’s keep in mind that when a medicine is administered in a very small dose, to little effect, we still don’t know whether the medicine could work.”
So, how should we think about dosage? This post continues that discussion by offering some initial ideas on a framework for assessing the dosage of transparency, participation and accountability interventions. I then apply it to two high-profile papers on bottom-up accountability, and conclude with some suggestions about how these criteria might be used.
Eight criteria for assessing dosage
Dosage in this field is surely multifaceted. I can think of eight factors so far that might guide our judgment of dosage: duration of the campaign, length of each individual contact, frequency of communication, range of stakeholders brought in, channel of communication, ‘embeddedness’ of the implementer, staffing, and scale.
There would always be a lot of judgment involved in scoring these, of course. For each of the eight criteria, I have suggested a scoring system, with up to four points available for each one. To keep it usable, I have treated each element as equally important; a more sophisticated version might weight each criterion differently.
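To make the framework concrete, here is a minimal sketch in Python of how such scoring might be implemented. The eight criteria, the zero-to-four scale, and the equal default weighting come from the description above; the identifier names, data structures, and the optional weights argument are my own illustrative assumptions, not part of the framework itself.

```python
from __future__ import annotations

from dataclasses import dataclass, field

# The eight dosage criteria named above. The 0-4 scale and the equal
# default weighting follow the framework described in this post;
# everything else here is an illustrative assumption.
CRITERIA = [
    "duration",        # duration of the campaign
    "contact_length",  # length of each individual contact
    "frequency",       # frequency of communication
    "stakeholders",    # range of stakeholders brought in
    "channel",         # channel of communication
    "embeddedness",    # 'embeddedness' of the implementer
    "staffing",        # staffing of the intervention
    "scale",           # scale of the intervention
]

@dataclass
class DosageScore:
    """Per-criterion scores (0-4 points each) for one intervention."""
    intervention: str
    scores: dict[str, int] = field(default_factory=dict)

    def total(self, weights: dict[str, float] | None = None) -> float:
        """Sum the scores, equally weighted by default (maximum 32).

        Passing `weights` gives the 'more sophisticated version' that
        weights each criterion differently.
        """
        w = weights or {c: 1.0 for c in CRITERIA}
        for criterion, score in self.scores.items():
            if criterion not in CRITERIA:
                raise ValueError(f"unknown criterion: {criterion}")
            if not 0 <= score <= 4:
                raise ValueError(f"{criterion} must score 0-4, got {score}")
        return sum(w[c] * self.scores.get(c, 0) for c in CRITERIA)
```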
What dose improves health services?
In 2009, Björkman and Svensson published ‘Power to the People’, an RCT which showed a powerful effect of community-based monitoring on health services and health outcomes in Uganda. In 2017, the same authors reported that the effects had persisted years later.
In 2019, Pia Raffler and colleagues published their own paper studying the ACT Health intervention, which was deliberately based on Power to the People. In a much larger study, they found no such good news.
In explaining the divergence, some advocates of these sorts of interventions wondered whether the two interventions had been implemented differently. Perhaps the Power to the People intervention was administered at a much higher dose?
Not so, at least according to my scoring. ACT Health scored a little better overall, with a longer duration, slightly more contacts, a wider range of stakeholders, and a larger scale. If anything, ACT Health should have shown the more potent results of the two. We have no reason to think that Power to the People was a much more intense intervention. No doubt there were differences, but not of a magnitude that would explain the widely varying results of these two studies. We have to look elsewhere to explain the difference in results.
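By way of illustration, the comparison could be run through the sketch above. The numbers below are hypothetical, chosen only to echo the qualitative comparison just described (ACT Health a little higher on duration, contact frequency, stakeholder range and scale); they are not the actual scores from my assessment.

```python
# Hypothetical scores, not the actual assessment: they simply encode the
# qualitative comparison above, with ACT Health a little ahead overall.
p2p = DosageScore("Power to the People", {
    "duration": 2, "contact_length": 3, "frequency": 2, "stakeholders": 2,
    "channel": 3, "embeddedness": 3, "staffing": 3, "scale": 2,
})
act_health = DosageScore("ACT Health", {
    "duration": 3, "contact_length": 3, "frequency": 3, "stakeholders": 3,
    "channel": 3, "embeddedness": 3, "staffing": 3, "scale": 3,
})
print(p2p.total(), act_health.total())  # 20.0 24.0: comparable doses, ACT Health slightly higher
```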
Using these criteria
Whatever measurement approach we use, we need to make sure we are trialing interventions that really have a chance of working. As Jonathan Fox argues in his post, “Broad-based, constituency-led change initiatives may be more promising than locally-bounded, tool-led approaches.” Researchers and activists may want to use these criteria to gauge dosage when planning new projects. Other researchers might also want to extend this analysis, classifying other papers in this field by the dosage they administered and seeing whether that dosage relates to the impacts they detected.
This post draws on the ideas of Dr. Jonathan Fox, who is founder and director of the Accountability Research Center at American University. I am grateful to Jonathan and to Angela Bailey, also of ARC, for their thoughtful comments; all errors are mine.