‘And then a miracle happens’: Unpacking Accountability

Harvard Ash Center
Challenges to Democracy
Sep 30, 2013

This blog was written by Courtney Tolmie (R4D) and originally published for the Transparency and Accountability Initiative in September 2013.

Several years ago, I was attending an advocacy training for partner organizations working on issues of transparency and accountability in Africa. The team leading the training began the session with a slide showing two scientists reviewing a complex econometric equation. At the end of the long line of variables, there was a bubble written into the equation that read ‘And then a miracle happens’.

Those in the room (including me) could relate. Whether implicit or explicit, organizations often develop detailed plans and theories for identifying problems in spending and service delivery and possible solutions to them. And then we get to the point of making the change actually happen — of sharing recommendations and getting them implemented. And we get stuck. We start trying to work through all of the assumptions, the context factors, and our predictions about how context will change over the course of our work to achieve accountability — and eventually many of us end up hoping for the miracle to happen.

And so it was from this viewpoint that we began asking fifteen organizations some of the most interesting and hardest questions: what change were they hoping to make happen with their accountability work? Who were they trying to influence, and what messages and products were they using? Why did they think this would work? And what did they do when it did not seem to work?

Our hope in asking these questions was that we would get answers somewhere between ‘we hoped for a miracle’ and ‘here is exactly how the miracle happened’.

Linking accountability goals and audiences

In accountability, even the most obvious ‘best practices’ are not always universal. One of the first things we asked organizations about was whose behaviour they were trying to change with their work. Without exception, the partners reported wanting a change from above — generally from national-level government officials or institutions. Even when the problem manifested at the local level (like doctors not showing up to clinics), the advocacy goals centered on changes from above.

As such, we expected to see partners similarly directing their advocacy efforts at national-level officials. But this was only the case for about two-thirds of them. This seemed like a disconnect, but for many partners it turned out to be a smart strategy.

For example, one organization in Latin America reported using high-level meetings with the Ministry of Education to ensure uptake of a recommended policy to improve the timeliness of supplies arriving at schools. And it worked. But many similar strategies by other organizations did not. On the other hand, one East African organization reported wanting the Ministry of Education to improve its monitoring of, and accountability actions against, chronically absent teachers. How did they choose to do this? By training students to monitor teacher absenteeism and report incidents in a lockbox they placed in every school.

At first glance, this seems like an interesting accountability strategy, but not one that is likely to change national government behaviour. That is, until you hear about the rest of it. The CSO knew that national government monitors were supposed to be keeping track of absenteeism; however, they did not have the resources to do so consistently. When the government monitors learnt about the student monitors, they connected with the CSO to discuss how they could use the student monitoring data to hold teachers accountable — and this was exactly what the CSO hoped would happen.

Several of the CSOs we spoke with recognized that the obvious path to accountability was unlikely to be the most effective, whether because of constraints, incentives, or context factors.

So what about context?

As with ‘theories of change’, it is hard to ask explicitly about ‘context’. We decided instead to ask why partners chose specific accountability strategies, and why they felt that their work was successful. Two answers came up a number of times:

Political will. This is as difficult to define and to measure as context, but even without prompting, it seems to be important. Almost all of the successful organizations highlighted that they had made decisions about targeting specific government officials or agencies in their advocacy because they felt that there was a real interest in reform or a willingness to consider recommendations.

Informed citizenry. One of the roadblocks reported by several organizations was presenting recommendations or messages about monitoring services to citizens, only to find that many citizens did not even know what services they should be getting. For several organizations, like one in West Africa trying to improve the effectiveness of the capitation grant, this led to a change in strategy: the organization shifted its focus from engaging citizens in monitoring the capitation grant to widely disseminating messages about the rights and regulations relating to the grant, and how those rights should be visible to education beneficiaries.

The sample of CSOs is too small at this stage to make any generalizations about how context should affect advocacy. However, these two factors are worth studying further to see how organizations build them into their work, how they course-correct when they learn something new about context, and whether these strategies help them achieve their goals.

Disseminating and advocating — what seems to work best?

In addition to looking at overall strategies, we asked organizations about their advocacy products and messages. Many reported that they received positive responses by doing a few relatively small things:

Presenting comparable data. One organization in India had worked for many years on the problem of inadequate spending on social sector issues in poorer districts. However, it only started hearing from district officials and citizens when it presented spending in those districts alongside spending in other districts.

Providing concrete recommendations. As seen in the West Africa case, there is value to simply providing information on rights. But most partners reported getting better results when they provided concrete actions that different actors could take to remedy service delivery problems.

Targeting those who value the service or spending — and who have the capacity to take up recommendations. The student monitoring example in East Africa is a perfect example of this. The CSO found monitors who both cared about education and would not need to exert much additional effort (to, say, travel to the school) to undertake the monitoring. Another organization in India that tried to institute monitoring trained unemployed young people and found the model unsustainable — in part because of the effort required and in part because the monitors were not directly affected by absenteeism on a day-to-day basis.

All of the above trends were cited by the organizations being interviewed. We observed one additional trend as interviewers — that the organizations that were most successful were willing to take a ‘trial and error’ approach. Rather than assuming that they had a full understanding of the context and needs from the start of the project, these CSOs developed a theory of change (implicit or explicit), tried the strategy, and used information they learnt along the way about context, opportunities, and challenges to adjust their strategy as needed. In the absence of a silver bullet, this kind of learning approach is one way to continue strengthening transparency and accountability work with partners worldwide.
