Apologies for only just getting to this now. Great blog — and thanks for sparking debate around this.
As many will say, the logframe is not the problem per se. Like many tools it can be used well or badly. But I think it would be disingenuous to suggest it is not part of the problem, in that it has come to reflect and reinforce a way of thinking about development that is deeply problematic.
I think it encourages us to believe that we can predict how change happens and then be held (often financially) accountable for those predictions. But if we take the complexity of many development interventions and contexts seriously, then that is often (not always) an absurd thing to do. Those who work on development programmes day to day know this far better than I do: developing and tracking all these often irrelevant output indicators is a headache, wasting time that people could spend getting their actual job done. We have become obsessed with attribution for small-scale changes (documented in a logframe), probably because the reality — that we are a small cog in a big machine, albeit possibly a helpful one — is hard for us to accept.
Of course there are plenty of tensions here: we rightly want to know what aid money 'buys' us as taxpayers; we should try to understand our role in the big machine and track progress; we don't want aid organisations simply doing as they please with no accountability. But as far as I can see, we've gone far too deep into an audit culture that sets the wrong incentives if we are interested in long-term impact. As it stands, logframes and their indicators often form a kind of parallel reality occupied by development professionals.
What can we do about some of this?
There are some sensible suggestions in the comments already: flexible logframes, focusing more on outcomes than outputs, changing how annual reviews are done to allow for greater acknowledgement of 'context' changes, and so on.
But I think we have to be wary of falling into the same trap, where ultimately 'we' believe we can predict, control, and attribute. The focus on outcomes is a good example: almost by definition, attributing 'our' role in achieving (or not achieving) an outcome is much harder than with an output. We are in danger of replacing a silly standard (outputs) with an unrealistic one (outcomes).
Reclaiming the results focus could mean ditching the logframe for accountability purposes and using it only as a guiding plan; having indicators in logframes only for learning and adaptation; using research and evaluation (with techniques such as outcome harvesting) to understand impact, rather than predicting and measuring it via logframes; using less but better data to monitor programmes; holding organisations accountable for process (relationships, learning, adaptation, strategy) rather than outputs and outcomes (which are often outside their control); and getting out of wholesale programme monitoring to look more at how programmes interrelate across a portfolio, and what that means for what DFID is delivering at scale.
The above suggestions are just that, suggestions, but I do think it's time we rethink how best to encourage a focus on genuine development results. The results we should — and often do — care about are those we appear to pay the least attention to. Adaptive programming has been a big step in the right direction. But let's be clear: adaptive programming cannot simply mean business as usual. It often requires quite significant change in how organisations operate. I think the signals DFID sends on this are really important, which is why it's great to see this blog. There is a danger, which you allude to, that we just see lots of re-labelling of programmes as adaptive, when in reality they've just amended their logframe a few times!