Adaptive development: great progress and some niggles
After 3 years in DFID headquarters, championing adaptive approaches to the delivery of aid and development programmes, I am back in an overseas office, re-immersed in the realities of aid delivery.
One key reflection from my first few months is just how important it is that DFID and others continue our efforts to challenge and disrupt conventional approaches to development programming, which still seem too rigid for the complex world we work in.
I know we have made some great progress in recent years. DFID’s Smart Rules, USAID’s CLA, and the Doing Development Differently ‘movement’ are all a good start. The recent partnership between DFID and USAID on the Global Learning for Adaptive Management is a great illustration of how far the ideas have come and a sign of our continuing commitment to new approaches to learning and innovation. Alan Hudson’s recent blog on adaptive development has loads of practical ideas for the World Bank and donor partners. The recent paper on monitoring incremental and adaptive change in Nigeria and the lessons from 6 case studies add to the growing pot of ideas for, and examples of, more adaptive approaches to development. There is now a strong consensus that context really matters and that linear blueprint approaches have limited value.
Despite all this progress I have a nagging feeling that the way development organisations continue to use the log-frame to manage project performance is holding us back from really thinking differently and politically about development. A few of my personal niggles:
Niggle 1: Even though the terminology has been widely accepted, I think the term is often misused. What are sold as ‘adaptive programmes’ often seem to be based on conventional log-frames that rely on linear indicators and milestones over multiple years. The up-front intent to learn and adapt is often missing.
Niggle 2: These uses of ‘adaptive approaches’ are often based on adjustments to the conventional project parameters of time, cost and quality (i.e. flexible finances, extended time-frames, revised log-frame indicators). This flexibility is certainly useful, enabling partners to respond to the realities of delivery when costs increase or delivery gets hard. But it’s flexible, not adaptive. Does this just create space for weaker assessments at the start, or a greater acceptance of optimism? We want programmes that actively use learning and experience to drive change, not ones that simply reduce their forecasts as they hit reality.
Niggle 3: I have heard a number of DFID partners use our programme scores as a demonstration of their track record, as if achieving an ‘A’ in a DFID Annual Review were a sign of quality performance. I’d worry if partners are starting to feel that their credibility is drawn from this conventional view of success rather than from how they have learnt and adapted. Will partners try to negotiate lower-ambition results so they can overachieve?
Niggle 4: Conventional approaches to log-frames seem to still be the norm, encouraging teams to focus on activities and technical milestones with limited effort to measure the changes in the wider political context in which aid is delivered. I saw a programme review recently that declared success as the partner had delivered all key milestones, but — in discussion — we agreed that there had been limited or no progress on the overall outcome. Were we measuring the wrong things?
Niggle 5: Poorly used log-frames can lead to misleading conclusions. Last week we approved an Annual Review of a cash transfers programme. The outputs had been weighted towards improving the systems to ensure sustainable delivery (rightly so), and considerable progress had been made. However, these improvements had not yet resulted in change for communities themselves. A rigid interpretation of the log-frame would indicate high performance (as the systems reforms were heavily weighted), yet it was clear that the programme was not performing as expected because the benefits were not yet accruing to the ultimate beneficiaries.