Thanks for blogging about this, Pete. I agree wholeheartedly: it’s definitely time to overhaul how programmes are reviewed.
Revamping the logframe is one step. One possible option: in a standard logframe, each output has multiple indicators, each scored on whether its targets have been achieved, and the output is then scored on the number of indicators met. What if we changed this so that indicators instead represent multiple pathways towards achieving an output? This would, first, establish from the outset that not all of these pathways will work. It would also prompt programmes to reflect on the Theory of Change (and its underlying assumptions) and integrate it with the logframe. Under this design, targets/milestones for some indicators would be met while others would not, but that would be written into the design of the programme assessment framework. Outputs would be scored on overall progress, not tied to individual scores against each indicator.
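A toy sketch of the contrast (the pathway names, numbers, and aggregation rule are all invented for illustration): standard scoring counts indicator targets met, while pathway-based scoring asks whether the output itself was reached, accepting that some pathways fail by design.

```python
# Hypothetical illustration: two ways of scoring one output.
# All indicator names and figures below are made up.

def standard_score(indicators):
    # Standard logframe logic: fraction of indicator targets met.
    met = sum(1 for _, target_met, _ in indicators if target_met)
    return met / len(indicators)

def pathway_score(indicators):
    # Pathway logic (one possible rule): indicators are alternative
    # routes to the same output, so score on the most successful one.
    return max(progress for _, _, progress in indicators)

indicators = [
    ("pathway A", True, 1.0),   # worked
    ("pathway B", False, 0.2),  # abandoned early
    ("pathway C", False, 0.6),  # partial progress
]

print(round(standard_score(indicators), 2))  # 0.33 -- reads as failure
print(pathway_score(indicators))             # 1.0  -- output achieved
```

The same evidence yields opposite verdicts: counting indicators penalises the failed pathways even though one of them fully delivered the output, which is exactly the design flaw the pathway framing avoids.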
The other step we need to take is to revamp how logframes are used for assessment. Scoring (as discussed above) is one aspect; the other is the capacity of those doing the scoring. Rigid, linear scoring is obviously easier to undertake, and easier to defend with a rule-book. But if being ‘flexible and adaptive’ is a genuine priority, reviewers will need to stick their necks out and make judgements that go beyond the numbers. This should not just be “allowed”; it should be considered “essential”. A system of peer review could be used to counter biases (in either direction).