Pete, thanks for your interesting thoughts on this issue, one I've been grappling with for the past few years. I agree that the way Logframes are currently used across #globaldev often holds back more adaptive and flexible programming.
On points 1, 2, 4 & 5: I think, however, that Logframes could and should be used much more effectively, and in fact I have worked on a few LFs for more adaptive programmes in DFID recently. At the output level, for example, it is possible to include indicators that measure (and incentivise) adaptation, e.g. "# of documented instances in which high-quality learning products have positively impacted on programme design/implementation"* (*learning products could be policy recommendations, PEA etc.; "positively impacted" would need to be defined in the context of the programme). This is just one example; you could of course also include more qualitative adaptive indicators.
On point 3: I agree, and I think that, in particular for adaptive programmes, it will be important (necessary?) to monitor (and, in DFID terms, score) not only outputs but also outcomes more rigorously. How else do you decide how to adapt as you implement? There are of course challenges around measuring change within short timeframes, but these can be addressed with the right types of indicators.
In a nutshell, I don't think Logframes per se are the main barrier to adaptation; the problem is the way we have used, and are using, them in programmes. It would be good to hear your thoughts, Pete Vowles.