Informed Decision Making in Monitoring and Evaluating

James Wilkinson
The Challenges Group
4 min read · Mar 5, 2019

Previously, Challenges Zambia’s James Wilkinson explored some of the issues surrounding robust monitoring and evaluation. Here, he argues that good M&E should be built in from the very start of a project.

My last article focused on principles we can follow to avoid being stung by unintended consequences. This is critical in a sector where negative consequences have significant multiplier effects and can seriously damage livelihoods.

This week, I’m looking at some specific practices you can employ when estimating the impact that will later be evaluated. Too often, projects are designed and valued in isolation rather than comparatively. Meanwhile, estimates of a project’s value can rest on big assumptions that, in reality, are likely to have a much wider statistical distribution, and can be very difficult to measure. This applies to all sorts of metrics, be it incomes, jobs, CO2 emissions, financial service access or something else.

Designing a project and the outcomes it seeks will determine the direction of the project

It’s in the design!

However, with a bit of planning and intelligent project design, many of the pitfalls surrounding poor-quality monitoring and evaluation can be avoided. Here are five “states of being” that can help when designing a project:

Be comparative

This applies both to the tools you use (data collection and analysis to test assumptions) and to the process (how you construct your theory of change).

Be Bayesian

Don’t just assume you’re right. Start from an existing assumption, estimate the likelihood that it is true, and then test that assumption with data.
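To make this concrete, here is a minimal sketch of treating a project assumption as a prior to be updated rather than a fact. The scenario and numbers are entirely hypothetical (an assumed 70% adoption rate among pilot participants); the point is the Beta-Binomial update, a standard way to revise a percentage estimate as evidence arrives.

```python
# A hedged sketch: encode an assumed 70% adoption rate as a Beta prior,
# then update it with (hypothetical) pilot data instead of taking it on faith.

def update_beta(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: returns the posterior (a, b)."""
    return prior_a + successes, prior_b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Prior roughly centred on 70% adoption: Beta(7, 3) has mean 0.7.
a, b = 7, 3
print(f"Prior adoption estimate: {beta_mean(a, b):.1%}")

# Hypothetical pilot evidence: 12 of 30 participants adopted the tool.
a, b = update_beta(a, b, successes=12, failures=18)
print(f"Posterior adoption estimate: {beta_mean(a, b):.1%}")
```

The design choice here is that the original 70% figure is not discarded, it is weighed against the pilot data; with more evidence, the data dominates the prior.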

Be inclusive

Get representation from across the project team. Everyone in the market or system can help us better understand impact. This avoids optimising locally while failing to deliver benefit to the wider system or community.

Be diligent

Don’t shortcut! Budget for testing your assumptions in the design of the project, and don’t commit to delivery until this is done. M&E shouldn’t just be an ‘add-on’ at the end of the project — for it to truly add value it should be embedded at the start.

Be practical

Ask yourself, can the metrics you’re using be accurately and independently measured in reality? What is feasible, and what can that tell you about the project?

Accounting for cost

For frameworks you can use to assess impact, a recent HBR piece on the deeper considerations surrounding socioeconomic impact is well worth reading. It echoes many of the sentiments above and adds detail in areas such as Social Return on Investment (SROI).

In particular, we have often observed that private and public organisations focus on the impact and revenue figures (the exciting ones) without truly accounting for cost and duration. Although capital and operational expenditure are calculated in value-for-money assessments, the effect of a project taking five years rather than three is rarely considered. As a result, the “impact per dollar spent” is often ignored.

It’s easy to talk about big ‘Impact’ metrics without considering the cost

One way to think about this in project design is to consider the “cost of delay”, and to ask the following questions:

  • How much value added to the market is delayed if you choose project X over project Y?
  • How much is ROIC or SROI delayed if a project slips by X, Y or Z months?
  • How confident are we in these estimates (risk adjustment)?

(For a good read on this subject, have a look at what Charles Lambdin says on this in his article “Estimating Cost of Delay”.)
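The questions above can be sketched as a small comparative calculation. Every figure below is an illustrative assumption, not data from any real programme; the point is simply that dividing impact by total spend over the project's duration, and risk-adjusting the value forgone by delay, changes how two projects compare.

```python
# A hedged sketch of "impact per dollar" and cost of delay for two
# hypothetical projects. All numbers are made up for illustration.

def impact_per_dollar(total_impact, capex, opex_per_year, years):
    """Total impact divided by total spend over the project's life."""
    total_cost = capex + opex_per_year * years
    return total_impact / total_cost

def cost_of_delay(monthly_value, months_delayed, confidence=1.0):
    """Value forgone by delay, risk-adjusted by our confidence (0 to 1)."""
    return monthly_value * months_delayed * confidence

# Project X: bigger headline impact, but longer and costlier.
x = impact_per_dollar(total_impact=500_000, capex=150_000,
                      opex_per_year=40_000, years=5)
# Project Y: smaller headline impact, delivered faster and cheaper.
y = impact_per_dollar(total_impact=300_000, capex=80_000,
                      opex_per_year=30_000, years=3)
print(f"Impact per dollar — X: {x:.2f}, Y: {y:.2f}")

# If choosing X delays Y's value by 24 months, and we are only 60%
# confident in Y's assumed $10k/month of value added:
print(f"Risk-adjusted cost of delay: ${cost_of_delay(10_000, 24, 0.6):,.0f}")
```

In this made-up comparison the project with the bigger headline impact delivers less impact per dollar once duration and running costs are included, which is exactly the effect the paragraph above describes.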

And finally…

Our last takeaway is to build an approach that you can use across projects, programmes and offices. If you can take a portfolio approach to estimating value, you stand a much better chance of evaluating projects clearly and accurately, and of making quicker decisions about whether or not to continue with their implementation. These approaches shouldn’t be anomalies in your organisation. Instead, we would advocate assessing a larger number of potential ideas in good detail, so that you can evaluate progress early and decide which work to continue.

We see this approach as similar in principle to that of a venture capitalist or innovation lab: it accepts that many ideas may fail initially, but if we can learn from this early on, we can focus our effort on the most impactful projects. We believe the key to being able to do this lies in how we design our programmes and initiatives.
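A portfolio-style triage of this kind can be sketched very simply. The candidate projects and figures below are hypothetical; the idea is just to rank many ideas comparatively on a risk-adjusted estimate and shortlist several for early testing, rather than committing to one project valued in isolation.

```python
# A minimal sketch of a VC-style portfolio triage across candidate
# projects. Names and numbers are hypothetical illustrations.

candidates = [
    # (name, estimated impact per dollar, confidence in that estimate)
    ("agri-inputs pilot",      1.8, 0.5),
    ("mobile-money training",  1.2, 0.9),
    ("cold-chain upgrade",     2.5, 0.3),
    ("record-keeping tool",    1.5, 0.7),
]

# Risk-adjust each estimate, rank the portfolio comparatively, and keep
# the ideas that clear a threshold for an early test phase.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
shortlist = [name for name, ipd, conf in ranked if ipd * conf >= 1.0]

for name, ipd, conf in ranked:
    print(f"{name}: risk-adjusted impact per dollar = {ipd * conf:.2f}")
print("Shortlist for early testing:", shortlist)
```

Note that the idea with the biggest headline estimate (2.5) drops to the bottom once confidence is applied, which mirrors the article's point about testing assumptions before committing to delivery.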


Working at Brink as part of DFID’s Frontier Technology Livestreaming Programme. Pragmatic about tech, fascinated by behaviours.