Making evidence more accessible in humanitarian work

There’s been a lot of discussion recently about how we need to improve the use of evidence in the humanitarian sector, but there hasn’t been nearly enough discussion about the ways that we can and should do this. At the International Rescue Committee (IRC), we’ve been taking concrete steps toward ensuring that humanitarian action is based on the best available evidence, but it’s been a complicated and bumpy ride. Here are my reflections on the lessons we’ve learned and the work we’ve done to date.

Imagine, for a bit, that you are a humanitarian worker who is trying to design a new project. You are overworked, understaffed, and stressed beyond belief: you have 48 hours to write a proposal for a program (on top of your other work), in a sector that only vaguely relates to your expertise (since trends change faster than skills), in a context that is historically under-researched and heavily context-dependent, and, as icing on the cake, you have spotty internet access (if any!). Then, in the donor’s request for proposals, you see those fateful words:

Please include research evidence that supports the proposal.

This is not a situation that lends itself to success, and yet it, and other situations like it, are incredibly common. How can we expect people to adequately review the evidence base around a program in these conditions? We should be committed to ensuring that our programs are focused on outcomes and based on evidence, but as the example above illustrates, making this happen in our day-to-day reality is incredibly difficult.

Enter the ‘Evidence Map’ approach

In 2014 we formed the ‘Evidence to Action’ team, which was tasked with improving the IRC’s use of evidence in difficult situations. As a first step, we tried to find a way to decrease the time it takes to find and assess research evidence. We discovered an early version of the International Initiative for Impact Evaluation’s (3ie) Evidence Gap Map approach, and started prototyping our own version in Excel.

At its heart, the Evidence Map approach is fairly simple: it is a grid, with outcomes of interest defining the columns (e.g. literacy, numeracy, attendance) and interventions of interest defining the rows (e.g. cash transfers, early childhood nutrition, teacher training). You choose the outcomes and interventions you’re interested in, zoom in on them (or filter them!), and at the intersection you’ll find research relevant to that intervention and outcome. Highlighting one of these cells and pressing Ctrl+T brings up a list of details on the conclusion, the context, how to find the full text, and so on. Now it’s easier than ever to find research evidence relevant to your work!
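To make that structure concrete, here is a minimal sketch of the idea in Python rather than Excel. The interventions, outcomes, and study records below are invented for illustration; this is not the IRC’s actual tooling, just the shape of the lookup an Evidence Map encodes.

```python
# A minimal illustration of the Evidence Map idea: rows are interventions,
# columns are outcomes, and each cell holds the studies relevant to that pair.
# All names and records here are made up for illustration.

evidence_map = {
    ("cash transfers", "attendance"): [
        {"study": "Example systematic review (2015)",
         "conclusion": "Positive effect on attendance in most included studies",
         "context": "Low-income, stable settings",
         "full_text": "https://example.org/review-123"},
    ],
    ("teacher training", "literacy"): [],  # an empty cell: no studies found
}

def lookup(intervention: str, outcome: str):
    """Return the studies filed under a given intervention/outcome cell."""
    return evidence_map.get((intervention, outcome), [])

for record in lookup("cash transfers", "attendance"):
    print(record["study"], "-", record["conclusion"])
```

The empty cell is the other half of the story: a blank intersection is an evidence gap, which is exactly what the ‘gap map’ framing is meant to surface.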

It’s like the nerdiest infomercial ever!

On their own, the maps are not enough: we need to consider the ways that people use evidence and the constraints they face

You may notice that our Evidence Maps are pretty rough looking — this is because they are, and continue to be, working prototypes.

I’m really not joking when I say it’s a working prototype…

This is purposeful. The IRC Maps have evolved, and continue to evolve, alongside our attempts to establish a culture of evidence in our work. Over the three years we’ve been working on these maps, we have consistently asked our staff how we could adapt them for effective use in our programs, while simultaneously using them as a ‘foil’ to explore how research evidence can be integrated more deeply into our organizational culture.

We gained many insights from this process, but I want to focus on two big ones that really impacted the way we designed our maps: the importance of context, and the importance of specific conclusions.

How do you know if that research applies to you?

When we originally showed the maps to our staff, and asked how we could make this information more actionable, the #1 response was ‘context’. Context is important for all practitioners, but as a humanitarian organization we have some additional barriers due to the small number of evaluations in our contexts.

Of the over 4,000 impact evaluations that have been published in developing contexts, only around 100 of them were conducted in humanitarian contexts.

An impact evaluation in a stable development context might not apply in a humanitarian emergency, in a refugee camp, and so on. Even an evaluation in one stable context may not apply in a different stable context. Idiosyncratic realities of politics, economics, gender, the availability of basic services, and so on can greatly affect the effectiveness of programs. Even within a single program in a single place, an intervention may work well for some people but be ineffective, or even harmful, for their neighbors.

Unfortunately, context is sometimes underplayed in impact research and synthesis studies

As a quick note, for various strategic and technical reasons that we are continuing to evaluate, all of the research evidence in the Evidence Maps (ours, as well as other organizations’) comes from systematic reviews and impact evaluations. At their very best, reviews and evaluations can do wonders for our understanding of the nuances of context, but not all of them live up to this ideal.

To account for this problem, we went through and pulled out every piece of contextual information that our staff said was important and added it to the details of the map. Instead of guessing what context a conclusion applies to, or having to dig into the full text of the study, I can now very quickly see what the population is and what countries the underlying studies came from, along with special tags marking whether the conclusion applies to an emergency context or to refugees, whether the authors disaggregated results by gender or did any gender analysis (which, it turns out, is unfortunately rare), and whether they did any cost analysis (also unfortunately rare, but this should change soon!). If the authors found different results for the same topic in different contexts, that information is split out: if we know that an intervention works in refugee camp settings but not in urban IDP environments, you can see that here.
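As a hypothetical sketch of what one of these enriched records might hold (the field names and example values are mine, not the IRC’s actual schema), the contextual tags described above could look something like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the contextual tags attached to a single conclusion.
# Field names and example values are invented for illustration only.

@dataclass
class ConclusionRecord:
    intervention: str
    outcome: str
    conclusion: str
    population: str
    countries: list = field(default_factory=list)
    emergency_context: bool = False
    refugee_population: bool = False
    gender_analysis: bool = False   # did the authors disaggregate by gender?
    cost_analysis: bool = False     # did the authors report any costing data?

example = ConclusionRecord(
    intervention="cash transfers",
    outcome="attendance",
    conclusion="Different results in refugee camp and urban IDP settings",
    population="School-age children",
    countries=["Country A", "Country B"],
    emergency_context=True,
    refugee_population=True,
)
```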

Context on its own is not enough, though, which brings me to the second point:

What did the authors actually investigate?

I’ve often heard proponents of systematic reviews state:

You can compare apples and oranges if you’re investigating fruit

I agree with this, but if you’re investigating fruit and drawing conclusions, tell us which fruit you used in your analysis! If a review investigated different proxies for a single topic (an apple and an orange), we separate that out so that you can get the precise conclusion you are looking for. That way, you never need to worry about whether we misapplied a given conclusion: if we marked that ‘X had an impact on the fruit outcome’ in our map, and you’re dubious about that, you can see exactly how the study measured the fruit outcome. Did they use the apple proxy, or the orange proxy? This is powerful because specificity matters: synthesizing, finding abstractions and commonalities, is important too, but it should not come at the expense of specificity.

To bring it back to real examples, when I see that an intervention has an impact on “education outcomes”, I want to know what those outcomes are — if they’re all about enrollment, and enrollment isn’t a problem in my context, that’s important. If a review says that something has an impact on “learning outcomes” I want to know what they’re talking about — does that apply to literacy scores, language skills, grade progression? These nuances are important, because the problems we seek to address are nuanced.
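As an illustrative sketch (with invented results), here is the same idea in data terms: a synthesized ‘learning outcomes’ finding split out by the measure each underlying study actually used, so you can see whether a conclusion rests on literacy scores, language skills, or grade progression.

```python
# Illustration only: one synthesized "learning outcomes" finding split out
# by the specific measure each underlying study actually used.
# The results shown are invented, not real findings.
learning_outcomes_finding = {
    "intervention": "teacher training",
    "synthesized_outcome": "learning outcomes",
    "by_measure": {
        "literacy scores": "positive effect",
        "language skills": "no measurable effect",
        "grade progression": "not measured in included studies",
    },
}

for measure, result in learning_outcomes_finding["by_measure"].items():
    print(f"{measure}: {result}")
```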

So what’s next?

Evidence is complicated, and we cannot go it alone — which is why, at the 2016 What Works Global Summit, the IRC, 3ie, DfID (who kindly funded our maps!), the World Bank, SightSavers, and the South African Government will present together on each of our takes on Evidence Maps, and will forge a path for collaboration.

Evidence is really complicated, and multiple approaches are needed — which is why we’re also going to present our newest tool, the Interactive Outcomes and Evidence Framework (iOEF). Designed to make all of our effective tools easier to use, the iOEF takes data from the evidence maps, makes it a little more user friendly, and integrates it into a framework for understanding how to design evidence-based programs. The iOEF is cool, fun, and isn’t based on Excel. So what are you waiting for? Try it out!

Evidence is very complicated (notice a theme?), and there are still fundamental questions about the use of evidence that we need to address — I mentioned earlier that we’ve used the Maps as a ‘foil’ to learn more about how evidence can, and should, be used in humanitarian practice. This work is ongoing. In addition to all of the complex barriers to use of evidence that exist, we also need to consider how it can be used alongside things like cost data, client needs and preferences, gender analysis, context analysis, measurement, and innovation.

While it is complicated and difficult, as a sector we are making great progress in improving the effectiveness of humanitarian work. We need to continue asking the hard questions, prototyping new solutions, and ensuring that our efforts turn into real positive change in the lives of people. For this to work, we need your thoughts, suggestions, and criticism — so please reach out to us at OEF@Rescue.org to let us know what you think about our evidence work, Evidence Maps, and Interactive OEF!


The International Rescue Committee responds to the world’s worst humanitarian crises, helping to restore health, safety, education, economic wellbeing, and power to people devastated by conflict and disaster. Founded in 1933 at the call of Albert Einstein, the IRC is at work in over 40 countries and 26 U.S. cities helping people to survive, reclaim control of their future and strengthen their communities.

