The challenges of assessing impact when scaling a humanitarian innovation

Insights from Translators without Borders


Communicating with local communities in Northeast Nigeria as part of Translators without Borders’ Words of Relief project.

#TooToughToScale Blogging Series

By Kathryn Ripley, Operations Director, Elrha.

This is the third blog in our series on scaling humanitarian innovation, in which we unpack some of the most pressing barriers to scale highlighted in our recently published Too Tough to Scale report.

Despite increased investment in the area, scaling humanitarian innovations remains a persistent challenge. In our report, we explore why more innovations aren’t successfully scaling and identify 13 key barriers — from funding to uptake.

In this blog I interview Eric DeLuca, Monitoring, Evaluation and Learning Manager at Translators without Borders (TWB), to explore the barriers they have faced in scaling their translation services. TWB is one of three organisations we’ve supported to scale through Journey to Scale, our three-year initiative.

Several of the significant barriers Eric highlights are ones we delve into in our report under Challenge 4:

There is insufficient evidence of the impact of humanitarian innovations:

Barrier 1: Evaluation of an innovation’s impact is sporadic

Barrier 2: There is a lack of baseline data demonstrating the effectiveness of current practice

Photo credit: Translators without Borders

What’s your role at TWB?

I am the Monitoring, Evaluation, and Learning Manager. My role involves establishing systems for tracking the work that we do, assisting the programme teams with measuring the effectiveness of that work, and coordinating our research initiatives. I started with TWB in January of 2017 — at the same time that our Elrha grant began — so I’ve been fortunate enough to be involved in our Journey to Scale from the beginning. In addition to funding my role, Elrha’s grant allowed us to fill a few other core staff positions. In my experience, this support for core funding has been one of the most essential components of our ability to scale and learn during the process.

In the humanitarian sector, we rarely pay enough attention to the importance of core funding. Project-specific grants, especially in innovation, place tight restrictions on who and what can be considered “direct costs.” Yet so much of our work at TWB happens out of the country or behind the scenes. We need technical systems to support the translation workflow. We need a rigorous financial management and accountability system. We need HR systems and staff to support the 200% growth that we experienced these past two years. We need staff devoted to new programme development and to cultivating relationships with our partners. We need an entire team of people to recruit, train, and manage volunteer linguists.

If I could make one appeal to donors looking for ways to invest in innovation, it would be to appreciate that what happens behind the scenes is almost more important than what happens in the front. Providing more unrestricted support is one of the easiest ways to ensure that innovations can scale.

How did you approach evaluating TWB’s services?

We realised right away that our first and most important challenge to solve was to put language on the humanitarian agenda. Think of it this way — innovators in WASH don’t have to spend time and energy making the argument that clean water is essential because we all accept that it’s necessary for disease prevention and promoting healthy communities. WASH innovators can start every conversation knowing that people already accept the importance of their innovation, and can focus on how exactly their solution can improve the accessibility or quality of clean water. We can’t do that.

At TWB, we have a two-part challenge. We first have to prove there are significant communication issues in humanitarian programming, and that those issues have real effects on key humanitarian outcomes. Luckily, there is a growing number of community engagement actors making this case, so we are not completely alone.

We also have to prove that language is a factor in those problems. There are very few other organisations providing language support in humanitarian crises, and none at the scale that we are. As a result, we don’t really have a cohort of like-minded allies arguing for the importance of language.

We’ve spent, and continue to spend, considerable effort on this. Until we reach a point of momentum where we’re no longer shouting about this alone in the corner, we don’t have the luxury of focusing all our attention on the solutions. To me, this is one of the biggest challenges of innovating in an unestablished space.

Photo credit: Translators without Borders

How did you go about understanding the current problems relating to language in humanitarian programming?

We’ve tried a variety of different methods to better understand language barriers. One common point of learning is that we have to be in each country to have any chance of developing quality evidence. Online surveys, even when we have strong and influential global champions to promote them, usually don’t work. Conducting interviews or collecting data from people remotely is also a challenge. Our most successful formula is a mixed-methods approach: conducting interviews and focus-group discussions with humanitarians and affected people in country, while also running structured comprehension testing with specific demographic and language groups to assess information in various languages and formats.

To send a research team to a country where we don’t have a presence, we need a small amount of seed funding. We also need buy-in from an organisation that can provide basic logistics and security support. Those two factors have often been the biggest barriers to expanding our innovation to new contexts.

Speaking of barriers — I want to take the chance to pick on myself a bit. In the first line of this section I referred to our efforts to better understand “language barriers.” If there’s one key thing I’ve learned over the last two years it’s that language alone is not a barrier. The other barriers to more effective two-way communication involve limited commitment, insufficient resourcing, a lack of technical expertise, and a lack of quality evidence or knowledge to inform key decisions. Those are resource and structural barriers.

As humanitarians, myself included, we need to start seeing language as a solution.

What other challenges did you face in terms of impact evaluation?

One of the biggest challenges that we continue to face is that our innovation does very little direct programming. We do not hand out shelter kits or provide psychosocial support directly to affected communities. We are part of a growing category of organisations that provide support to other humanitarian organisations. This means that our success hinges on our partners succeeding. It also means that we’re multiple steps removed from the end users that our work supports.

This makes it difficult for us to know the impact our work is having or even how many people our language support has reached. We translated over 22 million words last year alone, but we have no easy way to know how many people we reached with those translations.

I recall one grant application that required us to list the precise number of lives that our intervention would directly save. But communication is a hard concept to track, especially in the age of digital communication where much of the influential information is no longer controlled by humanitarians. The challenge is even more difficult when we have to rely on our partners to track these factors for us.

We have started to prioritise targeted and small-scale case studies where we can more easily track the effects of various interventions in a semi-controlled environment, but it’s a laborious and slow process.

How much of the evaluation you’ve done can be generalised, or do you feel a lot of it is very context-specific?

Often our findings are confined to specific geographic locations or specific target populations. For example, we found that translation and audio support improve comprehension of information amongst pregnant and breastfeeding internally displaced women speaking particular languages in Maiduguri. It’s much harder to measure how translation has affected outcomes in the humanitarian response in Nigeria as a whole. We can also demonstrate that our model can reduce translation costs by a factor of three to four compared with commercial alternatives. However, this has to be accompanied by an asterisk, because some of the languages that we work in (like Rohingya or Kanuri) aren’t often commercially available.

Although languages are unique to each context, many of the lessons we have been learning are broadly generalisable. People usually understand information better when it is in their own language. Many women understand international or national languages at lower rates than men.

To our initial surprise, people with low literacy levels commonly ask for information in written or poster format, as it gives them something lasting that they can refer back to later. Pictures are effective for basic information but often struggle to convey complex topics. Multilingual audio delivery mechanisms, such as radios or local loudspeakers, are almost always the most effective tools for mass communication. And accountability to affected people — whether through surveys, in-person discussions, or anonymous feedback mechanisms — is only genuine if these approaches accommodate minority language speakers and people with low literacy.

Photo credit: Translators without Borders

We often hear how innovations have to pivot. How often did you have to fail before you succeeded, and what role did evaluation play in this?

It seems like every step forward has first required three steps back. For example, we have developed a fairly successful model for training interpreters, first piloted in Greece and later expanded to other parts of Europe, Bangladesh, and Nigeria. In the past two years, we’ve provided workshops and trainings to over 1,000 interpreters, translators, and field staff. We’ve measured significant learning across the board, and demand often far outstrips our capacity.

It has been quite a process to get here, though. Our training programme started as an intensive one-month course that was attended by fewer than 10 people. We boiled that down to a three- to four-day course and developed an entire instructor and student toolkit. We found that course was still too technical for many people so we simplified it. We then simplified the simplified version. We then modified parts of it to work with interpreters who could not read, created a really short version for organisations who couldn’t spare essential staff for more than a day, and turned some of the content into self-guided online modules. Just last week we realised the written post-training assessment was hard for many participants in rural Nigeria to understand, so we re-developed it to work as an oral test.

So, it’s a bit hard to talk about the success or the impact of this work without at least acknowledging the iterative nature of it. Maybe we should start to include “# of times we failed” as a standard indicator of success in all of our log frames?

Is there anything you wished you’d known when you started out on the journey to scale?

One thing I think we should move away from is treating innovation programming and evidence gathering as discrete events. Grants are confined to timelines (our most recent Elrha Journey to Scale grant was two years), and so we try to understand how the innovation fits into that same timeline. We ask questions like, “what did we accomplish?” One of the things I wish I had understood sooner, and which I think other innovators would benefit from truly appreciating, is how long the journey to scale can take.

Our theory of change is lofty, and the pieces of evidence we have gathered and are gathering feel like such tiny drops in the ocean. Maybe part of the problem is we are always asked to measure that ocean.

Sometimes it’s helpful to step back and appreciate the tiny, muddy puddle we’ve created.

Thank you, Eric, for those hugely insightful reflections on the challenges of assessing impact.

This learning is so useful for us at Elrha. It helps us to understand the nuances of ‘measuring impact’, and how we can best support innovation teams in the future. It is very clear that a one-size-fits-all approach isn’t appropriate. You highlight how the question ‘how many lives have you changed?’ often can’t be answered, and in any case doesn’t necessarily tell you what you need to know about how beneficial an innovation really is. This is a great insight for us, and particularly timely as we think about which indicators we need to gather from our portfolio of innovation teams to assess how we are doing against our own theory of change.

Some of the other insights you mention, around longer funding cycles, more flexible core funding, and the importance of being able to invest in appropriate systems that enable you to grow, resonate strongly with feedback from other innovation teams. This is one of the topics we are looking to address with donors through our series of round table discussions on the barriers to scale this year.

Read our Too Tough to Scale report to discover the other key barriers to scale that innovators in the humanitarian sector face, or learn about TWB’s scale journey by watching this recap of their project or reading their project profile.


We are Elrha. We are a global charity that finds solutions to complex humanitarian problems through research and innovation.