Client voices in the humanitarian data stream

Listening and Responding for #BetterAid


When I tell people that I am working on getting aid agencies to listen and respond to their clients, I’m usually confronted by one of two shades of incredulity.

Outsiders, that is, people who don’t work in this challenging business of ours, say words to the effect of:

What?!? You mean you don’t already do this?

Insiders, on the other hand, often roll their eyes and say something like:

But we’ve been talking about this for decades, what’s new?

So let’s get a couple of things straight from the outset:

  1. No, aid agencies do not, as a rule, systematically and deliberately listen to what their clients want and adapt their programming accordingly. There are reasons for this: operating at scale and at speed, in difficult and often dangerous circumstances, with incredibly high stakes (people’s lives, health, well-being), encourages an emphasis on making things happen now and discourages spending time and resources on checking assumptions, listening to changing opinions and adjusting plans and activities. Good intent can and does breed bad habits: at its worst, an isolated and self-referential world of humanitarian decision-making.
  2. But the fact that this persistent problem has been documented over and over again doesn’t mean it is intractable. On the contrary, asking ourselves what is stopping us from listening and responding to our clients helps us to identify what we need to change in our own practices and those of the wider aid business.
Hear no clients, See no clients, Speak to no clients: participant contribution at the IRC Learning Exchange on Client-Responsive Humanitarian Action, April 2016

At the International Rescue Committee (IRC), as part of an ambitious strategy to develop and deliver #BetterAid, we’re looking at how we can amplify clients’ voices and their ability to influence our work. We’d like to know what our clients think of what we’re doing and how we might do it better (or something else entirely if we started down the wrong track in the first place). We think that this will make our interventions more effective and help our clients take control of a future in which they don’t need us. I’d like to share some of the thinking and insights that we’re generating in the hope that this can spur others to do likewise.

So how do you listen to clients in a systematic and deliberate way?

Ten years ago, I was running the IRC’s largest operation in the Democratic Republic of Congo, making imperfect decisions based on fragmented information, under the pressure of massive competing demands. I had multiple sources of information: financial data, procurement data, security updates, field office reports, needs assessments, project implementation reports, project indicator data, contextual updates from a variety of sources, etc. Nowhere did I have a measure of satisfaction or opinion, except possibly implicitly in certain monitoring data, such as whether or not people were attending clinics and schools that we supported (although even in this case there were multiple possible stories as to why people were or were not using a service). There was no client voice in the data.

My only option in such a situation was to make field visits, talk to clients and cross-reference this by asking my staff what they were hearing in their field visits.

“Great!” you might be thinking, “at least this got the aid bureaucrat out into the field and meeting people; this is what it’s all about.”
Well, yes and no:
  • Yes, it is really important to get a feel for what is going on, to bypass layers of management and reporting chains to check what is happening. You can learn really interesting things, you can hear some of the perspectives of the people you seek to serve, you can make changes.
  • However, a system which relies on senior managers (or donors, evaluation teams) getting out to the field to notice things is not likely to generate actionable insights fast enough to make a difference.
  • But most importantly, a field visit anecdote is just a single data point. What is seen or heard on a particular day in a particular location may or may not be representative of clients’ experiences across the wider intervention over a prolonged period. How many of us have made changes to programmes because a donor or senior manager formed an opinion on the basis of an encounter with a client or two? Frustrating, wasn’t it? Because our team knew better. Or did they? In the absence of more systematic data, it’s hard to know.

We set out to tackle the listening problem through improved data

We assumed that if we could provide aggregate data on clients’ opinions and preferences, we would be able to lift clients’ voices into the data set available to key decision-makers: project leader, country manager, regional director, senior executive. In an era of data dashboards (something I wish I’d had 10 years ago in Congo), being able to summarise and aggregate data in a way that flags issues and can be compared with other data sets is incredibly useful.

Client voices are crowded out in an increasingly rich data environment

This environment is dominated by stuff that is easier to count: money, supplies, patient visits, pupil attendance, etc. Our challenge is to render client voice into a data dashboard without losing the important nuances that distinguish counting opinions from counting objects.

We partnered with Ground Truth Solutions for this, adopting and adapting a method that they have drawn from the consumer satisfaction industry and modified for humanitarian and development contexts. The approach uses a minimalist survey, stripping the things we enquire about down to the smallest number of questions possible. This generates actionable insights whilst keeping the surveying burden on clients to a minimum. We also know that the bigger the survey, the more likely its results are to gather dust on a shelf or remain buried on a hard drive.

So less is more…

Project teams survey frequently and after each round discuss results with clients to understand in their words what the data means, why certain findings trend in one direction and what we can do differently to respond. Repetition gives trend data and a sense of whether course corrections made a difference or underlying problems persist.
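By way of illustration only, the trend logic behind repeated survey rounds can be sketched in a few lines of code. The question names, scores and 1–5 scale below are invented for the example, not IRC or Ground Truth Solutions data; a real deployment would use their instruments and scales.

```python
# Hypothetical sketch: turning repeated minimalist survey rounds into trend data.
# All question names and scores below are invented for illustration.
from statistics import mean

# Each round maps a survey question to individual client scores on a 1-5 scale.
rounds = [
    {"treated_with_respect": [4, 5, 3, 4], "aid_meets_needs": [2, 3, 2, 3]},
    {"treated_with_respect": [4, 4, 5, 5], "aid_meets_needs": [3, 3, 4, 3]},
    {"treated_with_respect": [5, 4, 5, 4], "aid_meets_needs": [4, 4, 3, 4]},
]

def trend(survey_rounds, question):
    """Mean score per survey round for one question, oldest round first."""
    return [round(mean(r[question]), 2) for r in survey_rounds]

for question in rounds[0]:
    scores = trend(rounds, question)
    direction = "improving" if scores[-1] > scores[0] else "flat or declining"
    print(question, scores, direction)
```

Even a toy summary like this shows why repetition matters: a single round gives a snapshot, but the sequence of round-by-round means is what tells a team whether a course correction moved the numbers.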

We’ve been testing this approach in South Sudan, Syria, Kenya and Greece and have just published case studies on Community Case Management in South Sudan, Protection and Information Services in South Sudan and Health in Syria as well as the underlying survey and feedback reports. This forms part of our commitment to transparently share progress, warts and all.

…as long as the information is used

So how do you get staff to respond to client feedback?

We’re beginning to generate some interesting insights into how improved information flow can and sometimes does lead to more responsive aid programming, which we’ve summarised in the case studies. We’ve also seen that improved quantity, frequency and presentation of data is not, in itself, sufficient to drive more responsive humanitarian action. This is not the organisational game changer, although it remains an essential ingredient (no client voice data; no client voice data-driven decision-making).

We’re now digging into three major related ways to stimulate more responsive programming, detailed further in this Briefing Paper:

  1. Improved information flow. We’ve observed that information on client voice gets stuck within project teams and does not flow to more senior decision-makers or across to other teams who might find the information useful. This is partly a function of the way we tested our approach: embedded within individual projects. We intend to tap synergies and improve information flow in the next phase of our work by engaging at the level of an entire country office. At the same time, we will be working with country managers to review and improve information flow between different groups of staff, a recommendation that came directly from our field offices at a Learning Exchange on Client-Responsive Humanitarian Action that we hosted.
  2. Institutionalised information review in regular business processes. One of the things that happens when you innovate, pilot and test something is that it tends to sit outside everyday life. This is both a blessing, in that it gets attention, and a curse, in that it is not treated as part of the day job. Decisions are being taken and reviewed all the time within our organisation, on the basis of various information sources. We plan to embed the review, response and interrogation of client voice data in these regular processes of a field office and country office, to make it business as usual to consider this information.
  3. Incentives and motivation. Binding all this together is the need to create the right incentives. We’ve seen managers respond in exemplary ways when faced with negative feedback. And we’ve seen others ignore, delay or otherwise marginalise the information when left to their own devices. No one denies that client voices are important. But reactions upon receiving client voice-related data differ widely. Management staff and processes have a crucial role to play.

At our learning exchange (highlights: #HumVox) there was widespread agreement across agencies that managers play a crucial role in demonstrating the importance of client voice relative to other factors in decision-making. As one participant put it:

“If I am told that client voice is important but rewarded for growing the budget, I draw my own conclusions.”

Data on client voices can be used in a wide range of decision-making settings. Actually using it requires a behavioural nudge or two, with senior (headquarter, regional, country) management leading the way.

Reflecting back on my Congo days, the classic behavioural nudge that I was on the receiving end of was when my regional boss required all country reps to present on three key programme indicators at quarterly regional meetings. How we suffered to put the systems in place to generate the data and analysis needed to avoid delivering shamefully bad presentations in front of our peers. But it worked. Ingredients for success: embedded in already existing management processes; performance related; peer pressure; predictable; repetitive. All of these helped signal that this was important and moved us beyond the financial performance indicators that defined success at that time.

In the coming months we’ll continue to report back on how we translate these observations into client-responsive and adaptive humanitarian aid. If you have the time to read and analyse the briefing paper, case studies and underlying reports, so much the better. We’d love to hear your observations on this blog or any of the papers: as a comment here, via Twitter, or by email to alyoscia.d’onofrio@rescue.org (with or without the apostrophe).


The International Rescue Committee responds to the world’s worst humanitarian crises, helping to restore health, safety, education, economic wellbeing, and power to people devastated by conflict and disaster. Founded in 1933 at the call of Albert Einstein, the IRC is at work in over 40 countries and 26 U.S. cities helping people to survive, reclaim control of their future and strengthen their communities.

