The aid community calls for data sharing standards

Technology offers no simple solutions for the humanitarian community when it documents human rights violations. A common framework is needed.

Sigrid Bjerre Andersen
Techfestival 2018
Sep 24, 2018


Satellite image of potential mass graves in Kadugli, Sudan. Images taken 6/17/11 and 7/4/11, shown side-by-side for comparison. Creative Commons NC-ND: Satellite Sentinel Project / Enough Project

In 2011, evidence consistent with mass graves was found in the Sudanese city of Kadugli. The evidence consisted of ‘remote sensing imagery’ or what is more commonly known as satellite images, cross-referenced with eye-witness accounts.

“We knew we had to use satellite images because we weren’t going to get a team there to actually document bodies in the ground. There was no way that was going to happen with the degree of safety that was necessary,” Caitlin Howarth tells me at Techfestival’s Tech in Aid Summit in Copenhagen. She is a researcher at the Harvard Humanitarian Initiative’s Signal Program on Human Security and Technology, which published the images as part of its pilot ‘Satellite Sentinel Project’. The aim was to document the humanitarian crisis in Sudan for the international community and to hold the Sudanese government accountable for its actions.

Her presentation at the summit, on how satellite image technology was used in this context, serves less as an enthusiastic account of how innovation moves the humanitarian agenda forward, and more as a word of caution to the festival audience of aid practitioners, educators, researchers and innovators. We all sit back in silent suspense as we hear her story turn towards the grimmer aspects of modern technological development. “Based on what we saw later on, the government took advantage of the images. New photos indicated that the graves had been dug up again and the bodies had most likely been removed,” Caitlin Howarth says.

So while the images of mass graves were an essential piece of evidence in documenting mass atrocities, the perpetrators — presumably in collaboration with Sudan’s authoritarian government — had also been looking at the images. “Today we don’t know if we would be able to find genetic material to prove that bodies had been buried there,” Caitlin Howarth says and continues, “and if one day, a future government allows for an investigation to happen, would our work actually have helped the prior government in making it impossible to carry out such an investigation?”

How to ‘do good’ in a regulatory vacuum

Caitlin Howarth’s question is one of many highly complex problems posed by speakers and participants at this full-day summit: how should the development and humanitarian sectors deal with the intersection of technology, the digitization of data, human rights and unequal power relations? To name a few:

  • Does a refugee have the right to opt out of an iris scan, and can they opt out without becoming a suspect?
  • To what degree is it appropriate for aid organisations to collect data about the bodies of abused women?
  • Is it actually ever relevant to register informants’ names and other personally identifiable information?
  • How can aid agencies allow tech companies to contribute to their work without them taking over the agenda?

For most of the speakers and participants this Saturday, the answer seems to lie on a more general level than the questions: The aid sector is in a regulatory vacuum at the moment, and in order to move forward, everyone has a responsibility to develop standards and guidelines for how to manage humanitarian aid in the digital era.

Throughout the day, the participants discussed and contributed to what resulted in a set of guiding principles, aiming to cover the diversity of experiences represented at the summit.

One of the lessons learned for the Harvard Humanitarian Initiative in Sudan was the need to develop organizational standards for the collection and use of humanitarian data. But in the absence of global regulatory requirements for the use of information technology in aid, this responsibility currently lies on the shoulders of the individual aid agency.

“Previously,” Caitlin Howarth explains of their learning process, “we set an arbitrary threshold and just went with it. And we were allowed to because the imagery we were getting wasn’t detecting people’s faces. But that doesn’t mean we shouldn’t have taken precautions. We had a theory of how we would ‘do good’ by providing humanitarian information but no theory of how we could do harm. And you need that theory, too. We have to hold ourselves accountable to the fact that when you take an action like we did, there is the risk of retaliatory action and these very authoritarian regimes are smart — and getting smarter — at how they interact with organizations like ours. So the story’s never quite over, and the perpetrators get to play a role in it, too.”

The high cost of high standards

The high level of commitment shown at the summit might leave one with the impression that standards for humanitarian work are in high demand among the people who will eventually have to abide by them. Although this is of course part of the truth — as humanitarian work has for decades been tied to global standards and principles — there is always a price to be paid when introducing new standards in practice. This is no different in an organization like Harvard Humanitarian Initiative, Caitlin Howarth explains: “It’s definitely not the kind of thing you want to prioritize when you’re in the middle of responding to an emergency. And going forward, we’re going to have some fierce fights on issues such as how expensive is it to live up to these requirements — and is that reasonable, especially looking at the costs of adapting them locally?”

The work of documenting crimes against humanity takes place in a context of highly complex political interests, and Caitlin Howarth describes it as a constant balancing act: on the one hand, time-consuming verification, cross-referencing and documentation that leads to further rounds of research and investigation; on the other, the ambition to do all of this quickly enough that reports of attacks, mass violence or other violations against civilians are released as close to real time as possible.

Back in 2011, when the images of mass graves were released, the Signal Program was already actively focused on these sorts of standards and ethics in its work, and it worked carefully to limit its release of images so that they could not be used by armed actors to further the conflict. But doing that in practice was very challenging, and the approach had to be developed as they went along, with much deliberation on how their work might be carried out in a way that would not put civilians in harm’s way. This is where a more standardized approach to the use of technology in the aid sector might serve as a help rather than an obstacle.

Caitlin Howarth also points to the existing standards for aid and humanitarian interventions, and argues that, at bottom, a set of standards for information technology should simply reflect the principles already at work in the development and humanitarian sectors.

“It’s no different from providing food aid or sanitary facilities. You can’t just roll in and say ‘well, we have some new technologies for digging wells, so we’ll dig some wells around here.’ No, you have to work with local communities, understand their needs and all these other requirements. What we’re trying to say now is, if you’re building digital bodies based on vulnerable populations, that should be recognized as a similar category. There are some requirements that you need to plan for, before you go to work,” Caitlin Howarth says.
