by Sara Vannini & Ricardo Gomez
Much is said to celebrate how ICTs and (big) data can help development. But what if more data exacerbated the risks that vulnerable populations already face? Many humanitarian and human rights organizations handle digital information about (undocumented) migrants and refugees to help them secure services that might otherwise be difficult or impossible to obtain. Their use of ICTs can help them overcome barriers and work more efficiently. However, it can also significantly expose undocumented people’s data to greater risks, such as security breaches, leaks, hacks, inadvertent disclosure, and court requests.
Humanitarian data practices and their risks for the most vulnerable are not only relevant when operating in conflict areas or in the Global South. When migrants and refugees find their way North, they do not cease to be a vulnerable population. In the U.S., for example, data disclosure can mean detention and deportation for the undocumented population. At the border between the U.S. and Mexico, it can increase vulnerability to human traffickers, smugglers, and drug cartels, in addition to authorities and border patrol.
In this frame, how are humanitarian, immigration, and human rights organizations in the United States, which handle digital information about undocumented migrants, ensuring that they safeguard migrants’ information and privacy?
How are humanitarian organizations managing data?
In a pilot study, we interviewed seven organizations — four advocacy groups and three organizations with ties to higher education — to understand the current practices that U.S. humanitarian organizations employ to protect the privacy of the undocumented individuals they serve. These organizations were selected because they cover the various needs of undocumented people as they navigate the legal, social, and academic systems in American society.
Our findings show that:
● Humanitarian organizations mostly rely on people’s privacy self-management: Organizations apply legal standards in data protection, including obtaining people’s consent before sharing their information with third parties, and making sure people understand the basic privacy settings of social media platforms (e.g., students are taught to adjust their Facebook settings to avoid being tagged at events that could raise suspicion about their immigration status).
● Humanitarian organizations mostly rely on third parties for data security: Technological security measures include the use of internal databases and listservs, data encryption, or reliance on third-party companies. Only one organization mentioned considering moving sensitive data to overseas servers so it would not fall under U.S. jurisdiction.
● Humanitarian organizations frequently rely on low tech methods for privacy protection: Information is collected on paper forms, which can be easily shredded. People are encouraged to wear large stickers at widely photographed events to indicate that their photos are not to be published online.
● Humanitarian organizations generally lack training on data privacy management: None of the organizations we talked to used information privacy standards or provided regular training to their staff. Privacy training happens mostly when the need arises or when required by funders. Otherwise, data privacy management depends on staff members’ personal knowledge. The security of data stored by third-party companies is left to those companies’ standards and security measures.
Can we resist the call of big data?
The findings from our pilot show that, although legal standards were mentioned, humanitarian organizations even in the North have only limited awareness of data security, and they generally lack concrete privacy standards and training. In many cases, organizations leave complex, high-stakes decisions about the protection of personal information to the vulnerable undocumented individuals themselves, contrary to the advice of legal and information scholars: although privacy self-management might resonate with the idea of empowering people to make their own choices, its use is problematic and has been pushed beyond its limits.
The recent Facebook–Cambridge Analytica data scandal has clearly shown how easily personal information can be shared and used without users’ knowledge and consent. In the humanitarian context, the risks of ill-informed or careless privacy consent are further magnified. In particularly vulnerable situations, people might not have the ability, or the privilege, to opt out. Usually, the most vulnerable are also the ones with the least freedom to choose whether or not they consent to providing data.
What are we doing, as scholars and practitioners, to engage with vulnerable people’s data and privacy protection? When we collect data and create technologies, are we taking into consideration the particular needs of the most vulnerable populations? How much data do we really need to collect and store?
Can we collect and store less personal data, rather than more, and still serve the needs of vulnerable populations?
However radical it may sound, some voices are already promoting practices of data reduction. UNHCR recently advocated for accountability for the data we collect globally, highlighting how data protection is not only key to achieving impact, but also vital when breaches could expose people to serious human rights violations and even condemn them to certain death. In her “oath of non-harm for an age of big data,” Virginia Eubanks states that we should not “collect data for data’s sake, nor keep it just because I can,” and that we need to “remember [that] the technologies I design are not aimed at data points, probabilities, or patterns, but at human beings” (p. 213).
Conceptualizing a Global Digital Sanctuary
In Europe, the adoption of the General Data Protection Regulation (GDPR) represents a first step towards the protection of data subjects and the most important change in data privacy regulation in 20 years. Although the GDPR is not entirely novel in regulatory terms, its enforcement means that organizations must adopt responsible data practices they could previously ignore.
While the GDPR only applies to people and companies living and operating in the EU, some regulations in the Global South echo it (e.g., the Philippines’ Data Privacy Act and South Africa’s Protection of Personal Information Act). The U.S., on the other hand, is not pursuing such a path: recent events and ongoing debates, such as those on net neutrality and on the enactment of the CLOUD Act (which gives new tools to both U.S. and foreign police to gather data about people across the globe), suggest that data practices here are far from being subject to regulation and auditing.
What if GDPR-style protections were extended to humanitarian information activities (HIA) internationally? Would international agreements on HIA standards in the context of migration provide a layer of protection for the most vulnerable populations? Would that be enough?
Although a first step in the right direction, we argue that this would not eliminate all the risks at stake. The issue of privacy self-management, for example, would still stand:
How can we obtain truly informed consent from people who are particularly vulnerable and whose lives are under threat? How can we make sure that they have the expertise to fully assess the consequences of opting in, that they understand the legal terms we use — which might belong to legal systems different from those they know — and that they can assess what future uses of their data imply, when all they might be focused on is survival, here and now?
Although behind on GDPR-style practices, the U.S. is starting to develop some interesting operational frameworks in this direction. Academic institutions, for example, are discussing how they can minimize the risks that technological encounters pose to undocumented students, leading the way toward a conceptualization of the web as a digital sanctuary. They advocate for auditing data repositories and the policies of third-party providers, rethinking student tracking protocols, resisting governmental and third-party policies and regulations about student data, and being transparent with students about all the places where their data is stored and how it is used. Most of all, they advocate for reducing the data they collect.
If we are to truly serve the needs of vulnerable populations, we need to conceptualize, design, and implement practices of digital sanctuary in development across borders.
A previous version of this post was published on ICTWorks in May 2018.