Where’s the blindspot? The problem with iris identification for refugees

Tulsi Parida
4 min read · May 29, 2019


Earlier this month, the UNHCR reported that it had rolled out an iris-identification program (a form of biometric identification) that gave Rohingya refugees a form of identification for the first time. The initiative was met with plenty of positive press, with many applauding IrisGuard, the private entity that developed the technology, for providing new opportunities for refugees. Having an identity will allow refugees access to basic human rights and economic opportunities, and will provide additional data to aid agencies that are tracking population movements.

This sounds like a win-win-win — refugees get an identity, aid agencies get more accurate data, and tech companies get access to data and global recognition. But my alarm bells went off as I recalled Anand Giridharadas’s concept of the win-win fallacy: there are likely to be unintended consequences that disproportionately harm the most vulnerable (in this case, the refugees). My discomfort arose from two blindspots this program seems to have missed: lack of true choice and data security. Overlooking these crucial blindspots could both perpetuate existing human rights inequities and further endanger the lives of those who are tracked.

A lack of true choice

Setting aside the idea that tagging particular populations as separate from the general public is reminiscent of the isolation tactics and human rights atrocities of 1930s Europe, opting into iris identification does not seem to be a choice. If iris identification is the only way refugees get access to aid, economic opportunities, and the like, they have no choice but to consent to the process. Citizens of wealthy countries have other forms of identification — including social security numbers, credit card numbers, and driver’s licenses — and do not have to submit to biometric identification in order to function as citizens of their societies.

Why does this matter? First, by requiring only certain populations to have biometric identification, we are inherently imposing a hierarchy of importance based on the choice and agency that different populations are afforded. We are deeming refugees inherently below the rest of the world’s population, since they require a different kind of identification than everyone else in order to function normally in society. Second, in order to make an informed choice, these individuals must understand what they are consenting to when they agree to have their biometric information collected. Refugees are often not made fully aware of what sharing this information could mean for them, and so many are not truly consenting to being tracked in this way. It could be argued that if they understood the potential consequences of their data getting into the wrong hands, they would not consent to iris identification at all.

Data security concerns

The very nature of biometric data means that it cannot be changed — this makes it both an attractive form of identification and a very dangerous one. Iris scans allow you to definitively verify the identity of an individual, but what happens if this data is hacked? If my credit card information is hacked, I can ask the bank to issue me a new card, but if my biometric information is hacked, I cannot do the same. The consequences of a breach of a biometric database are far greater than those of a breach of any other form of identification.

Precisely because biometric information cannot be changed, if it were to fall into the wrong hands, the consequences would be lifelong and extremely dire. In the case of the Rohingya refugees, for example, these iris scans would feed a biometric database that could facilitate the identification of refugees and their deportation back to Myanmar, where their lives would certainly be in danger. Entire communities and populations could be put at risk if a centralized database tracked their whereabouts.

So do we stop the collection of biometric data altogether? There is no doubt that giving refugees a reliable form of identification has countless positive effects. However, overlooking the blindspots of biometric identification might prevent agencies such as the UNHCR from considering the extra protective measures they must take when dealing with extremely vulnerable populations. This could mean storing the data in a decentralized way and putting strict data-sharing regulations in place. It may also be worth looking into other forms of identification that are not as risky. The unintended consequences of iris-scanning technologies for refugees just might surpass the benefits, unless the right safety and security measures are in place.



Tulsi Parida

I head data solutions for public sector at Visa. I care about intersectional feminism and inclusive tech.