You can (and should) opt out of TSA’s facial recognition

How the TSA can do a better job of informing passengers of their rights around AI

Nidhi Sinha
Women in Technology
6 min read · Aug 4, 2024


[Photo: aerial view of an airport with many airplanes during daytime. Photo by Skyler Smith on Unsplash]

This week, I tried to take a much-needed vacation. My work brain, which is constantly thinking about AI, privacy, surveillance, and the like, was ready to be switched off so I could enjoy the beautiful sights of San Diego. I managed to not think about work for about 60 minutes, roughly the time it takes to get from my home to the TSA screening line, where I was quickly greeted by a poster informing me that my airport now uses facial recognition technology to verify my identity.

In case you missed it, the U.S. Transportation Security Administration (TSA) has deployed facial recognition technology (FRT) in at least 25 airports nationwide. The agency presents FRT as an optional new tool to enhance security, which is ironic considering the vast security concerns FRT itself introduces. Racial discrimination, heightened scrutiny of queer individuals, and data breaches are far from hypothetical concerns; they are daily realities. None of us is exempt from the risks and harms associated with FRT, especially as it continues to be deployed by airports, police departments, retailers, and beyond.

If you are reading this as someone planning to get on a flight anytime soon in the U.S., this affects you.

My main point is not to rehash all of the pitfalls of FRT, which I've covered in previous posts, but to highlight a much more concerning aspect of TSA's roll-out: the lack of meaningful consent in this supposedly optional system. The missteps started the moment I got in line, when I happened to catch, out of the corner of my eye, a poster so innocuous that you'd miss it if you blinked. They continued right up to the moment I stood in front of the TSA agent, who failed to inform me or any other passenger of our right to opt out. The following points cover what didn't work in the TSA's rollout, and how meaningful consent mechanisms could be built instead.

[Screenshot: blue text on a white screen reading “Participation in TSA facial recognition technology is optional. Your photo is deleted after identity is verified. Advise the officer if you do not want your photo taken. You will not lose your place in line.”]
TSA’s recent announcement on optional participation in FRT. (Source: TSA’s X account)

1. Lack of meaningful information

In the 30 minutes I stood in TSA's notoriously dreadful screening line, I counted two 8x11″ posters announcing the use of FRT at this airport. One was placed at the entrance of the line and was easy to miss for anyone rushing to get in. The other was on a pillar, again easy to miss for anyone navigating the crowd. Meanwhile, TSA's giant TV displays, which showcase which items are prohibited and which must be removed from your luggage before screening, carried none of the information printed on the small physical posters. In my opinion this was a huge miss on TSA's part, as those informational TVs are the most eye-catching displays in the entire line. As it stands, a passenger can easily miss the fact that FRT is being used at all, let alone learn their rights in the process.

2. Lack of public trust

To be completely clear, these scans are not mandatory. You have the right to say no, no questions asked. However, the gap between the rights we are told we have and the rights we can actually exercise is often wide. My partner, a Brown man, expressed discomfort at the possibility of being pulled aside and detained if he spoke up. Despite both TSA's signs and my assurances that he would not face further scrutiny, his trust in the agency was already low given TSA's discriminatory history. TSA needs to do more to build public trust before promising that all we need to do is speak up. The agency claims this AI tool protects our civil liberties, yet it has already failed to treat Black and Brown passengers with dignity. Rolling out the tool without proper transparency or a clear human alternative only erodes that trust further.

3. Lack of verbal notice

I watched person after person walk up to the TSA agent and be wordlessly motioned to stand in front of the camera. I watched a mother juggle three suitcases and a child, with barely a glance at her surroundings, as she scrambled to get through the process as quickly as possible. I watched someone who didn't speak English nod and stand where the agent pointed. Most importantly, I watched the TSA agent fail in his stated duty to verbally inform each passenger of the technology being used and their rights. I cannot fault any of these people for not knowing their rights when they were never empowered to use them in the first place. Although it is more time-consuming, TSA agents should take the time to inform each and every passenger of their rights and give them adequate time to respond.

4. Lack of safe social norms

Let's face it: peer pressure sucks. When I watched the 20+ people in front of me wordlessly stand before the face scanner, while the TSA agent and his peers failed to even tell people what they were standing in front of, I did not particularly want this to be my hill to die on. I held a lot of privilege in that situation that other passengers did not: I have no kids to manage, I speak English, I do not have to reasonably worry about being detained, and most importantly, I walked into the airport already knowing my rights. Even so, I still felt extremely uncomfortable approaching the agent when my turn finally came. The moment we are asking for our rights instead of stating them confidently, we should know that something is amiss. Even the TSA agent paused in clear surprise at this interruption in the flow before manually verifying my ID without question. My partner, who had asked me to go first so he could follow my lead, chose to opt out immediately after me. I can only hope others in line chose the same, or at least noticed the interaction enough to decide for themselves. The solution to this pain point is much harder than the others I've listed: we need a culture change. We all need to advocate for stronger protections as consumers and end users of AI.

The moment we are asking for our rights instead of stating them confidently, we should know that something is amiss.

The aftermath

After landing in San Diego, I told a few friends about my experience. Every single one responded with the same question: “Wait, you can opt out?” Yes, yes you can. There is a whole other post to be written about why you should opt out, and another about why TSA shouldn't be using AI at all, but right now I am primarily concerned with the fact that people don't even know they have the option to say no. TSA failed me and every passenger in that airport as they repeatedly missed opportunities to deploy their AI safely and responsibly. They can, and should, do better. And we need to hold them accountable.

The Algorithmic Justice League is collecting U.S. passengers' experiences with TSA's facial recognition through their TSA Scorecard. The survey is quick, easy, and voluntary. I filled out my report immediately after my experience, and I encourage you to share your story as well if you have one. I am not affiliated with the Algorithmic Justice League, but I do find this type of reporting valuable.


Nidhi Sinha

Working at the intersection of technology and ethics! Check out the documentary I’m directing @watchthewatchersfilm on Instagram.