The Facial Recognition Dystopian Nightmare

[Image: A collage of parts of different faces with different features on a black background with blue, yellow, teal, and grey color blocks.]

We’re in the middle of an unprecedented leap in artificial intelligence, metadata processing, machine learning and the duplication of human intellectual power (and caprice) on a massive, efficiency-driven scale. Facial recognition tech (FRT) captures the essence of this contentious and mind-blowing moment. The notorious company Clearview AI says that once it collects 100 billion pictures, it will be able to perform facial recognition worldwide. Police forces want this technology so badly they are practically begging for it. The private sector is using it to micro-target its marketing. Governments and bureaucrats are embracing it so eagerly they could be clowns in Michel Foucault t-shirts, using it to supposedly prevent unemployment fraud and keep track of the receipt of other benefits. Some states are fighting back, urged on by privacy advocates across the political spectrum. As Tonya Riley writes, sometimes the bureaucrats deploy it partially in secret, as they allegedly did in Oregon, which is currently the subject of an ACLU investigation.

But not enough people, activists, or government entities are fighting back. This might be because the technology seems both inevitable and overwhelming. There are no real controls on what the private sector can do to collect information on users who voluntarily allow themselves to be photographed or surveilled, their bank accounts and card purchases dutifully recorded and sold. And if the government is letting the private sector do this, we have reason to suspect it’s because, when the government does it too, the practice will go down more easily with the public.

City mayors can do a lot to protect their residents. I live in Carlsbad and would like to see our city ban the use of this technology by police and tech companies, and sue those who infringe.

The only thing worse than dystopian technology that works is dystopian technology that doesn’t work but whose users think it does. In the midst of the pandemic and the upswing in unemployment claims, Riley says 30 states contracted with the company ID.me to prevent fraud. However, both government and academic researchers have raised serious questions about the accuracy of the technology. Studies have shown that facial recognition algorithms have higher rates of false positives for Asian and Black faces than for white faces. And many people are raising concerns about what private companies like ID.me will do with their data: will they sell it to other companies? Will they give it away freely to the government? “Another issue” with this technology, according to Scientific American, “is who audits ID.me for the security of its applications? While no one is accusing ID.me of bad practices, security researchers are worried about how the company may protect the incredible level of personal information it will end up with.”

The anti-FRT movement did score one recent victory. After championing facial recognition for those filing their income tax returns online, and receiving a torrent of criticism for the announcement, the Internal Revenue Service abruptly backed off. National Review speculated that Congress and some other prominent critical voices made the difference. The political force that made the IRS back off must have been considerable, and must have come from multiple, powerful quarters. IRS Commissioner Charles Rettig, a Trump appointee who refused to disclose a million dollars’ worth of business he’d done with Trump before his nomination, isn’t exactly a champion of transparency. His appointment was political payback for his public defense of Trump’s withholding of his tax returns.

But it would be a mistake to take this victory over the IRS as a sign that the tide is turning against government use of the technology. The use of FRT by law enforcement agencies raises the most serious civil liberties concerns and shows no signs of slowing down. There is currently a campaign driven by the ACLU and Amnesty International to dissuade the New York Police Department from its continued use and broader development of the technology. Amnesty International’s lead researcher on Artificial Intelligence and Human Rights recently said: “Our analysis shows that the NYPD’s use of facial recognition technology helps to reinforce discriminatory policing against minority communities in New York City.” Amnesty likens it to “stop and frisk,” but very high-tech: an important comparison, since stop-and-frisk has been widely found to contribute to racial discrimination. The same minority communities who fell victim to stop-and-frisk are now being surveilled with facial recognition, a technique known to produce false positives that are biased against people of color.

The real problem is ethical, but that doesn’t mean it’s an abstract philosophical argument about morals. Rather, it’s a question of which ethics will guide the policy decisions that will determine if and how we collectively sustain the wider human community and the planet. True leadership communicates, rather than surveils. It gathers information ethically, subject to what the people in question want to disclose.

Jacob Hood of New York University recently wrote a paper about police body cams, biometrics and facial recognition, and how these technologies “reinforce normative understandings of the body and its political functionality,” and do so in a hierarchical way. The police are wearing the cameras, meaning action is being “recorded” from their perspective, a hierarchical perspective backed by a monopoly on violence. The facial data recorded by those body cams, which are extensions of the cops’ physical bodies allowing them a kind of cybernetic empowerment, is then fed into FRT, itself a panoptic form of biopolitical control, using “biometric surveillance as mechanisms for leveraging political power and racial marginalization.”

What we end up with, if we allow law enforcement, social service agencies, and (already) large companies to both experiment with and impose FRT on people, is a dystopian nightmare. It may not be as structured or blatant as Oceania in George Orwell’s 1984, but perhaps it’s even more disturbing because of its fragmented and chaotic nature. Private firms are operating with very few parameters, municipal governments largely get to set their own rules, and cops will always find loopholes, when they even bother to find a legal way to justify their violence; in many cases, they simply openly defy surveillance limits. Hopefully the same pushback that met the IRS’s use of FRT will be replicated against police use.

This article is sponsored by my client, Accurate Append, which supports campaigns, organizations and businesses by connecting them with their supporters and clients and helping to fill the gaps in their data.

Adriel Hampton: Advertising, brand, and SEO
Extra Newsfeed

Marketing strategist working to help nonprofits, PACs, and B2B achieve growth goals. Exploring opportunities in biochar. adrielhampton.com