Selling ClearviewCombat Facial Recognition to Israel Would Be a Grave Mistake
Note: This article originally appeared in Charged Affairs.
This article is part of a special series, “Predictions & Predicaments.” It should be read as if written sometime in the year 2024.
Last week, the US Department of State notified Congress of its intent to license General Atomics to export ClearviewCombat, its drone-compatible facial recognition program, to Israel, the first sale of the controversial technology to a foreign government. Allowing the sale to go forward would facilitate human rights abuses by Israeli security forces, make the United States complicit in those abuses, and irreversibly open a Pandora’s box of dangerous artificial intelligence (AI) technology. Congress must block the sale, and the State Department should place an immediate moratorium on exports of the technology and work with global powers to reinvigorate the Global AI Compact.
Clearview allows users to photograph a person, upload the photo to the application, and instantly identify them using a database of billions of images scraped from social media sites like Facebook and YouTube. The technology first hit US markets in 2019 under the startup Clearview AI and has since been used by the US Department of Defense, the US Department of Homeland Security, the FBI, and hundreds of state and local police forces. Agencies claim the program has been invaluable in solving crimes and identifying suspects. In March of last year, Clearview AI was acquired by drone manufacturer General Atomics, which adapted the technology for military drone use. Today, General Atomics’ ClearviewCombat program allows military users to instantaneously identify and lethally target suspected enemy combatants.
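Mechanically, systems of this kind typically reduce each face to a numerical “embedding” and return the nearest stored embedding above a similarity threshold. The Python sketch below illustrates that generic approach; it is not Clearview’s actual code, and every name in it, including the stubbed embed function, is hypothetical.

```python
import numpy as np

# Hypothetical sketch of embedding-based face matching. embed() stands in
# for a real face-embedding model; here it is a deterministic stub so the
# example runs end to end.

EMBED_DIM = 128

def embed(photo: bytes) -> np.ndarray:
    """Map a photo to a unit-length face embedding (stubbed with a hash-seeded vector)."""
    rng = np.random.default_rng(abs(hash(photo)) % (2**32))
    vec = rng.standard_normal(EMBED_DIM)
    return vec / np.linalg.norm(vec)

# A scraped "database" mapping identities to stored embeddings.
database = {name: embed(name.encode()) for name in ("alice", "bob", "carol")}

def identify(photo: bytes, threshold: float = 0.6) -> str | None:
    """Return the closest identity if its cosine similarity clears the threshold."""
    query = embed(photo)
    best = max(database, key=lambda name: float(query @ database[name]))
    return best if float(query @ database[best]) >= threshold else None

print(identify(b"alice"))  # -> "alice" (the stub is deterministic within a run)
```

The threshold is the critical design choice: lower it and the system returns more matches, including more false ones, which is where the accuracy problems discussed below begin.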
The technology is extremely problematic in anyone’s hands. Law enforcement use of it is rife with privacy and due process violations, and despite multiple civil liberties lawsuits in the United States, regulation lags behind innovation. Clearview is also vulnerable to misuse, as shown by recent cases of hate crimes, vigilante justice, and sexual assault committed by off-duty police officers using the technology. Pairing facial recognition with lethal force, as ClearviewCombat does, makes the picture bleaker still, allowing military forces to surveil and summarily execute anyone who matches their definition of a combatant. The technology is also just a breath away from full autonomy, popularly known as “killer robots,” in which machines prosecute targets without human involvement. According to Human Rights Watch, such fully autonomous weapons systems would be incapable of meeting the standards of international humanitarian law.
ClearviewCombat would be especially dangerous in the hands of Israel. Israel’s conflicts with Hamas and Hezbollah have intensified since the failure of President Donald Trump’s so-called “Deal of the Century” and amid heightened tensions between the United States and Iran, which have made Israel and Palestine the site of an increasingly brutal proxy war. Israeli operations have historically been characterized by significant civilian casualties and war crimes, including last month’s killing of over 50 unarmed protesters at the Gaza-Israel border and the targeted killing of two Palestinian journalists earlier this year by armed drone swarms.
The Pentagon argues that ClearviewCombat would aid Israel in fighting terrorism while reducing civilian casualties by making its targeting more precise. But if past operations are any indication, precision is not the problem; the targets are. Rather than reducing civilian casualties, ClearviewCombat is likely to facilitate continued human rights abuses and war crimes by allowing Israeli intelligence to easily identify and lethally target perceived threats, including protesters, journalists, and civilian supporters of Hamas and Hezbollah. And while Israel currently states that all its autonomous drones operate with a “man on the loop” (a human in a supervisory role who can intervene at any stage), full lethal autonomy is likely not far off.
The technology itself is also vulnerable to deadly errors. Artificial intelligence is only as unbiased as its underlying algorithms and training data, which reflect the biases of the humans who develop them, as evidenced by facial recognition’s frequent misidentification of black faces and the wrongful arrests of innocent black men in the United States by police using Clearview’s law enforcement application. And as Clearview’s database grows, it may actually become less accurate thanks to the Doppelgänger effect: the larger the pool of faces, the greater the odds that an innocent person closely resembles any given target. Applied to Israeli operations, this could exacerbate existing biases, like the common counterinsurgency bias toward surveilling and targeting “military-aged males,” and result in the misidentification and targeted killing of a Palestinian civilian simply because he or she looks similar to someone on Israel’s watchlist.
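The arithmetic behind that effect is straightforward. If the system falsely matches any two distinct faces with some small per-comparison probability (the rate used below is an illustrative assumption, not a measured Clearview figure), the chance that a database contains at least one false match for a given query compounds with its size:

```python
# Back-of-the-envelope sketch of the Doppelgänger effect. Assumes a fixed,
# illustrative per-comparison false match rate p; the chance that a database
# of n faces yields at least one false match for a query is 1 - (1 - p)**n.

P_FALSE_MATCH = 1e-7  # assumed rate: one false match per ten million comparisons

for n in (1_000_000, 100_000_000, 3_000_000_000):  # database sizes
    p_any_false_match = 1 - (1 - P_FALSE_MATCH) ** n
    print(f"{n:>13,} faces -> {p_any_false_match:6.1%} chance of a false match")
```

Even at an error rate that sounds superb, a database of billions all but guarantees that someone, somewhere, is a close enough look-alike; in a targeting context, that mathematical inevitability is a person.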
Rather than improving conduct, approving the sale of ClearviewCombat would make the United States more complicit in Israeli abuses and deepen US support for an already inhumane occupation, including Israel’s continued starvation campaign in Gaza. Worse still, once facial recognition-enabled targeting technology is exported, it cannot be recalled: the United States will have opened a deadly and unpredictable Pandora’s box of human rights abuses.
Congress must block the sale of ClearviewCombat and further regulate the trade in militarized artificial intelligence. Because this technology is too dangerous for anyone to have, the State Department should place an immediate moratorium on its export and work with global powers to reinvigorate negotiations on the stalled Global AI Compact to outlaw lethal autonomous weapons and ensure that new AI developments respect fundamental human rights.
The United States has long considered itself a champion of privacy, due process, and human rights. Allowing the sale of ClearviewCombat would be an affront to that vision.
About the author: Annie Shiel is a research program manager in civilian protection at FSI’s Center for Health Policy, protection innovation fellow with the Center for Civilians in Conflict, and a national security fellow with the Truman National Security Project. She spent three years at the State Department as a policy advisor in the Office of Security and Human Rights. Annie is an alum of FSI’s Ford Dorsey Master’s in International Policy program.