CS(A)I: Miami
Just a few months before the death of George Floyd sparked widespread discontent with law enforcement worldwide, the Miami Police Department used facial recognition software developed by Clearview AI Inc. to find an assailant in Miami’s Wynwood neighbourhood. This was not the first time the department had used Clearview’s software — in fact, detectives at the Miami Police Department had already identified 28 suspects tied to violent crimes and felony cases with its help¹.
According to Clearview’s website, Clearview AI is a new image search technology that helps law enforcement identify suspects, perpetrators, and victims of crime; it is also used to exonerate innocent people accused of crimes. On this description alone, most people would have no issue with the software. A deeper look, however, reveals that the company behind Clearview AI scraped billions of publicly available pictures from Facebook, Venmo, LinkedIn, and Instagram without user consent. Put differently, Clearview finds pictures of a person that are already available online, then analyzes the person’s face in those images to create a “faceprint” of sorts. The founder of Clearview AI, Hoan Ton-That, defends this practice — he believes he has a First Amendment right to use publicly available images.
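Clearview’s internal pipeline is proprietary, but a common way such a “faceprint” works in facial recognition systems generally is to represent each face as a numeric embedding vector and compare faces by similarity score. The sketch below is purely illustrative — the dimensions, names, and threshold are assumptions, not anything Clearview has published:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return (name, score) for the closest enrolled faceprint,
    or None if no candidate clears the decision threshold."""
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in database.items()]
    name, score = max(scored, key=lambda t: t[1])
    return (name, score) if score >= threshold else None

# Toy 4-dimensional "faceprints"; real systems use hundreds of dimensions
# produced by a neural network, not hand-written numbers.
database = {
    "person_a": [0.9, 0.1, 0.3, 0.4],
    "person_b": [0.1, 0.8, 0.7, 0.2],
}
probe = [0.88, 0.12, 0.31, 0.42]  # embedding of a new photo
print(best_match(probe, database))  # matches "person_a"
```

The key point for the privacy debate is that once an embedding is computed and stored, the original photo’s later deletion from a social network does nothing to remove it from such a database.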
Ton-That’s claim is that the images gathered by Clearview AI are willingly uploaded by people who have also agreed to certain terms and conditions. In other words, Clearview AI is simply searching for images already made public and loading them into its database. Clearview does not use pictures uploaded for private access, nor has it violated the terms and conditions of any social media site (unlike Cambridge Analytica in the Facebook data scandal). Although the argument put forth by Clearview’s founder is internally consistent and at least arguably legal, many are hesitant to accept it — and for good reason. After all, there must be some rational explanation as to why tech giants like IBM, Amazon, and Microsoft have stopped selling facial recognition software to police departments⁴.
There are two main reasons for rejecting the use of facial recognition software in law enforcement: the possibility of bias, and issues of privacy and consent. There are several documented cases of algorithmic bias in which minority populations are disproportionately misrepresented or underrepresented, as discussed by Jae Makitalo. Clearview claims that its app is 99.6% accurate — but even the slightest chance of discrimination at the hands of the police is worrisome for the public. This marginal chance of incorrect identification and its implications are not being ignored by people protesting police practices — banning biometric technology is among their top demands⁵.
Miami Police Assistant Chief Armando Aguilar understands the concern of gender and racial bias but says that his department has measures in place to prevent wrongful arrests. Specifically, the Miami Police Department’s policy ensures that officers and detectives are aware of algorithmic bias and cannot make arrests based solely on a facial recognition match. The policy states that “a positive facial recognition search result alone does not constitute probable cause of an arrest”, meaning officers would require additional evidence, like a witness or DNA, before making an arrest⁶. Maureen McGough, National Programs Director of the National Police Foundation, reveals that all U.S. law enforcement agencies using Clearview AI and other biometric tools treat facial recognition IDs only as leads, not as evidence⁷. While this may be comforting for some, opposition towards facial recognition technology comes from yet another frontier: privacy.
Even if extra caution is taken to prevent algorithmic bias, facial recognition technology still raises the issue of privacy. As it stands, the Clearview AI database comprises three billion publicly available photos. It doesn’t matter if those images are later deleted or made private — the Clearview database retains any image so long as it was once public. It doesn’t even matter if someone appears in an image without their consent, as with people captured in the background of others’ pictures. When questioned about this, Clearview’s founder said, somewhat sheepishly:
“There’s also our view that… if someone else was in the background of [your] photo, and that could lead to solving a crime, then it’s a good thing.”⁸
His argument is somewhat akin to the “nothing to hide” argument, which holds that if an individual has nothing to hide, they need not worry about privacy. It is certainly an attractive argument that many find themselves sympathetic to. Regarding it, American whistleblower Edward Snowden offers a few words:
The common argument we have — if you have nothing to hide, you have nothing to fear — the origins of that are literally Nazi propaganda. This is not to equate the actions of our current government to the Nazis, but that is the literal origin of that quote. It’s from the Minister of Propaganda, Joseph Goebbels. [A]rguing that you don’t care about privacy because you have nothing to hide is like arguing that you don’t care about free speech because you have nothing to say.⁹
Despite facing its fourth privacy-related lawsuit in May, Clearview AI continues to boom. Its founder shares that over 600 law enforcement and intelligence agencies currently use the biometric recognition software. That’s right — the Miami Police Department isn’t the only one. Here at home, municipal police forces in Calgary, Edmonton, Halifax, and Windsor have admitted to using the controversial software. Toronto police Chief Mark Saunders disclosed in February of this year that Clearview was used “informally” in 2019, while the Ottawa Police also used Clearview AI for three months that same year.¹⁰ Most shockingly, the RCMP denied using facial recognition software in January of this year, only to backtrack a few weeks later and reveal that it had been using Clearview AI “for a couple of months”.¹¹ Fortunately for those who, owing to this lack of transparency, now harbour an intense opposition to AI in law enforcement, the Office of the Privacy Commissioner of Canada (OPC) announced in July of this year that Clearview AI would cancel its contract with the RCMP and no longer offer its facial recognition services in Canada.¹²
Certainly, it cannot be denied that facial recognition technology has led to the identification of hundreds of sex trafficking victims and offenders. Clearview AI and other biometric innovations have also led to more arrests in violent crimes and in cases previously thought to have gone cold. On the other hand, facial recognition tools amplify the risk of racial and gender bias and raise issues of consent and privacy. While Canadians no longer have to decide on police use of biometric technology, the same cannot be said for our southern neighbours. American law enforcement will be faced with a high-stakes decision: will justice or injustice be served to the American people?
Update (03/02/2021)
Today, Canada’s top privacy watchdog has ruled that Clearview AI violated Canadian privacy laws by collecting and compiling images of Canadians — public or not — without their permission. In a scathing report from the Office of the Privacy Commissioner, privacy commissioners are calling for Clearview AI to delete already-collected photos of Canadians in its database and take its technology out of Canada.
This may seem like a severe crackdown on a seemingly beneficial technology. Privacy Commissioner Daniel Therrien disagrees, pointing to the disproportionate arrest of racial and ethnic minorities, who are already overrepresented in Canada’s criminal justice system. Therrien tells CBC reporters:
It is an affront to individuals’ privacy rights and inflicts broad based harm on all members of society who find themselves continually in a police lineup.¹³
As the RCMP and police departments across Canada face accusations of discrimination on the grounds of racism, ableism, and sexism, Canada’s ban of Clearview AI is perhaps not a bad idea after all.