Use of Facial Recognition Technology and the Jan. 6 Capitol Attack

The use of facial recognition technology (“FRT”) by both law enforcement and private citizens in connection with the Jan. 6 U.S. Capitol attack illustrates the growing power of this technology, even as regulatory oversight strengthens.

On January 6, 2021, the U.S. Capitol was violently breached while Congress was in session to vote on certifying the 2020 election, and many of the attackers sought to disrupt that process. To date, 786 people have been federally charged in connection with the attack, in what is one of the largest criminal investigations in American history. FBI investigators continue to pore over the vast evidentiary record. Indeed, between surveillance and police body camera footage, electronic communications including text messages, and videos and photos posted to social media, the Capitol attack was arguably the most documented crime in U.S. history.

FRT is one of many biometric investigative resources available to law enforcement. Law enforcement regularly uses FRT as an investigative tool to match an unknown face against a large database of images, a process known as one-to-many identification. This process is used to generate suspect leads, identify victims, sort faces in photos collected as forensic evidence, and verify the identity of inmates being released from prison. Law enforcement also uses FRT for one-to-one verification, confirming that a person matches the photo on their identification. This process is increasingly used by U.S. Customs and Border Protection (“CBP”) and the Transportation Security Administration (“TSA”) to verify the identity of passengers at airports.
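To make the two modes concrete, the short Python sketch below illustrates the difference in purely hypothetical terms: one-to-many identification ranks a probe face against an entire gallery to generate candidate leads, while one-to-one verification compares the probe against a single enrolled template. The embeddings, gallery names, and similarity threshold here are invented for illustration and do not reflect any actual law enforcement system.

```python
# Hypothetical sketch of one-to-many identification vs. one-to-one verification.
# Face "embeddings" are stand-in random vectors; real systems derive them from
# images with a trained neural network. All names and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy gallery standing in for a large enrollment database.
gallery = {name: rng.normal(size=128) for name in ["person_a", "person_b", "person_c"]}

def identify(probe: np.ndarray, threshold: float = 0.6) -> list[tuple[str, float]]:
    """One-to-many identification: rank every gallery entry against the probe
    and return candidate leads that score above a similarity threshold."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    return sorted(
        ((name, score) for name, score in scores.items() if score >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

def verify(probe: np.ndarray, claimed_identity: str, threshold: float = 0.6) -> bool:
    """One-to-one verification: compare the probe against a single enrolled
    template, e.g. the photo associated with a traveler's ID."""
    return cosine_similarity(probe, gallery[claimed_identity]) >= threshold

# Simulate a probe image of person_b (their embedding plus noise).
probe = gallery["person_b"] + rng.normal(scale=0.2, size=128)
print("Identification leads:", identify(probe))           # ranked candidate list
print("Verification result:", verify(probe, "person_b"))  # True / False
```

In real deployments the embeddings come from a trained face recognition model, the gallery may contain millions of entries, and a one-to-many candidate list is typically reviewed by a human examiner before it is treated as an investigative lead.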

The Future Is Here

The extent to which federal law enforcement used FRT in connection with the Capitol attack remains unknown. However, according to a recent GAO report, at least three federal agencies reported using FRT on images of suspected criminal activity during the attack:

· The U.S. Capitol Police used the facial recognition firm Clearview AI to help generate investigative leads. The agency also asked another federal agency to run facial recognition searches through that agency’s system on the Capitol Police’s behalf.

· CBP used its Automated Targeting System to conduct searches at the request of another federal agency.

· The Bureau of Diplomatic Security used the Department of State’s Integrated Biometric System to conduct searches at the request of another federal agency.

Additionally, there have been several confirmed instances of local law enforcement using FRT in connection with the attack. For example, after the FBI published “be on the lookout” bulletins asking for the public’s assistance, it received a tip from a Maryland county prosecutor’s office that FRT had identified two state residents as possible matches for an individual described in one of the bulletins. After an FBI investigation, that individual was arrested and became one of the 786 defendants charged.

So-called amateur sleuths, such as members of the online group Sedition Hunters, have also used FRT to help law enforcement identify people who breached the Capitol. The enormous body of publicly available evidence opened the door for ordinary people, most of whom lack any law enforcement or intelligence background, to lend a hand. These “internet detectives” have used facial recognition sites such as PimEyes to identify Capitol attackers and submit tips to law enforcement.

Civil Rights Concerns

The use of FRT by both law enforcement and private citizens in connection with the Capitol attack was arguably a means to further justice. However, as previously described in our blog, the accuracy of FRT has improved greatly in recent years, and this growing power creates the potential for misuse, abuse, and violations of privacy and civil liberties. For these reasons, the ACLU opposes the use of FRT to identify the Capitol attackers and has sued both the FBI and Clearview AI, which offers its services to law enforcement agencies, harvesting photos from social media and storing them in a database for its users.

Congress has yet to pass any federal legislation regulating FRT, but there have been several bipartisan proposals to regulate its use, including the Facial Recognition and Biometric Technology Moratorium Act of 2021 and the Government Ownership and Oversight of Data in Artificial Intelligence (GOOD AI) Act.

In the absence of federal regulation, state and local legislatures have enacted an assortment of laws. In Massachusetts and Washington, law enforcement must obtain a warrant or court order before using FRT. Police in Maine must now meet a probable cause standard before making an FRT request and may not use a facial recognition match as the sole basis for a search or arrest. California has banned facial recognition software in police body cameras but has not banned police from using the technology in other types of cameras.

Writing around the turn of the second century, the Roman satirist Juvenal posed the question “Quis custodiet ipsos custodes?”, literally “who will guard the guards themselves?” and often rendered as “who watches the watchmen?” The question comes to mind when considering the power of FRT and its growing use by law enforcement, as demonstrated in the aftermath of the Jan. 6 Capitol attack.

Daniel Morizono is a 3L student at the Santa Clara University School of Law. This blog post was written as part of Professor Colleen Chien’s AI law class.
