“Real-time face recognition across tens of millions of faces, and detection of up to 100 faces in challenging crowded photos.”

That was how Amazon described its Rekognition facial-recognition technology, months before it found itself at the heart of the dispute over police surveillance in the United States.

Documents obtained by the American Civil Liberties Union (ACLU) revealed that Amazon had been licensing its powerful facial-recognition system to police in Orlando, Florida, and Washington County, Oregon, enabling authorities to identify people against a database of tens of millions of faces. In the case of Orlando, this technology had scope for real-time tracking, analyzing feeds from several cameras across the city. Amazon’s marketing material for Rekognition had promoted the use of its detection system in conjunction with police officers’ body cameras.

In the wake of the revelations, Orlando Police Chief John Mina attempted to smooth over concerns, emphasizing that plans for drawing footage from body cameras into an Amazon-powered surveillance system were in their infancy: “We would never use this technology to track random citizens, immigrants, activists, or people of color,” Mina assured reporters at a press conference. “The pilot program is just us testing this technology out to see if it even works.”

Yet concerns over the misuse of these surveillance technologies are building on a global scale. The combination of artificial intelligence and surveillance is problematic on both a practical and ethical level, provoking alarm from high office about the culpability of tech companies and law enforcement over these powerful tools.

At the same time, guerrilla resistance is emerging. To bend a quote by Paul Virilio:

When you invent the ship, you also invent the shipwreck. You also invent the lighthouse. You also invent the pirate.

Racially Biased Facial Recognition

The early signs for the effectiveness of facial recognition aren’t good. A report by civil liberties group Big Brother Watch found that, in the UK, 98 percent of the London Metropolitan Police’s facial-recognition matches were false alarms, while similar trials by South Wales Police at concerts, festivals, and royal visits wrongly identified people 91 percent of the time.

A prototype for the HyperFace Mask, designed to confuse facial recognition. Photo: Hyphen-Labs, Adam Harvey

This inaccuracy is compounded by the fact that facial recognition is strongly biased toward white men. Research into algorithms developed by Microsoft, IBM, and Face++, reported by the New York Times, found that gender was misidentified in up to 1 percent of lighter-skinned males, compared with up to 7 percent of lighter-skinned females, up to 12 percent of darker-skinned males, and up to 35 percent of darker-skinned females.

“Although it may be sensitive, when we are discussing facial recognition, we are addressing, fundamentally, identity politics,” Hyphen-Labs, an international collective of women technologists of color, tells me over email.

The assumption that nascent technologies are “neutral,” Anna Lauren Hoffmann wrote recently, belies the reality that they amplify insidious racial and gender biases. That creates a machine that can supercharge discrimination, turning police body cameras from tools of accountability into partially sighted surveillance engines.

For Hyphen-Labs, this is something to be examined, interrogated, and — to borrow that word much loved by Silicon Valley — disrupted.

“Privacy in public has yet to really be extended to communities of color,” the collective says. “Beyond this, the control of identity and image has been a way to oppress freedom from groups who have been historically and systemically marginalized both in the U.S. and globally.

“We want to have control over our identities, fate, and image.”

One project by Hyphen-Labs, led by Ece Tankal, Ashley Baccus Clark, and Carmen Aguilar y Wedge, questions how control could be wrested back in a society of real-time tracking and instant facial recognition. Working in collaboration with artist and researcher Adam Harvey, the team created a scarf that scrambles computer-vision algorithms. The purple material is packed with glitchy splodges, a camouflage for the 21st century that swaps out fake foliage for ghost faces, spamming the camera with potential matches.
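The mechanism is easy to demonstrate at a small scale. Below is a minimal sketch, assuming OpenCV’s stock Haar-cascade face detector as a stand-in for the kind of system the scarf targets; the image filename is hypothetical. The idea is simply that a photo dominated by the scarf’s ghost-face pattern makes the candidate count balloon, burying the wearer’s actual face among false positives.

```python
# Minimal sketch: count how many face-like regions a stock detector reports.
# A HyperFace-style pattern works by flooding this kind of detector with
# candidate matches. Assumes OpenCV is installed; "scarf_photo.jpg" is a
# hypothetical input image.
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("scarf_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each returned rectangle is a region the detector believes contains a face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detector reports {len(faces)} face candidates")
```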

“If we can be clever about the inputs we give our machines, we can trick, troll, and use them to address the issues that arise when we depend on them too much,” Hyphen-Labs explains.

The scarf is part of Hyphen-Labs’ wider NeuroSpeculative AfroFeminism project, which also includes earrings embedded with cameras and an Afrofuturist “neurocosmetology” salon, all angled toward exploring the roles and representations of black women in technology. When it comes to facial recognition, the scarf is a way to draw attention to the creeping growth of new surveillance techniques in Western society and how these mechanisms are imperfect, built from flawed algorithms that can be fooled, sidestepped, hijacked.


How to ‘Perturb’ a Computer System

Hyphen-Labs admits that the actual effectiveness of the scarf is unclear, since the pattern was designed against open-source facial-recognition software, which becomes outdated relatively quickly. Instead, the project is meant to surface questions around privacy and the politics of surveillance. “It is a provocation, bending the imagination to think about the cameras that coexist with us in society and capture our movements with and without our consent,” the collective explains.

While Hyphen-Labs is disrupting the conversation, others are disrupting facial recognition on a technological level.

Frames for glasses that have been designed to fool facial biometric systems. Photo: Carnegie Mellon University

Researchers from Carnegie Mellon University, for example, have developed a way to use specially designed eyeglass frames to “perturb” computer systems. The patterns on the frames confuse the image-recognition systems, meaning the wearer can evade recognition or, most impressively, masquerade as an entirely different person. One subject in the study fooled a system into thinking he was both the actress Milla Jovovich and the actor John Malkovich. Talk about identity politics.
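The general principle is an optimization problem: tweak only the pixels inside the frame region until the model’s output shifts toward someone else. The sketch below is not the Carnegie Mellon team’s implementation; it is a minimal PyTorch illustration in which a generic pretrained ImageNet classifier stands in for a face-recognition model, and the face photo and crude rectangular “glasses” mask are hypothetical.

```python
# Minimal sketch of a masked, targeted adversarial perturbation: optimize
# pixels only inside a glasses-shaped region so the model's prediction moves
# toward a chosen target class. Not the CMU implementation; a generic
# ImageNet classifier stands in for a face recognizer, and "face.jpg" plus
# the rectangular mask are hypothetical.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(pretrained=True).eval()
face = T.Compose([T.Resize((224, 224)), T.ToTensor()])(Image.open("face.jpg")).unsqueeze(0)

frame_mask = torch.zeros_like(face)
frame_mask[:, :, 80:110, 40:184] = 1.0          # stand-in for the eyeglass-frame region

target = torch.tensor([1])                      # arbitrary "someone else" class
delta = torch.zeros_like(face, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

for _ in range(200):
    adv = torch.clamp(face + delta * frame_mask, 0, 1)   # perturb only inside the frames
    loss = F.cross_entropy(model(adv), target)           # pull the prediction toward the target
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = torch.clamp(face + delta * frame_mask, 0, 1)
print("Predicted class after attack:", model(adv).argmax(dim=1).item())
```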

Another project, led by University of Illinois researcher Jiajun Lu, involved creating “adversarial examples” to spoof image-recognition tools, including facial identification and the systems used by self-driving cars to recognize stop signs and traffic. For the former, Lu and his team created a camouflage that looks like one of those Google DeepDream nightmares. “By applying our camouflage, faces cannot be detected from various view conditions: angles, distances, lightings, and so on,” Lu says.

A DeepDream-esque camouflage mask, overlaid on a video of a person’s face. Photo: Jiajun Lu, Hussein Sibai, and Evan Fabry

While the camouflage in the research was overlaid onto a person’s recorded face, Lu tells me it would be “a piece of cake to manufacture tattoo stickers to put them on a person’s face.” He suggests that experimental “living tattoos,” made from genetically programmed living cells, could be used for the purpose. A benefit of this would be that the pattern could be designed to appear only under certain circumstances—the cells in the tattoos programmed to show their colors only when commanded or when particular environmental conditions are met. Given that the camouflage makes you look like a walking acid trip, being able to turn it on or off as you please would be useful.
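Making a pattern hold up across angles, distances, and lighting is usually done by optimizing it against random transformations of the image, so that on average the attack still works whatever the camera sees. The sketch below illustrates that expectation-over-transformation idea; it is not Lu’s implementation, a generic classifier again stands in for a face detector, and “face.jpg” is a hypothetical input.

```python
# Minimal sketch of making an adversarial pattern survive changes in angle,
# distance, and lighting: average the attack loss over random transformations
# at each optimization step. Generic illustration only; "face.jpg" and the
# stand-in classifier are assumptions, not Lu's setup.
import random
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from PIL import Image

model = models.resnet18(pretrained=True).eval()
face = T.Compose([T.Resize((224, 224)), T.ToTensor()])(Image.open("face.jpg")).unsqueeze(0)

with torch.no_grad():
    true_label = model(face).argmax(dim=1)        # the identity we want to hide

delta = torch.zeros_like(face, requires_grad=True)   # full-face "camouflage" layer
opt = torch.optim.Adam([delta], lr=0.03)

for _ in range(100):
    loss = 0.0
    for _ in range(8):                                         # sample a few view conditions
        adv = torch.clamp(face + delta, 0, 1)
        adv = TF.rotate(adv, angle=random.uniform(-20, 20))    # camera angle
        adv = TF.adjust_brightness(adv, random.uniform(0.7, 1.3))  # lighting
        scale = random.uniform(0.8, 1.2)                       # distance
        adv = TF.resize(adv, [int(224 * scale)] * 2)
        adv = TF.resize(adv, [224, 224])                       # back to model input size
        # Untargeted attack: push the prediction away from the original identity.
        loss = loss - F.cross_entropy(model(adv), true_label)
    opt.zero_grad()
    (loss / 8).backward()
    opt.step()
```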


China’s Test Bed for Surveillance Technology

Lu isn’t developing these “adversarial examples” for protesters, hackers, or spies, however. He says that the basic idea behind this research is to push the people who design neural networks to improve the algorithms in their systems.

“People in the deep learning area [are becoming] more and more concerned with the security of the networks, as the networks achieve more and more success. We cannot afford the security threat. There is no doubt that more and more people will try to attack these systems.”

The idea is that by wrecking the ship, flaws are exposed and improvements can be made. It’s a reminder that, while trials like those recently undertaken by London’s Metropolitan Police are laughably inaccurate, the technology is evolving.
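One standard way those exposed flaws feed back into stronger systems is adversarial training, in which attack images generated on the fly are mixed into every training step. The sketch below is a generic illustration of that loop, not a method described by Lu or the Carnegie Mellon researchers; the model, optimizer, and data batches are hypothetical objects supplied by surrounding training code.

```python
# Minimal sketch of adversarial training: generate a worst-case version of
# each batch with a one-step attack (FGSM) and train on both the clean and
# attacked images. Generic illustration; `model`, `optimizer`, `x`, and `y`
# are assumed to come from the surrounding (hypothetical) training code.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step attack: nudge each pixel in the direction that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return torch.clamp(x + eps * x.grad.sign(), 0, 1).detach()

def train_step(model, optimizer, x, y):
    adv_x = fgsm(model, x, y)                 # the "shipwreck": a deliberately broken input
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(adv_x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```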

If you want a hint of where it could be in the next decade, look east. While UK and U.S. police face a gamut of technical and political barriers to developing facial-recognition systems, China is another matter.

SenseTime surveillance in action.

SenseTime, a Chinese company at the heart of the country’s AI boom, was recently valued at $3 billion, a figure fueled by the firm’s image-recognition capabilities. SenseTime’s customers include a platter of government-related agencies, which are able to feed the company datasets of a magnitude that would make many Western AI firms drool. Speaking to Quartz, SenseTime CEO Xu Li gestured to a training database of more than 2 billion images: “If you have access to the government data, you have all the data from all Chinese people.”

In return, SenseTime is able to provide its Viper surveillance system, which the company aims to have processing 100,000 live video feeds simultaneously, pulling footage from CCTV, ATM cameras, and office face-scanners. Identity politics are again at the heart of these technologies. The test bed for much of this top-of-the-line surveillance over the past few years has been the fringe province of Xinjiang, home of the Uighur Muslim ethnic minority, which the Chinese state has blamed for a string of terrorist activities. There have been reports that China has been using its advanced surveillance to impose greater central authority and clamp down on the rights of the Uighur population.

SenseTime has emphasized that AI image recognition can be used for good—that it can be used to help find missing children. That’s a sentiment echoed in recent comments by Amazon about its Rekognition system. After the ACLU released public records detailing the technology giant’s relationship with law enforcement agencies, Amazon issued a statement arguing that Rekognition has “many useful applications in the real world,” including finding lost children at amusement parks.

No doubt there is truth in this. No doubt, also, that this isn’t the whole story. The scale and scope of image recognition is expanding, and whether the camera is in our smartphones, on our playgrounds, or on a police officer’s chest, our identities are the target. There may be improvements to stop clever scarves and glasses from spoofing the system, but once you’ve invented the ship, you can’t uninvent the shipwreck. You can’t uninvent the pirate.

“We are aware of updates,” Hyphen-Labs says. “We get them on our phones, computers, cars, and as facial recognition technology develops, we will still postulate on potential ways to subvert its intended use.”