Artist and researcher Adam Harvey launched a project known as CV Dazzle in 2010: a series of abstract hairstyles and jagged makeup patterns designed to spoof facial-recognition surveillance. The looks camouflaged him from cameras yet made him even more visible to other people on the street, revealing the gap between human and computer vision. In the past year, face-detection algorithms have grown more robust, so Harvey is exploring ways to adapt and refine his camouflage techniques. His new project, a collaboration with Hyphen Labs called HyperFace Camouflage, spoofs face detection by offering a “perfect face” in a patterned garment. A surveillance system might sense someone walking, scan for a face, and, in doing so, latch onto this face-like object in the pattern rather than the actual face of the person wearing it.
In our conversation, Harvey shared his thoughts on Apple’s new face recognition functionality and where biometrics is heading next.
Why are faces central to modern surveillance?
Adam Harvey: Faces have evolved to convey information. Facial recognition was developed to replicate that human perceptual task. Then, in the last two years, researchers figured out a new way of seeing that isn’t human anymore. For example, by amplifying the green color channel in your forehead, I can extract your heart rate. The capabilities of computer vision broke away from the bottlenecks of our own built-in multimodal perceptual systems.
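The heart-rate claim refers to remote photoplethysmography: tiny, pulse-driven color changes in the skin show up most strongly in a video's green channel. Below is a minimal, hedged sketch of the idea using synthetic data; the function name, the 0.7–4 Hz pulse band, and the simulated 72 BPM signal are all illustrative assumptions, not Harvey's or any vendor's actual method.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Toy remote-photoplethysmography estimate of pulse in BPM.

    Takes the mean green-channel value of a face region per video frame,
    removes the DC offset, and picks the dominant FFT frequency inside a
    plausible heart-rate band (0.7-4 Hz, i.e. 42-240 BPM).
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()               # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)        # plausible pulse range
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0                            # Hz -> beats per minute

# Synthetic demo: a 1.2 Hz (72 BPM) pulse buried in noise, 30 fps "video".
np.random.seed(0)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
green = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(len(t))
print(estimate_heart_rate(green, fps))
```

In a real pipeline the `green_means` series would come from averaging the green channel over a tracked forehead or cheek region, often with bandpass filtering and motion compensation; published methods (e.g. Eulerian video magnification) are considerably more involved than this single-FFT sketch.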
The face is also something our brains have evolved to understand. The fusiform gyrus, the part of the brain that processes faces, is remarkably efficient. That’s why you notice when somebody is looking at you as you walk down the street. We’re very attuned to seeing a face and knowing which direction it’s looking.
In the case of Apple, the new face-recognition functionality is more advanced than [the face scanning that] takes place at border control. Apple takes that up two steps. One is an infrared camera that measures skin reflectivity, and another projects infrared dots — 30,000 of them — onto your face to measure depth. Now you have three types of cameras pointing at you. It’s much more secure, if you’re speaking in terms of producing a high confidence score. But to simply call it secure is a little myopic: secure in what situation, and does it generalize? Can it be fooled by people who look very similar to you? Apple is not the kind of company that comes out and discusses vulnerabilities, and in general people in biometrics don’t offer that information. Nobody knows exactly what Apple’s face-recognition vulnerabilities are. For example, if you get somebody’s twin, then you’re in. Not many people have twins, but take a good makeup artist with a special kind of makeup, and find somebody who looks similar enough…
I’m imagining the face-scrambling device in ‘Minority Report.’ If you had that and a 3D scan of someone’s face, you could get a match.
You can imagine in a sci-fi scenario or in a high-security scenario that having a similarity to a celebrity could be a valuable thing to have.
It’s dangerous to have a doppelgänger. That would make a great thriller, actually.
There you go. Some people are more vulnerable to doppelgänger-type attacks. Biometrics in general is only useful for so long. You can replicate somebody’s finger — that’s difficult, but it will probably get easier. You can replicate somebody’s voice; you can make a fake voiceprint. Each biometric is called a modality, and eventually technology outpaces the reliability of that modality. A face is one modality. A fingerprint is another modality. Everyone wants to do multimodal systems, because one modality alone is not secure enough, but what ends up happening is you build these cascaded multimodal authentication systems, and you just keep adding more and more, and you go deeper into basically somebody’s soul.
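The cascaded multimodal systems Harvey describes are often built as score-level fusion: each modality returns a confidence score, and a weighted combination must clear a threshold. The sketch below is purely illustrative; the modality names, weights, and threshold are assumptions for demonstration, not any real system’s parameters.

```python
# Illustrative score-level fusion for a multimodal biometric check.
# Modality names, weights, and the 0.8 threshold are assumed values.

def fuse_scores(scores, weights, threshold=0.8):
    """Weighted-sum fusion: accept only if the combined confidence
    clears the threshold. Each added modality can raise confidence,
    but also widens the personal data the system collects and must
    protect -- the cat-and-mouse dynamic described above."""
    total = sum(weights.values())
    combined = sum(scores[m] * w for m, w in weights.items()) / total
    return combined, combined >= threshold

weights = {"face": 0.5, "fingerprint": 0.3, "voice": 0.2}

# A strong face match alone may not be enough to authenticate...
print(fuse_scores({"face": 0.95, "fingerprint": 0.4, "voice": 0.5}, weights))
# ...but moderate agreement across all modalities is.
print(fuse_scores({"face": 0.9, "fingerprint": 0.85, "voice": 0.8}, weights))
```

The trade-off is visible in the two calls: fusion makes spoofing any single modality insufficient, but it also means every enrollment now stores face, fingerprint, and voice templates together, raising the stakes of a database leak.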
The next step is to do vein recognition in your fingers. Then what happens when that database leaks and somebody figures out how to make a totally spoofable fake vein finger? What do you do? Biometrics is a cat-and-mouse game. Every step, you acquire more resolution to increase the confidence score, but there’s really no end in sight.
Apple started with fingerprints and moved to the face. Where can it go next?
If the goal is absolute security, the device has to be fused to your body or become a part inside it. Even that’s probably not secure: Google was testing a pill that emits a radio-frequency identifier, but radio-frequency identifiers are totally clonable. It’s just asking for trouble.
I imagine this could lead to aggressive scenarios, like kidnapping.
Or developing synthetic skin or growing skin. When the bounty is somebody’s huge bitcoin wallet, for example, then all that stuff makes more sense or is less crazy. The other thing is facial recognition is a common term, but nobody can agree what a face is — where it starts and where it ends.
Does it include the neck? Or just between the eyebrows and lips?
It could be a 100-by-100-pixel crop. Some people crop it a little bit more. When it’s eight megapixels, that’s a much different thing, because then you’re looking at capillaries and skin texture. It’s basically a fingerprint of the facial skin at that point, and you have eyes, so you’re also doing iris recognition. Then you could include motion, so you’re doing behavioral analysis.
It could include looking at muscle patterns — the way a mouth moves. And some people include ears; ear recognition is a hot topic. At some point it’s hardly a face anymore. You’re looking at blood vessels. You’re looking at skin aberrations. You’re looking at eyelashes. Most people don’t include all that when they say face recognition, but that’s where it’s heading.