Platform Biometrics
The following is a blog post from Jeremy W. Crampton, whose article “Platform Biometrics” was recently published in Surveillance & Society.
///
In early 2019, my wife and I were flying to the UK on Delta Air Lines from Detroit airport. We noticed that boarding at the gate looked a little different than usual. Our fellow passengers were not using their printed boarding passes, or even their phones or passports, to board. Instead, they were being asked to stand in front of a screen. It turned out that Delta had implemented a facial recognition system, a form of digital biometrics. Using the system, Delta claimed, meant that boarding could occur more seamlessly: “One look and you’re on.” Although Delta offers the opportunity to opt out (at least for now), what was disturbing was that, as far as we could see, we were the only ones to do so out of a planeload of 250 passengers.
How long has this been going on? In my contribution to the special issue of Surveillance & Society on “Platform Surveillance,” I wanted to show both the long history of facial recognition and what appear to me to be some new and very worrying developments.
When we think of biometrics we usually think first of fingerprints (a technique dating back to the nineteenth century; Sherlock Holmes uses them to solve a case), but biometrics encompasses many forms of behavioural and physical characteristics. These include the way a person walks (gait), how much they sweat (electrodermal activity), palm or iris patterns, speech characteristics, and many others. Biometric systems aim not only to identify individuals but also to detect and draw actionable inferences about personality, intent, emotional state, social conformity, gender, and sexual orientation.
It is these inferences that, I think, make up something quite new and worrying. Facial recognition is already changing the nature of public space by making it compulsory to submit our faces to its scrutiny (the London Metropolitan Police recently fined a man for covering his face during a trial run of the technology). But it is when these technologies attempt to infer some characteristic of our innermost nature that they cross a red line. If a technology can be used to judge you, then it will be (and already is). So-called “gender recognition technology,” for instance, promises to classify people as either male or female based on facial imagery in order to target advertisements. In so doing, it entirely erases the possibility of trans lives, forbidding their very existence.
Furthermore, these inferences rest on very shaky scientific ground, and the systems are often trained on biased or insufficient datasets, or on photos of people obtained without their consent. No doubt we will be told that these problems can be addressed through a technological fix. But that is not likely to make the situation better, and may in fact make it worse: do we want to be endlessly tracked wherever we go? The abiding question is how biometrics produces new conditions of inequality and thereby contributes to a more unjust society. It is not yet too late to resist biometric surveillance, as the (partial) ban on facial recognition in San Francisco demonstrates, but the window of opportunity is closing.