The Dangers of Facial Recognition

Jon M
Published in Product AI
3 min read · May 4, 2021

Facial recognition was once a hallmark of sci-fi, but we’re now years past its introduction in many facets of contemporary society. Social networks use it to tag people in photos, it’s the way that millions of people unlock their devices, and it’s playing an ever-greater role in helping law enforcement around the world identify criminals and missing children in equal measure.

It’s an amazing technology that is only possible because of the rise of machine learning algorithms, the sheer mountains of data that modern society collates every day, and ever-faster computers to process it all. But despite all of these advances, facial recognition technology has a dark side. It’s often wrong, which can create a myriad of potentially quite serious problems for those who fall foul of it. It’s also easily fooled, bringing its legitimacy into question for anything but the most superficial of purposes.

Imperfect: the enemy of the good guys

One of the most glaring problems with facial recognition is that it is prone to mistakes. That’s not uncommon in all sorts of technological fields, but since facial recognition is used for identifying people, the results of mistakes can be disastrous. Simply misidentifying someone when trying to unlock a phone could mean giving an unauthorized person access, but if police arrest the wrong person because of an incorrect facial match, that’s life-changing.

This is especially problematic when you consider that facial recognition is particularly prone to these kinds of mistakes when dealing with certain groups: particularly women, people with darker skin tones, and older people.

A 2018 study into the problems of gender classification in facial recognition found that black women were misidentified as much as 35% of the time. A 2019 federal study reported similar findings, showing that black women were over 10 times more likely to be misidentified than white men. Black men, white women, and older people also faced a higher chance of misidentification under facial recognition algorithms.

All this research, along with a major pushback against police violence towards minority groups in 2020, saw Amazon, IBM, and Microsoft all end the sale of facial recognition software to police forces. That hasn't stopped its use in other countries, nor has it stopped other companies from developing these technologies in other fields, but for groups like the ACLU, which oppose its use by law enforcement, it was a huge victory.

Hiding in plain sight

Facial recognition doesn’t just get it wrong; it can be tricked, too. Computer Vision Dazzle (CV Dazzle) was a phrase coined in 2010 by artist Adam Harvey, who suggested that certain makeup and clothing could be used to fool facial recognition algorithms. Because these algorithms rely on specific facial metrics for identification, disrupting the eyes and the shape of the face through makeup, hairstyles, and accessories, or simply making them look more generic, can trick facial recognition systems into misidentifying someone, or not recognizing them as a person at all.
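To see why disrupting facial metrics works, here is a minimal, hypothetical sketch of thresholded-distance matching. Real systems compare learned embeddings with hundreds of dimensions, not hand-picked measurements, and the metric names and numbers below are invented for illustration; but the underlying principle — a probe matches a stored template when their distance falls under a threshold — is what dazzle-style perturbations exploit.

```python
import math

# Toy example: a face "template" reduced to a few hypothetical,
# normalized geometric metrics (e.g. eye spacing, face-shape ratios).
# Values and threshold are invented for illustration only.

def euclidean(a, b):
    """Straight-line distance between two metric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(probe, template, threshold=0.25):
    """A probe 'matches' when it lands close enough to the template."""
    return euclidean(probe, template) < threshold

enrolled = [0.42, 0.31, 0.58, 0.27]      # stored template
same_person = [0.43, 0.30, 0.57, 0.28]   # ordinary new photo: small drift
dazzled = [0.43, 0.30, 0.95, 0.55]       # eye/face-shape metrics disrupted

print(is_match(same_person, enrolled))   # True: small drift stays under threshold
print(is_match(dazzled, enrolled))       # False: perturbation pushes it over
```

Dazzle makeup and accessories work by shifting exactly the metrics the matcher depends on, so the probe lands outside the threshold even though a human still recognizes the face.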

The effectiveness of traditional CV Dazzle tactics has waned in recent years as algorithms have improved, but that has just led to new strategies to counter them. Some protesters suggest that reflective jewels can be useful, while others highlight how important it is to hide your ears, as they can be a major identifying factor.

There have also been attempts to use full facial projectors to display an entirely fictional face over the top of a person’s existing one, and some researchers found that simply holding a photo of a crowd blinded some algorithms to them entirely.

Obtuse overreliance

Facial recognition technology itself is not a villain. It’s an incredibly useful technology that can help bring greater security to personal devices and help find people that would otherwise slip through societal cracks. But its imperfections are extreme, and if not used in conjunction with the intuition that is still, for now, uniquely human, its potential skews towards the negative.

There have been real criminal convictions of innocent people due not just to the failures of facial recognition, but to a lack of corroborative insight from the people who should oversee its use.

As the steady march of progress weeds out problems in the algorithms, these concerns will lessen. But an argument could be made that as long as they fall even one step short of perfection, their use in settings that could forever change the lives of those they touch should be limited.
