Beyond Bias: Why We Can’t Just “Fix” Facial Recognition

In many cases, the problems at the heart of facial recognition run deeper than bias alone.

Digital Freedom Fund
Aug 22, 2020


Artwork by Cynthia Alonso

In recent months, the troubling issue of facial recognition has hit the headlines.

Following the explosion of Black Lives Matter protests around the globe, a highly publicised moratorium on facial recognition by several big tech companies, and even a segment on John Oliver’s Last Week Tonight, people have become alert to the many dangers of this mushrooming technology.

In light of the calls for racial justice sweeping the globe, the issue of bias in facial recognition systems has ignited particular anger. But, in many cases, the problems at the heart of facial recognition run deeper than bias alone.

Digital activists have been pointing out for years that facial recognition technology identifies some faces better than others — namely, white male faces.

Finally, it seems, people have started to listen. Like other artificial intelligence systems, facial recognition reflects the prejudices of its creators and of the data it learns from.

Fears about where this might lead aren’t merely speculative. What activists have warned us about is already happening. Perhaps the most striking illustration of this was the viral story of one man, Robert Julian-Borchak Williams, who was wrongly identified by facial recognition and, consequently, arrested by police.

This case is merely the tip of the iceberg. When I speak with Gracie Bradley of Liberty, she points out that, when it comes to automated systems, “you have an issue of scale, which massively magnifies the possibility for injustice to happen”.

What activists have warned us about is already happening

“That’s the difference between, for example, one police officer who tries to recognise a suspect and gets it wrong, and a facial recognition algorithm that is able to scan hundreds of people in one day and is getting things wrong.”
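Bradley’s point about scale is, at bottom, an arithmetic one. A back-of-the-envelope sketch, using entirely invented numbers, shows how quickly a seemingly “low” error rate compounds when a system scans crowds all day:

```python
# Hypothetical illustration of how scale magnifies error.
# Both figures below are invented for the sake of the example.
false_match_rate = 0.01         # 1% of scans wrongly flag an innocent person
faces_scanned_per_day = 10_000  # a busy deployment scanning public crowds

wrongful_flags = false_match_rate * faces_scanned_per_day
print(f"{wrongful_flags:.0f} potential wrongful flags per day")  # prints: 100
```

A single officer making mistakes at the same rate might wrongly suspect a handful of people in a year; under these assumptions, an automated system can make a hundred such mistakes in a single day.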

In response to the recent outcry, a handful of tech corporations announced they were hitting pause on facial recognition for the next year. In some ways, it’s a promising development: a win for activists who’ve been tirelessly pushing back against facial recognition for years.

But, on the flipside, a year isn’t a very long time — and what is arguably a publicity stunt by a handful of companies doesn’t quite get to the heart of what makes facial recognition so worrying.

Contrary to popular assumption, it’s not always easy to simply make facial recognition systems “less biased”

This isn’t just cynicism or “whataboutery”. Contrary to popular assumption, it’s not always easy to simply make facial recognition systems “less biased”. Many experts argue that it is short-sighted to believe these systems can be fixed with a few tweaks or by feeding in “cleaner” data.

“Where would this clean data come from?” asks Bradley. “The real issue is that you’d have to eradicate discrimination in society first.”

Take the 2015 case of a Google image recognition tool that wrongly classified black people as gorillas. Instead of correcting the algorithm, Google stopped the tool from returning the label “gorilla” altogether, illustrating rather starkly that a quick fix for the root problem couldn’t be found.
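In practice, “stopping the tool from returning the label” amounts to a post-processing filter layered on top of an unchanged model. The sketch below is a purely hypothetical illustration of that kind of workaround; the function, label names, and confidence scores are all invented here, not Google’s actual code:

```python
# Hypothetical sketch of a label-suppression "fix": rather than retraining
# the model, banned labels are simply filtered out of its predictions.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def filter_predictions(predictions):
    """Drop blocked labels from a list of (label, confidence) pairs."""
    return [
        (label, score)
        for label, score in predictions
        if label.lower() not in BLOCKED_LABELS
    ]

# The underlying model is untouched; the offending output is hidden,
# not corrected.
raw = [("gorilla", 0.91), ("primate", 0.55), ("outdoors", 0.40)]
print(filter_predictions(raw))  # [('primate', 0.55), ('outdoors', 0.40)]
```

The misclassification still happens inside the model; it is simply never shown to the user, which is why this counts as hiding the problem rather than fixing it.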

These technological weaknesses collide with a troubling human tendency: to defer to machines. People tend to trust what computers tell them, which makes it all the more important that we stay alert to the technology’s many failings.

“There’s a real risk of a kind of codification of existing bias which then takes on a veneer of ‘Oh, but it’s a computer doing it. It’s a technology doing it, so it must be right’”, says Bradley.

Other biometric tracking systems, such as emotion detection and gait recognition, are quickly creeping onto the market

Putting facial recognition aside for a moment, there are also challenges posed by its close cousins. Other biometric tracking systems, such as emotion detection and gait recognition, are quickly creeping onto the market. Unsurprisingly, these throw up just as many obstacles.

“There’s a lot of research that shows how emotion is expressed and seen varies massively across cultures,” says Bradley. “We also know that there are a lot of background stereotypes that mean some people — young black men, for example — are more likely to be seen as angry or threatening.”

For some, focusing on the matter of bias or prejudice jumps the gun. Even before we delve into the “accuracy” of these systems, there are questions around whether we should be developing such technology at all — never mind deploying it in public spaces without consent.

It shouldn’t be radical to suggest that the potential impact of these systems should be fully understood before they’re rolled out by police and public authorities. Unfortunately, that is not always the case.

“We shouldn’t be saying ‘oh, let’s see how it goes and maybe we’ll be able to fix it at a later date’”, says Bradley. “There’s too much at stake.”

When I speak with Dr Seeta Peña Gangadharan, an associate professor in Media and Communications at the London School of Economics and Political Science, she tells me about the facial recognition trials that were run at Granary Square in London, an open space where crowds regularly gather to sit in the sun or spend time with their families.

“I can’t tell you the number of times I’ve brought my kids to that square,” she says. “Every time we go there, I point out the cameras to them so they know what surveillance infrastructure looks like.”

“You would never be able to get away with that in an intimate relationship, that kind of notice or consent process that you have with face recognition systems”

But for months prior, Dr Gangadharan had no idea the pilot was even running. There was no option to tick a box, sign a form, or consent to being surveilled in any meaningful way.

“That’s, I think, a taste of what’s to come,” she says. “You would never be able to get away with that in an intimate relationship, that kind of notice or consent process that you have with face recognition systems. Why should that be acceptable?”

The fact that facial recognition technology around the world functions as an intensive form of mass surveillance can’t be overlooked. Yet somehow this reality is becoming ever more normalised in public spaces, including at protests and even in schools.

“It really tips the balance of power in terms of the individual and the state,” says Bradley. “[It makes it] far more difficult, potentially, to take political actions or dissenting actions. Or simply for people to choose what they do or do not disclose to the state, which is a really important part of people’s identity.”

Of course, there are actions that can be taken: many are campaigning for fully fledged bans on facial recognition, while others urge us to tackle the issue at its roots by demanding greater reflexivity from the technologists who create these products.

…it’s about asking ourselves: do we need digital, data-driven technology to solve this particular problem?

“I feel like the question has to be answered much earlier than the development of the technology, and currently it is not,” says Dr Gangadharan. For her, it’s about asking ourselves: do we need digital, data-driven technology to solve this particular problem?

Promisingly, there are cases where the answer has been no. Take the recent landmark ruling in the UK that deemed the use of facial recognition by police in South Wales to be a breach of human rights — a massive win for Liberty, which is campaigning for a ban on the technology.

Take the recent landmark ruling in the UK that deemed the use of facial recognition by police in South Wales to be a breach of human rights

There are also two recent cases, in Sweden and France, where the use of facial recognition to monitor kids’ attendance at school was deemed disproportionately invasive under the GDPR.

There are chances, it seems, to stop the onward march of surveillance technology.

“Things are moving quickly. That doesn’t mean that it’s all inevitable,” says Bradley. “When we look at just what’s happened over the last few months… in terms of Black Lives Matter, if we look back at Extinction Rebellion, it’s clear that the course of history isn’t sort of linear and fixed. People can intervene to change what happens.”

The Digital Freedom Fund supports partners in Europe to advance digital rights through strategic litigation. Read more at https://digitalfreedomfund.org/.
