The Promise and Perils of Facial Recognition Technology
Lyndsey Jefferson speaks to Emily Taylor about the increasing prevalence of facial recognition technology and whether current laws are enough to address privacy concerns.
By Lyndsey Jefferson and Emily Taylor
One of the first uses of facial recognition was as a military technology in Iraq and Afghanistan. Earlier this year, it was part of a controversial trial by London’s Metropolitan Police. How is this technology evolving to become a major part of everyday life?
We’re already seeing it. Many phones and applications offer the choice of having facial recognition as an alternative to a password, which provides a convenient and easy way to verify that it’s really you.
Of course, with that convenience comes a price and a worry about how exactly it works, who else has access to it and what it can be used for. The UK is quite unusual in that people tolerate very high levels of surveillance; it has more CCTV cameras per capita than any other Western country. Those visiting the UK from more privacy-conscious countries like Germany are often shocked that people don’t seem to worry about it.
A natural progression of that is that those cameras become smarter and smarter and are able to recognize people in different places. We see in China the controversy over social credit and the fact that, as facial recognition improves and people are followed around in different contexts, they can be pinned to particular locations or activities and judged by those activities.
Critics of facial recognition technology say that it takes place in a legal vacuum, especially as the lines between public and private sectors using the technology become blurred. Have laws and policies caught up with the rapid advancement of AI and facial recognition?
The General Data Protection Regulation (GDPR) in Europe gives stringent protections for personal data and for the processing of sensitive personal data, like medical records and children’s records.
There are severe sanctions, and organizations processing this data, whether private or public, must take appropriate security measures and ensure that they are processing it in accordance with the data protection principles. It isn’t really a legal vacuum, but a sort of societal vacuum.
When it comes to so many technologies, not just facial recognition, we let society test them. We don’t really know the impact of living with these technologies, what could happen if they fall into the wrong hands, or even the insidious harm of an almost 360-degree erosion of privacy.
There are laws, and you can normally adapt the enforcement of laws to most things that occur. What hasn’t really happened is a broader social conversation, involving not just policy wonks, lawyers and techies but a wider societal base, about whether this is a good direction to go in. If people don’t think it is, how do they roll back from where it is now?
The FBI has one of the largest facial recognition databases, made up of around 640 million photos from driver’s licences, passports and mugshots. Around one in three Americans are part of this database. What consequences does facial recognition technology have for privacy and freedom of expression?
It’s part of a general trend in which not just the state but also companies almost know more about people than they know about themselves. Everyone is living with their Alexa, Siri or other devices that are always listening to them.
If you go back to the Cold War era, the Stasi in East Germany was one of the most organized and pervasive surveillance organizations ever. It had tons of files and miles of documents in storage on almost everybody in the population, but it didn’t have a fraction of the information that is now available to free, Western, liberal democracies about every individual.
There’s not just the FBI database; going back to the Snowden revelations, there is also very wide-ranging sharing of data between private companies and governments. It’s a strange situation because, as a society, people know about this but don’t seem that bothered by it. It might be that societies are changing in what they’re prepared to put up with and in their trust in the state. Or it might be that they haven’t yet imagined, or been confronted with, the harm to individual liberties.
People are also more trusting of the integrity of computer systems, and that trust is not always well placed; data integrity is one thing that people don’t really talk about. If somebody decided to hack into a system and frame a person, or plant information about them digitally, hardly anybody would question it. Studies show that people are much more willing to obey a computer unquestioningly than they are to obey other people, which carries a lot of implications.
Facial recognition is much more advanced and prevalent in China, and companies like Alibaba, Tencent and Baidu hold vast amounts of personal data from China’s 1.4 billion people. How do public perceptions of tech and AI in China differ from those in Western countries?
I think there are different social norms and a different regime in China and so what’s normal is different to what might feel normal in the West. There are also a lot of parallels in the way technology is developing and in the way it’s being used.
There is a grab for as much data as possible. That makes sense when you think about what is required to move AI from being merely a claim of artificial intelligence to something that is genuinely workable: you have to train it on huge datasets. There’s a real incentive for governments, militaries and the private sector to gather as much data from as many different sources as possible so that these technologies can be developed.
There’s almost a bipolar reaction from both the press and individuals, where anything that’s new is automatically to be feared. That’s not a very intelligent narrative. There are many uses for AI and big data that could help solve a lot of the world’s problems, like tackling climate change, understanding the universe better and addressing basic cybersecurity defence problems that are susceptible to AI solutions.
At the same time, people adopt technologies almost unthinkingly. I worry about privacy online, yet I have facial recognition enabled on my phone because it’s really convenient. I think even this personal example shows that what people say and what they do don’t always match up.
There’s no doubt that, aside from those generally recognized patterns of fearing change, we’re living in an era of incredibly rapid technological advancement, the implications of which most people don’t understand. The governance of these technologies is also extremely weak, and there is no international governance covering all of them. I don’t know if there even could or should be, but where we are now seems very far from perfect.
Big conversations, particularly about AI and ethics, are starting, but are they really shaping or delaying the rollout of AI? It’s around people all the time, but they’re not really coping with it in a balanced way.
San Francisco recently banned the use of facial recognition technology by city police and other government departments. MIT and Stanford University research found that facial recognition algorithms are not always accurate and have a race and gender bias. How can these biases be countered?
The whole issue of algorithmic bias is becoming more recognized. One of the major issues is that these big technologies are being developed by a group of people who are not as diverse as society. I was looking at the Stack Overflow developer survey results, which cover around 90,000 developers a year. Less than one per cent of developers are in the over-50s age group. A very small minority are women and a very small minority are not white.
We were talking earlier about whether there is a legal vacuum. One of the things I’m interested in seeing develop out of the GDPR is whether there is a new right to be protected against algorithmic decision-making, or decisions made by a computer without any human intervention.
Right now, there is not much protection. The hype around big data, algorithms and AI is creating this illusion that these are infallible technologies governing everything. Often the reality is more like content moderation on the platforms: you get the idea that some amazing computer is determining whether something is hate speech, but it’s actually rooms and rooms of tens of thousands of people doing the worst and most damaging job.
Those workforces are really downplayed by the platforms, partly because their working conditions are abysmal and the protections for them are almost non-existent. It’s also because there is a real motivation to create this sense that the technology is far more advanced than it is.
A lot of the AI capability is not there yet. The days when you will have a humanoid robot that is indistinguishable from a normal person are still a long way off. It’s quite hard, and possibly pointless, to try to recreate a human.
There is a lot of overclaiming, a lot of bias and far-from-perfect diversity among the people creating these technologies. If you have a bunch of men under 30 having a massive societal impact, then the technologies they create will of course reflect their worldview.
It seems like there is a balancing act between protecting civil liberties and keeping people safe when it comes to law enforcement using facial recognition technology. How can a better balance be achieved?
I think more informed discourse about the technology and its impact is needed. A completely scaremongering approach to these things is not helpful either. The European courts, and even the human rights courts, have started to acknowledge that if you want modern law enforcement, it goes with the territory that they should be using the best technologies they can to keep everyone in society safe.
In fact, the human rights framework is strong and should offer the necessary checks and balances to ensure that, if you put powerful tools in the hands of law enforcement, there is adequate accountability, human rights are respected and access is granted only with a proper warrant.
But we know from countless examples that, despite those strong human rights frameworks, the accountability doesn’t always happen. It’s also about acknowledging the difficulties of the situation: if you’ve got criminal activity to protect your society against, you need those tools, but you also need to make sure they are applied with restraint.