The Rise of Facial Recognition Technology

Impossible
5 min read · Mar 11, 2020

Keeper of the peace, or eroder of human rights?

Surveillance cameras aren’t new. CCTV began to spread from military to commercial use in the 1950s, and is now widely used to protect private businesses and homes. Accepted by many as a legitimate way to deter criminals, surveillance cameras have become part of the scenery. Most of us have no idea how many times we are captured going about our daily lives: in shopping centres, at train stations or while walking down the high street. The general consensus seems to be that if we’ve got nothing to hide, then we have no need to worry.

But civil rights groups like Liberty and Big Brother Watch believe that we do have reason to worry and should be very wary of ‘sleepwalking into a surveillance state’. Facial recognition has been gaining traction with the police as a way of picking faces out of a crowd to assess against a growing database. Advances in technology mean that your face is now biometric data and can be used in much the same way as a fingerprint.

Big Brother Watch describes the process as follows: “The face of each and every person passing by an automated facial recognition camera will be scanned and analysed, effectively subjecting everyone within view to a covert biometric identity check.”

Worryingly, your image can be captured without your knowledge or consent. The Police National Database stores approximately 20 million faces. The rules surrounding the storage of this data are a grey area, with no clear laws on how long the data can be kept or how it will be deleted (unlike fingerprints, which must be deleted if there is no conviction). Many of those 20 million people have never been charged or convicted. According to Martha Spurrier, the director of Liberty: “There is no basis in law for facial recognition, no transparency around its use and we’ve had no public or parliamentary debate about whether this technology could ever be lawful in a democracy.”

Paul Wiles, the biometrics commissioner, issued a report which warned of the difference between using images as evidence for custody purposes and using them to recognise individuals in a public place. The lines are being blurred with modern surveillance techniques.

The police maintain that facial recognition is a powerful law enforcement tool and a good deterrent for anti-social behaviour; but who gets to decide what constitutes anti-social behaviour? In March 2018, police deployed facial recognition technology at a protest for the first time. With Extinction Rebellion and Greenpeace now on a list of ‘extremist ideologies’ as part of the government’s anti-radicalisation strategy, Prevent, there is growing concern amongst human rights groups that innocent people are being deterred from exercising their right to protest and their right to ‘freedom of assembly’.
And for marginalised groups, the stakes are much higher. Multiple studies have found that facial recognition technology disproportionately misidentifies people of colour and women. There was huge contention around the use of the technology at the Notting Hill Carnival in 2017, the largest annual Afro-Caribbean event in Britain, where reports suggest it was inaccurate 98% of the time.

Stafford Scott, of the anti-racism charity the Monitoring Group, called it “racial profiling”. He explained, “A technique they use for terrorists is going to be used against young black people enjoying themselves.”

According to Silkie Carlo, director of Big Brother Watch, “The notion of doing biometric identity checks on millions of people to identify a handful of suspects is completely unprecedented. There is no legal basis to do that. It takes us hurtling down the road towards a much more expansive surveillance state.”

Of course, facial recognition doesn’t only have to be used for surveillance. As the threat of hacking seems to be growing, biometric data is being increasingly used as a way for people to identify themselves. There are a growing number of companies using iris technology as a ‘living password’, where a user’s iris is scanned to prove their identity, also saving them the trouble of remembering multiple passwords. Many smartphones have integrated some form of this technology, for example to unlock the phone, and in some cases even to authorise online payments.

Iris scanning is recognised as the most accurate form of biometric identification. It is already being used in some airports around the world to identify travellers, and is expected to be rolled out further over the coming years, to improve security and make lengthy queues at immigration a thing of the past.

Iris Scanning

Banks have also embraced biometric data to improve security and enhance the customer experience.

Last year NatWest became the first high street bank to allow people to open a bank account with a photo ID and a selfie, in a process it says can take as little as four minutes. It also released a biometric fingerprint credit card, which means no PIN is needed. Most importantly for people concerned about privacy, the fingerprint never leaves the card and isn’t shared with any outside party.

People are unlikely to complain about advances in technology that increase their online security and save them time; however, there need to be very clear guidelines on how biometric data is collected, used and stored, so that consumers feel in control. As the saying often attributed to Voltaire goes, “With great power comes great responsibility”. Those handling our extremely personal and sensitive data must do so with transparency and according to the laws that have been put in place to protect our human rights. There can be no room for grey areas.

Click here to read our article ‘A Brief Exploration into Transhuman Tech’

Impossible

We are a team of designers, engineers, consultants and communicators who have a passion for preserving our world. www.labs.impossible.com