Why Facial Recognition Problems Can’t Be Ignored

John Murray
Primalbase
5 min read · Jun 28, 2019


If you’re reading this on your phone, what make and model are you using? If it’s a leading smartphone from the last couple of years, more likely than not you unlock the screen with Face ID or a similar facial recognition feature, and it’s a pretty seamless experience.

Yet while it is now enmeshed in our everyday lives and may seem to be a mature technology, the wider use of facial recognition poses a significant threat. Such systems can identify any person, for any purpose, and that capability is opening the door to problems that are drawing widespread attention. These problems are real, and they go beyond the moral implications of how people intend to use the technology: the deeper danger may be that the technology is not actually ready yet.

Inaccuracy

Accuracy in facial recognition would, on the surface at least, appear to be a solved problem. My iPhone’s Face ID works like a charm in a fraction of a second (minus those initial morning unlocks in bed, where my face is contorted by squinting at the screen). But this success rate is largely because my face is the only one stored on my phone. There is no database being matched against; it’s just me who has to pass the Face ID test.

Move beyond the walled garden of an individual phone, though, and the problem of inaccurate facial recognition becomes apparent. Facial recognition systems are only as good as the data being fed into them, and that quality is woefully lacking in many systems being rolled out today, most notably in law enforcement. The Metropolitan Police used facial recognition at the 2017 Notting Hill Carnival in London and reported a 98% failure rate, with 102 false flags at the event.
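A failure rate like that is less surprising than it sounds once you account for the base-rate effect: when almost no one in a crowd is genuinely on a watchlist, even a system that is right nearly every time per comparison will produce mostly false flags. A back-of-the-envelope sketch illustrates this; the crowd size, watchlist size and error rates below are assumptions chosen for illustration, not the Met’s actual figures.

```python
# Illustrative sketch: why one-to-many matching yields mostly false
# positives when genuine matches are rare (the base-rate effect).
# All figures below are assumptions, not real Met Police parameters.

def flag_counts(crowd_size, watchlist_hits, tpr, fpr):
    """Return (true_flags, false_flags) for a crowd scanned against a watchlist.

    crowd_size     -- people scanned
    watchlist_hits -- how many of them are genuinely on the watchlist
    tpr            -- true positive rate (sensitivity) of the system
    fpr            -- false positive rate per innocent person scanned
    """
    true_flags = watchlist_hits * tpr
    false_flags = (crowd_size - watchlist_hits) * fpr
    return true_flags, false_flags

# Assume 100,000 attendees, 20 genuinely wanted, a 90%-sensitive system
# and a 0.1% false positive rate -- seemingly excellent numbers.
tp, fp = flag_counts(100_000, 20, 0.90, 0.001)
precision = tp / (tp + fp)
print(f"true flags: {tp:.0f}, false flags: {fp:.0f}, precision: {precision:.1%}")
```

Even under those generous assumptions, roughly five out of every six flags are wrong, which makes the failure rates reported at large public events easier to understand.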

The FBI has reported better success rates, but not at a level that should inspire confidence in a system used to identify suspects in cases that demand certainty. In a 2017 Congressional hearing, Kimberly J. Del Greco, Deputy Assistant Director of the FBI’s Criminal Justice Information Services Division, stated that the bureau’s systems had achieved only an 86 percent accuracy rate.

When race comes into play, the shortcomings of current facial recognition systems become even more apparent. A 2018 report from MIT and Microsoft researchers found that facial recognition systems performed well on white men but were far less reliable at identifying women and people of colour, with darker-skinned women the most misclassified group, at an error rate of 34.7 percent. A theorised root cause is, again, the quality and diversity of the images being fed into these systems, a problem that extends across the machine learning research and development community.
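Part of why such disparities go unnoticed is that a single headline accuracy number can average them away. The sketch below shows how an aggregate error rate can look respectable while one subgroup fails a third of the time; the group sizes and per-group rates are assumptions loosely echoing the pattern of the MIT findings, not the study’s actual test data.

```python
# Illustrative sketch: an aggregate accuracy figure can hide large
# per-group disparities. Group sizes and error rates are assumptions
# for illustration, not the real benchmark's composition.

test_set = {
    # group: (number of test images, misclassification rate)
    "lighter-skinned men":   (1000, 0.008),
    "lighter-skinned women": (400,  0.070),
    "darker-skinned men":    (300,  0.120),
    "darker-skinned women":  (200,  0.347),
}

total_images = sum(n for n, _ in test_set.values())
total_errors = sum(n * err for n, err in test_set.values())

# The blended number looks acceptable...
print(f"aggregate error rate: {total_errors / total_images:.1%}")

# ...until the per-group breakdown is reported separately.
for group, (n, err) in test_set.items():
    print(f"{group:>22}: {err:.1%}")
```

Because the best-served group also dominates the test set, its low error rate drags the average down, which is exactly why per-group evaluation, not just overall accuracy, is needed before deployment.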

A Lack of Regulation and Oversight

Despite the lack of diversity in the training data used in facial recognition systems, there is no shortage of data itself. Governments’ and private companies’ recognition of the power and potential of these systems has been one of the catalysts for their rapid development. As a result, cohesive regulation has not kept pace, leading many to see facial recognition development as a Wild West landscape.

A 2016 report by Georgetown Law’s Center on Privacy & Technology revealed that US law enforcement had been harvesting images of American citizens for use in facial recognition systems, while this year the FBI revealed that it held over 640 million photos in its database. The UK has suffered its own problems thanks to this lack of regulation. The Police National Database hosted 13 million faces in 2014, including individuals cleared of any offence. Furthermore, a 2015 Home Office report concluded that up to 40% of these images were duplicates, which paints a picture of a badly constructed and potentially dangerous system being operated by a police force.

The Rapid Spread of the Technology

A lack of regulation in the facial recognition sphere inevitably means a lack of solid figures on how many functioning platforms already exist, and whether or not they share databases. There are public indications of crossover between public and private entities in this field, such as the recent attempt by Amazon shareholders to veto the company’s sale of facial recognition technology to US police forces, but smaller companies and startups operating in the field face far less scrutiny.

The technology needed to build facial recognition systems has fallen in price over the past few years, while attainable computational power has increased and training images have become easier to acquire. Thanks to these factors, the market for facial biometrics has been estimated to reach $375 million by 2025.

As the technology continues to spread at a rapid pace, the problems described above could grow in scope unless more considered oversight is established. Facial recognition is already being brought into airports and schools, with Chinese deployments even being linked to the country’s controversial ‘social scoring’ system.

The positive potential of facial recognition also continues to grow, but there is a real need for all parties concerned to take its significance in society more seriously: private citizens, who may not be aware of the level of government investment in the technology, and governments themselves, which should recognise the need for more concerted standardisation of their systems.

John Murray
Senior Editor at Binary District, focusing on machine learning, AI, quantum computing, cybersecurity, IoT