Facing up to the power of AI
The DataKind UK book club explores facial recognition
By Christine Henry, DataKind UK Ethics committee lead
Facial recognition — what is it good for? That was the key theme running through DataKind UK’s book club in May. Thanks to engaged, thoughtful participants from across the private sector, academia, government and charities, the book club was a chance to dig into discussion and hear some new perspectives. This blog post sums up the participants’ discussion.
Facial Recognition in Context
The discussions covered everything from the big picture — how to ground a fundamental right to privacy, and what is psychologically different about using our faces versus fingerprints or phone tracking — to individual facial recognition use cases. Some participants raised potentially positive use cases, like disease detection, IDs for refugees, and airport security that is less biased than a real human having a bad day. But, on the whole, the negatives clearly outweighed the positives for most book clubbers.
The chance of blanket surveillance, infringement of privacy and liberty, and even weakening of democracy — would you attend protests or political meetings if you knew you could always be identified? — is troublingly high. In addition, attendees raised the problem of inaccuracy, which tends to hit vulnerable groups hardest — especially people of colour, as demonstrated in Joy Buolamwini’s work. Further, our ability to challenge decisions by corporations or governments is often limited, with poor accountability and a lack of transparent testing of these systems. The use of facial recognition by the Chinese government to target Uyghurs was also cited as an example of the technology furthering the persecution of disadvantaged minorities.
This entanglement of facial recognition, law enforcement, state and corporate power is one of the points we all could agree on. It’s impossible to look at the tech as only models in the abstract. Rather, we must situate facial recognition inside the systems where it’s made and used.
The face of indifference
Some book clubbers asked: if we think banning facial recognition (in most cases) is good for society, what then? There was concern around whether any facial recognition ban or regulation could be enacted or, if enacted, enforced. Some said the majority of the public in the UK isn’t particularly concerned about facial recognition and may even see it as a good thing for government (“catching terrorists”) and for “frictionless” convenience (paying at shops more quickly). Getting broad public support and political will for limiting facial recognition requires awareness of how facial recognition could be used in all its forms (whether for good or harm), including the potential impacts on less advantaged groups.
We also discussed an individual right to opt-out and whether that would be meaningful. As someone pointed out, this might mean that your data needs to be stored in order for the AI to know not to make use of it!
Moreover, opting out of facial recognition may not be enough to avoid identification. If an organisation decides it wants to track you, your phone or other biometrics such as your gait can be enough to identify you. Opting out may also look suspicious in itself, potentially triggering police action — as in the recent trial in the UK where a member of the public was challenged for covering his face.
As for enforcement, we discussed the low barriers to entry for a “good enough” facial recognition system. Luke Stark’s analogy of facial recognition as the new plutonium is powerful and even hopeful (because, over time, governments did act to decrease the risks from nuclear weapons and radioactive material), but one point made in our group was that plutonium was always far harder to access than some facial recognition data and models. Some of us had even built our own from public datasets and standard code, in university courses or MOOCs. Bans on government use may reduce some harms, but if private use continues, your (potentially very private) facial recognition data may still be accessible to others.
What do you think?
The aim of our new-for-2019 book club is to talk about some of the ethical and social issues in data science. Thanks to the great contributions from attendees! To find out more, see our open reading list of articles and live-tweeting from DK staff.
Our July 2019 Data Ethics Book club is going to look at Fairness — definitions, applications in machine learning and the technical tools, as well as some challenges and critiques of how the concept of fairness is being used and misused. Interested in joining us in London or online? Get in touch!