“In the end, it is all about power:” DataKind UK’s Coded Bias Watch Party

DataKind UK · 4 min read · May 21, 2021

By Laura Carter, DataKind UK Ethics Committee member

Photograph shows a screen reflecting a Black woman holding a white mask over her face. The screen appears to show face detection technology at work, with lines mapping the mask and the text 'Face detected' in the bottom corner
Image from Coded Bias

On 27 April 2021, the DataKind UK Ethics Committee hosted a watch party of the film Coded Bias. DataKind UK volunteers and friends watched the film live, chatting about it in a dedicated Slack channel, and then joined a panel discussion with Ivana Bartoletti, privacy law expert at Deloitte, visiting fellow at the Oxford Digital Ethics Lab, and co-founder of Women Leading in AI, and Francine Bennett, board member at the Ada Lovelace Institute and co-founder of DataKind UK. The discussion was moderated by DataKind UK Ethics Committee member Michelle Seng Ah Lee, PhD student in algorithmic fairness at the University of Cambridge and AI Ethics Lead at Deloitte.

Coded Bias, directed by Shalini Kantayya, explores bias and discrimination in data and technology, and how it impacts our lives: some of us more than others. The documentary starts with Joy Buolamwini and her discovery, while at the MIT Media Lab, that facial recognition software failed to detect her face because she is Black. Her findings prompted big tech companies to improve their facial recognition systems. The documentary goes on to explore further ways in which technology is biased. Technology, the film reminds us, is made by humans, and so it relies on human input, including human-written code and human-collected data. As a result, it 'learns' from biased human decision-making and can reproduce and accelerate that bias, at high speed and with little oversight.

As we watched the film, live discussion on the Slack channel covered topics including the use of facial recognition in the UK, such as at some Co-op supermarkets; the inaccuracies in these technologies; and campaigns against facial recognition in the UK by organisations like Big Brother Watch. Many participants came away from the film with a long list of books to read and people and organisations to follow, including Joy Buolamwini's Algorithmic Justice League; Weapons of Math Destruction by Cathy O'Neil; Automating Inequality by Virginia Eubanks; and data rights agency AWO.

Many of us mentioned feeling increasingly concerned about how much data is collected about us, and how poorly regulated facial recognition is compared with other forms of biometric data such as fingerprints and DNA. We also noted, with sadness, that many of the researchers featured in the film talked about expecting to be underestimated and discredited because they are women of colour.

Portrait of Director of Coded Bias Shalini Kantayya, alongside the comment “It’s never been more clear that the people who have been systematically missing from the conversation have the most to share with us about the way forward.”
Image from Coded Bias

The panel discussion picked up both of these themes. Ivana noted that it’s not by chance that the most vocal, thought-provoking leaders in this space are Black women. She commented “it’s because the impact of socio-political tech is dramatic on some people,” and, she argued, leadership has to come from the people who are most affected.

Francine expressed concern that regulations may not be applied in practice, and that they often include carve-outs: the proposed EU AI regulation, for example, still allows some uses of facial recognition technology in law enforcement. She noted that AI is a set of tools: "it's such a broad thing, there are patterns of harms, but it may not even make sense to have a pattern of regulation about it." Michelle pointed out that the EU proposal defines 'AI' as any and all statistical modelling, potentially bringing in many statistical models that currently fall outside what many people consider to be 'artificial intelligence.'

Both panellists were optimistic about the work happening in the tech ethics space. Francine talked about the benefits of bringing in expertise from many different disciplines, and Ivana noted the recent court case in Bologna in which Deliveroo's algorithm for calculating the 'reliability' of a rider was found to be discriminatory because it penalised workers for withholding labour for legally protected reasons, including childcare, illness, and the right to strike.

The panel finished by discussing transparency in the use of automated tools, and who makes the decision to deploy them. In theory, AI tools are available to many technologists, but the ability to deploy them at scale sits with big tech companies and big governments. Ivana expressed concern that current anti-discrimination law in the UK may not be sufficient, and supported the idea of a fitness test. She pointed out that last year's A-Level algorithm debacle was reversed because we knew an algorithm had been used, and asked what happens in cases where we don't know. Michelle noted that we need to have a proper conversation about what this tech is doing and how it aligns with our vision for the future. And Francine pointed out that, in the end, "it is all about power": who is in charge, and do they care about the impact of their decisions?

You can watch Coded Bias on Netflix in the UK. The film’s website has more resources on bias and discrimination in technology, including a discussion guide and an activist toolkit.

If you are interested in more data ethics events, or want to know more about the work of DataKind UK and how you can get involved, sign up for our newsletter to hear about what’s coming up!
