Doubling Down on Diversity in Face Recognition AI
As always, SXSW was amazing. The few days I attended this year were so crazy, I left feeling like an inadequate human for not possessing the actual ability to be in more than one place at a time! Really, that much going on.
One thing I did have the chance to do was speak on a panel titled ‘Face Recognition: Please Search Responsibly’. I was happy to lend my thoughts to the subject, particularly on such a public platform, because at Kairos we are firmly rooted in our belief that the Face Recognition industry has a cultural responsibility to build technology that is respectful, trustworthy, and transparent.
As the technology is rapidly spreading into mainstream consciousness, issues around industry practices like bias and privacy are at the forefront of conversations around Face Recognition — and they should be.
What’s the problem?
The more we interact with machines, the greater the need to actively monitor and regulate functions that could compromise human safety and security.
A few weeks ago, in “Face Off: Confronting Bias in Face Recognition AI”, I responded to a study published by MIT indicating that African Americans, particularly women, are more likely to be misidentified by Face Recognition.
While we acknowledge that this kind of bias has the potential to be dangerous, we believe that rather than being an “unseen hand” version of systemic racism, it is the inevitable, yet reparable, consequence of how new the mainstream use of this technology still is. What does this mean? It means the machines have not yet been trained on sufficiently diverse data sets.
Face Recognition is the technological version of a biological capability. Our own human ability to recognize a person, or a thing, is directly influenced by the number of times we have already seen them. If the algorithms have not seen enough people of color (for example), they do not have a large enough reference base against which to compare an individual when asked to do so. We see this lack of diversity in the training data as the key variable behind the bias currently associated with Face Recognition.
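To make that concrete, here is a minimal sketch of how this kind of bias shows up in measurement. It assumes a toy set of same-person comparison scores labeled by demographic group; the data, the `false_non_match_rates` helper, and the 0.80 threshold are all hypothetical, purely for illustration. The takeaway is that a group the matcher has rarely seen during training tends to show a higher error rate:

```python
from collections import defaultdict

# Toy evaluation records: (group, genuine_score) pairs, where genuine_score
# is the matcher's similarity score for a same-person comparison (0.0 to 1.0).
# In a real evaluation these would come from a labeled benchmark set.
evaluations = [
    ("group_a", 0.92), ("group_a", 0.88), ("group_a", 0.95),
    ("group_b", 0.71), ("group_b", 0.83), ("group_b", 0.64),
]

THRESHOLD = 0.80  # a same-person pair scoring below this is a false non-match

def false_non_match_rates(records, threshold):
    """Compute the per-group false non-match rate (FNMR): the fraction
    of genuine (same-person) pairs that the matcher rejects."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for group, score in records:
        totals[group] += 1
        if score < threshold:
            misses[group] += 1
    return {group: misses[group] / totals[group] for group in totals}

rates = false_non_match_rates(evaluations, THRESHOLD)
for group, fnmr in sorted(rates.items()):
    print(f"{group}: FNMR = {fnmr:.0%}")
# A large gap between groups (here group_b misses far more often) is the
# measurable signature of the training-data imbalance described above.
```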
“One of the challenges that developers face is an absence of good diverse data sets… The idea that Kairos or anyone would take the initiative to create a diverse dataset and then make it available to anyone who needs that is incredible…” — Clare Garvie, co-panelist and associate at the Center on Privacy & Technology at Georgetown Law
How do we fix it?
As I spoke about this at SXSW, I became more passionate about my commitment to become part of the solution. Yet I am realistic: developing a data set truly representative of all of humanity, while possible, is a very tall order.
Gathering data sets that can be used to create a working “bias standard” will take time, and it will require us to lend our energy, talent, technology, and other resources to execute. As leaders in building confidence in our industry, we are committed to following through, because when people become distrustful of technology that can positively impact global culture, the consequences can be damaging to us all.
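As a purely illustrative sketch of one small piece of that work: given a collection of face images labeled by demographic group, the simplest way to keep any one group from dominating training is stratified sampling down to the size of the smallest group. The `balance_by_group` helper and the dict-based sample format below are assumptions for this sketch, not Kairos code, and in practice the better fix is collecting more data for underrepresented groups rather than discarding data:

```python
import random
from collections import defaultdict

def balance_by_group(samples, seed=0):
    """Downsample each demographic group to the size of the smallest one,
    so that no single group dominates the training set."""
    by_group = defaultdict(list)
    for sample in samples:
        by_group[sample["group"]].append(sample)
    target = min(len(members) for members in by_group.values())
    rng = random.Random(seed)  # fixed seed keeps the sampling reproducible
    balanced = []
    for members in by_group.values():
        balanced.extend(rng.sample(members, target))
    return balanced

# Hypothetical usage: each sample is a dict with an image path and group label.
dataset = (
    [{"path": f"a_{i}.jpg", "group": "group_a"} for i in range(1000)]
    + [{"path": f"b_{i}.jpg", "group": "group_b"} for i in range(200)]
)
balanced = balance_by_group(dataset)
print(len(balanced))  # 400: 200 per group
```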
At Kairos, we pride ourselves on building products that democratize access to Face Recognition technology. And if making it available to everyone is central to our business model, then making it safe and fair for everyone should be, too.
You can check out coverage of the SXSW panel discussion here and here.