Amazon is Right: Thresholds and Legislation Matter, So Does Truth

Joy Buolamwini
4 min read · Feb 7, 2019


Today, my op-ed on racial and gender bias in facial analysis and recognition technology was published by TIME magazine. Again, I highlighted that Amazon, like its peers, demonstrated racial and gender bias in the Amazon Rekognition gender classification feature we audited in August 2018. We conducted this audit months after sending preliminary audit results to the company in June 2018 and receiving no response at the time. The methodology for the audit study, the parameters used, and full instructions to replicate the dataset have been available since last year. In fact, this information has been available since 2017, when I published my MIT thesis on the subject.

Hence Amazon’s claim that “These groups have refused to make their training data and testing parameters publicly available, but we stand ready to collaborate on accurate testing and improvements to our algorithms, which the team continues to enhance every month…” is a lie as it relates to the Algorithmic Justice League’s Gender Shades and follow-up study.

Since our paper came out, Amazon has tried to undermine the study, repeatedly updating a corporate blog post as I countered its misleading statements one by one.

Biometric Update summarizes my response to the initial criticisms well, stating:

The first point of the rebuttal is a reiteration of the potential for abuse that even the most accurate facial recognition technology has. This potential includes the use cases that support mass surveillance, or discrimination of certain groups, Buolamwini argues. To Wood’s assertion that the study results are misleading because they are based on facial analysis rather than facial recognition technology, she contends that all systems analyzing faces need to be assessed for harmful bias, and that the two technologies are related in a very significant way.

“The failure to even detect faces of color in the first place has been a major problem for studies around facial analysis technology, because often these studies are based on results on faces that were detected,” she writes.

The lack of information about the dataset used by Amazon for its algorithm tests, and in particular a lack of information about its demographic breakdown, prevents assessment of its bias, or lack thereof, according to Buolamwini. She also notes that while confidence scores can be helpful, they can simply indicate confidence in a falsehood, and that NIST research shows that those who use facial analysis often use the default settings. The response says Amazon’s assertion that its newest algorithm was not used ignores the fact that older versions are sometimes used, and Buolamwini asks what the adoption rate is for the new version. To criticisms that attribute or gender classification is not relevant to law enforcement considerations, she points out that it can be used to narrow search fields, and that NIST even provides information about how the technology can be used by police. — Biometric Update
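To make the detection point above concrete, here is a minimal sketch in Python. The numbers and function names are hypothetical, chosen only to illustrate the arithmetic: if accuracy is reported only over faces that the system detected, failures at the detection stage disappear from the headline figure.

```python
# Hypothetical illustration: accuracy reported only on detected faces
# can hide failures that happen earlier, at the detection stage.

def detection_conditioned_accuracy(detected_faces, correct_on_detected):
    """Accuracy as often reported: correct classifications / detected faces."""
    return correct_on_detected / detected_faces

def end_to_end_accuracy(total_faces, correct_on_detected):
    """Accuracy over every face in the benchmark: a face that was never
    detected can never be classified correctly."""
    return correct_on_detected / total_faces

# Made-up numbers for one demographic group where detection itself fails often.
total = 1000      # faces of this group in the benchmark
detected = 700    # faces the system managed to detect at all
correct = 665     # detected faces whose attribute was classified correctly

print(detection_conditioned_accuracy(detected, correct))  # 0.95 -> looks strong
print(end_to_end_accuracy(total, correct))                # 0.665 -> much weaker
```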

The problem for Amazon is that it is in a losing battle with the truth. In an earlier post, I responded to the company’s erroneous claims about the study, which has been replicated by peers like Microsoft and IBM, leading to substantial industry change.

I also explained that Amazon’s statements pertaining to certain settings for the tool have been inconsistent. Much ink has been spilled over the issue of confidence thresholds. The following Twitter posts provide an overview.
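For readers unfamiliar with how confidence thresholds work in practice, the sketch below uses hypothetical match data, not output from any particular service. It shows why default settings matter: the candidates a deployer sees at a service’s default threshold can differ sharply from those returned at a stricter one, and a high score is still only confidence in whatever the model outputs, which may be a falsehood.

```python
# Hypothetical face-match results, shaped loosely like the output of a
# facial recognition API: each candidate match carries a confidence score.
matches = [
    {"person_id": "A", "confidence": 99.1},
    {"person_id": "B", "confidence": 92.4},
    {"person_id": "C", "confidence": 81.7},
    {"person_id": "D", "confidence": 76.3},
]

def filter_matches(matches, threshold):
    """Keep only candidate matches at or above the confidence threshold."""
    return [m for m in matches if m["confidence"] >= threshold]

# A deployer who never changes an assumed default threshold of 80 sees three
# candidates; raising the threshold to 99 leaves only one.
print([m["person_id"] for m in filter_matches(matches, 80)])  # ['A', 'B', 'C']
print([m["person_id"] for m in filter_matches(matches, 99)])  # ['A']
```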

There have been a number of calls for legislation on facial analysis and recognition technologies, as Matt Cagle of the ACLU of Northern California summarizes in the following tweet:

Last year, I wrote an op-ed for the New York Times on the Dangers of Facial Analysis Technology that also called for legislation. Given the repeated demonstrations of bias in facial analysis and recognition tools and the lack of transparency from companies like Amazon, I wholly support a moratorium on the use of the technology. Lives are on the line. We do not fully understand the risks of these tools. Unproven, unregulated, and unwanted by many communities, they should not be in the hands of the police. For companies that want to commit to the ethical and responsible development of facial analysis technology, we have launched the Safe Face Pledge at www.SafeFacePledge.org.

Since Amazon says it is committed to responsible use, I invite the company to become the first major tech company to sign on. The pledge prohibits lethal applications of the technology, requires public inspection and scrutiny for accountability, and prohibits police use where there is no legislation (which the company has called for). We are open to collaborating on ways to protect human life, dignity, and rights. We are not open to supporting any corporation in developing weaponized AI systems that include facial analysis and/or recognition capabilities.
