The ACLU conducted a test of Amazon’s facial recognition tool, Rekognition, and the software incorrectly matched 28 members of Congress with people who had been arrested for a crime. The ACLU reported that the false matches included a disproportionate number of people of color, including six members of the Congressional Black Caucus.

AI learns and amplifies bias. AI is not born to hate.

In the case of Amazon and Rekognition, did the system simply inherit the bias from the training data it was tasked to learn from?

If the criminal justice system carries heavy discrimination against people of color, Rekognition may simply have carried that discrimination over. The training data’s racism was passed down, taught early, to the next generation. The algorithm was not intentionally designed to be biased; it became yet another AI-as-a-mirror for our own failures and lingering, systemic bias.
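The inheritance mechanism can be made concrete with a toy sketch. Everything below is invented for illustration (the group names, rates, and threshold are assumptions, not Rekognition’s actual design): two groups have identical true behavior, but one is policed more heavily, so its historical arrest labels are inflated. A naive model that learns from those labels flags only the over-policed group.

```python
import random

random.seed(0)

# Hypothetical toy data: two groups with the SAME true risk,
# but group B is over-policed, so its arrest labels are inflated.
# All names and rates here are invented for illustration.
def make_records(group, n, true_risk, arrest_bias):
    records = []
    for _ in range(n):
        risky = random.random() < true_risk                   # actual behavior, same for both groups
        arrested = risky or (random.random() < arrest_bias)   # biased historical label
        records.append({"group": group, "risky": risky, "arrested": arrested})
    return records

data = make_records("A", 5000, 0.10, 0.00) + make_records("B", 5000, 0.10, 0.20)

# A naive "model": learn each group's historical arrest rate and flag
# any group whose rate crosses a threshold. This mimics the shortcut a
# real classifier can learn from group-correlated features in the data.
rate = {
    g: sum(r["arrested"] for r in data if r["group"] == g)
       / sum(1 for r in data if r["group"] == g)
    for g in ("A", "B")
}
flagged = {g: rate[g] > 0.15 for g in ("A", "B")}

print(rate)     # group B's label rate is inflated by biased policing
print(flagged)  # only group B is flagged, despite identical true risk
```

Nothing in the code hates anyone; the disparity lives entirely in the labels it was handed, which is the point of the paragraph above.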

Amazon, with all its resources, is not excused: the outcomes clearly show harm and the need for redesign, for proactive adjustments, and for cleaning the injustice out of its training databases.

If you give a child a book on eugenics with no context, telling them the book is what they must use to judge the value and match the identity of other people, what outcome can you expect but another Boys from Brazil?

Building ethical AI means facing history in all its forms.