Signal 4: Humans need implicit bias training, and so do machines.

Mei Guan
Civic Analytics 2018
Oct 20, 2018

Scientific American describes "implicit bias" as the "tendency for stereotype-confirming thoughts to pass spontaneously through our minds." When humans with implicit biases train algorithms like COMPAS to "assist" judges at sentencing, we end up with explicitly biased machines.

ProPublica's "Machine Bias" investigation broke COMPAS down into human terms. COMPAS issues a recidivism score: a higher score supposedly corresponds to a higher probability of returning to prison, and defendants with higher scores tend to receive longer sentences.

ProPublica reviewed the scores of 10,000 criminal defendants in Broward County, Florida, comparing the people COMPAS predicted would be re-arrested with those who actually were. The findings were bleak: the predictions failed differently for Black and white defendants. Black defendants who did not re-offend were labeled higher risk at a rate of 45%, compared with 23% for white defendants.
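ProPublica published the data behind this analysis, so the disparity above can be reproduced in a few lines of pandas. The sketch below is illustrative rather than ProPublica's exact methodology: it assumes their compas-scores-two-years.csv file (from their public compas-analysis repository) with its race, score_text, and two_year_recid columns, and it treats "Medium" and "High" COMPAS labels as a prediction of re-offense.

```python
import pandas as pd

# Load ProPublica's Broward County dataset (compas-scores-two-years.csv
# from their compas-analysis repository); adjust the path as needed.
df = pd.read_csv("compas-scores-two-years.csv")

# Treat "Medium" and "High" COMPAS labels as a prediction of re-offense.
df["predicted_high_risk"] = df["score_text"].isin(["Medium", "High"])

# False positive rate by race: the share of defendants who did NOT
# re-offend within two years but were still labeled higher risk.
did_not_reoffend = df[df["two_year_recid"] == 0]
fpr_by_race = did_not_reoffend.groupby("race")["predicted_high_risk"].mean()

print(fpr_by_race.loc[["African-American", "Caucasian"]])
# ProPublica reports roughly 45% vs. 23% for this comparison.
```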

These kinds of misclassification are impactful, especially because judges may rely on machines to be impartial. Sterilized by the "math," the scores create a false sense of security and justice. Before these tools are adopted more widely, we need to demand greater transparency, ideally open source. Just as humans have implicit bias, when we train machines on historical data we have to understand that the historical data was biased to begin with.

Works Cited:

Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Payne, Keith. “How to Think about ‘Implicit Bias.’” Scientific American, 27 Mar. 2018, www.scientificamerican.com/article/how-to-think-about-implicit-bias/.
