Feedback loops and bias

johny cabrera
Published in unpack
2 min read · Oct 19, 2020

Feedback loops can carry biases and need to be used carefully. One example of this is the use of recidivism models, which has been discussed widely. I was reading the research article "The accuracy, fairness, and limits of predicting recidivism" by Julia Dressel and Hany Farid, published in Science Advances. These algorithms predict the likelihood of an inmate committing another crime. In the paper, the authors compare the COMPAS tool against human judgment and conclude that it is no more accurate than predictions made by people with no criminal justice expertise. It has been argued that the use of big data together with machine learning will increase the accuracy of the predictions, since the bias will be reduced.

A good example of bias produced by feedback loops is given by David Blaszka in a reflection on dangerous feedback loops in machine learning. The example shows that if we train a recidivism model using as features the location, sex, probability of family members going to jail, and crime committed, the model will be biased toward specific races and ethnicities. For a further explanation of how this particular bias is produced, I recommend reading David Blaszka's article, or the report by Nicol Turner Lee, "Algorithmic bias detection and mitigation: best practices and policies to reduce consumer harms," which explains how the use of big data and machine learning can amplify human bias.
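To make the feedback-loop mechanism concrete, here is a minimal toy simulation of my own (the neighborhoods, rates, and allocation rule are hypothetical assumptions, not taken from the article). Two areas have the same true crime rate, but the "model" sends patrols wherever the recorded data already shows more crime. Since only patrolled areas generate new records, a small historical imbalance locks in and grows:

```python
import random

random.seed(7)

# Hypothetical setup: two neighborhoods with the SAME underlying crime rate.
TRUE_RATE = 0.1
# A small historical imbalance in the arrest records.
recorded = {"A": 11, "B": 10}

for day in range(365):
    # The "model": patrol whichever area the data says is riskier.
    target = max(recorded, key=recorded.get)
    # Crime is only recorded where patrols are present; 10 encounters per day.
    recorded[target] += sum(random.random() < TRUE_RATE for _ in range(10))

print(recorded)
```

After a year, all new records come from neighborhood A, so the model's belief that A is riskier looks increasingly "confirmed" by its own data, even though both areas are identical. This is the amplification that Blaszka and Turner Lee describe.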

In general, I think that because machine learning uses data that has bias, the output of the model can have bias as well. This can have an impact on the decision-making processes where it is used, including the criminal justice system, financial institutions, and other areas. This is an area of research that is quite active today, and I would be interested to understand how this bias is corrected.
