PART 5: Predictive Policing and Algorithmic Transparency as Anti-Discrimination

How to structure more transparent and fairer solutions

David Chang
Aug 5, 2017

By fairness, then, I suggest a normative principle akin to “the original position” articulated by Rawls — the veil of ignorance. In an environment girded by such a principle, transparency is fairness is accountability. Building interpretable results with a clear understanding of how inputs influence the decision-making process increases the potential for users to be treated equally and freely. The effects of incarceration and wrongful sentencing — the consequences of arrest and of being labeled an offender — are largely detrimental to re-entering normal life and seeking employment[53]; all citizens should have an equal opportunity in accordance with Rawls’ second principle of justice[54].
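
To make the link between interpretability and fair treatment concrete, here is a minimal sketch (not any deployed system; the feature names and data are hypothetical placeholders) of an interpretable risk model whose weights show how each input pushes the score, so that the influence of any input can be inspected and contested.

```python
# Minimal sketch of an interpretable risk model.
# Feature names and data are hypothetical, not from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_arrests", "age_at_first_contact", "neighborhood_stop_rate"]

# Toy training data: rows are individuals, columns match feature_names.
X = np.array([
    [0, 25, 0.1],
    [3, 17, 0.6],
    [1, 30, 0.2],
    [5, 16, 0.8],
    [0, 40, 0.1],
    [2, 19, 0.7],
])
y = np.array([0, 1, 0, 1, 0, 1])  # hypothetical "reoffended" labels

model = LogisticRegression().fit(X, y)

# Transparency here means the contribution of each input is inspectable:
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight = {coef:+.3f}")
# A reviewer can then ask whether a heavily weighted feature (for example,
# neighborhood_stop_rate) measures behavior, or merely biased enforcement.
```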

It is possible, too, that predictive policing algorithms themselves can increase transparency. Adam Gelb, director of the Public Safety Performance Project at The Pew Charitable Trusts, says that using risk assessment metrics in court creates a record of how officials make decisions:

“Anything that’s on paper is more transparent than the system we had in the past. In many cases, you had no idea from probation officer to probation officer, let alone from judge to judge, what was in people’s heads. There was no transparency, and decisions could be based on just about any bias or prejudice.”

By this, Gelb posits a sort of intermediate transparency. He implies accountability in the decision-making of authority figures (i.e. the judge) — but the algorithm itself adds no accountability if it remains opaque.

As we saw with Loomis and his Wisconsin Supreme Court decision, and from the FiveThirtyEight article, if we are to design solutions for potential bias in predictive policing systems, we need to do so by considering transparency (and through it, interpretability). Algorithmic transparency forces accountability, which is crucial when the labels that risk assessment algorithms use to categorize offenders are supremely powerful, resist change, and are largely impermeable. The trade-offs mentioned earlier as benefits of predictive policing systems thus pale in comparison to how the rights of individuals are being threatened. Individuals did not get to opt in to having their social media, or the criminal statistics of a region as a whole, used against them. It places an unfair burden on individuals to behave consistently with rules they might not agree with just to “game” an algorithm — especially given that such rules often play on implicit bias. I think we can all agree that it is unfair to expect people to be “civil” at all times, even across all social media, in the same way that it is unfair that black pedestrians are disproportionately stopped without being arrested. Social media should not be a “hot spot” where the police can find the seeds of crime.

It is imperative that law enforcement agencies that utilize these AI systems move away from features that infringe on these rights, and think about the power they exert over the populations they are meant to protect. Specifically, officers should not use systems that base their recommendations for hot spots and likely crimes on historically biased data. Perhaps police can feed the system more accurate, recent data sets, or move away from making hot spots a launching pad for patrols, with the intent to de-escalate. A thorough understanding of the control and treatment populations would do much to advance progress in this area and to assess the reliability and validity of predictive software[55].

In any case, normative principles about what data is relevant to crime, as well as how criminality should be predicted, should be established in a fair and transparent way. This fairness should be understood as a manifestation of a Rawlsian equal distribution of social values[54]. Such a social order is key for predictive policing algorithms, because the variables these algorithms rely upon proxy too closely for race, and yet are the only ones that produce accurate and interpretable outputs. We must consider the effects that these outputs may have on us — i.e. reinforcing existing biases à la the “broken window”/“results-oriented” mindset — and understand that algorithms often incorporate human bias, both through the proxy effect mentioned above and through the human error involved in choosing predictors. We must refuse the illusion that algorithms are “objective” simply because they find surprising patterns in large quantities of data.
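
As a rough illustration of the proxy effect, the sketch below (hypothetical data and feature names, not drawn from any real system) checks how well a supposedly neutral feature predicts a protected attribute. A feature that predicts group membership with high accuracy will smuggle that attribute into a model even when race is never an explicit input.

```python
# Minimal "proxy check" sketch with synthetic, hypothetical data:
# before using a feature in a risk model, measure how well it predicts
# a protected attribute on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical protected attribute (1 = member of an over-policed group).
protected = rng.integers(0, 2, size=n)

# Hypothetical candidate feature: arrest rate in the person's home ZIP code,
# which in this toy example is driven partly by group membership.
zip_arrest_rate = 0.3 * protected + rng.normal(0, 0.1, size=n)

# How well does the candidate feature alone predict the protected attribute?
proxy_auc = cross_val_score(
    LogisticRegression(),
    zip_arrest_rate.reshape(-1, 1),
    protected,
    scoring="roc_auc",
    cv=5,
).mean()
print(f"Proxy strength (AUC of feature -> protected attribute): {proxy_auc:.2f}")
# Values near 0.5 mean the feature carries little information about the group;
# values near 1.0 mean the "neutral" feature is effectively a stand-in for race.
```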

Link to Part 4

References

[53] Geller, Amanda, Irwin Garfinkel, and Bruce Western. “The effects of incarceration on employment and wages: An analysis of the Fragile Families Survey.” Center for Research on Child Wellbeing, Working Paper 2006–01 (2006). <http://www.saferfoundation.org/files/documents/Princeton-Effect%20of%20Incarceration%20on%20Employment%20and%20Wages.pdf>.

[54] Rawls, John. A Theory of Justice, revised ed. Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 1999. pp. 52–53.

[55] Perry, Walter et al. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation, 2013. p. 94.

