What’s past is prologue, present, and future

Predictive police technologies create the future they foresee by extending the present.

[Image: “The True Prophecies or Prognostications of Michael Nostradamus” (1672), open to an illustration of Nostradamus at his desk and a title page naming him “Physician to Henry II. Francis II. and Charles IX. Kings of France.”]

“It’s tough to make predictions, especially about the future.” — Yogi Berra

One day in 2013, a strange and alarming thing happened to Robert McDaniels. A group of police officers showed up at his door and informed him there was a high probability he would be involved in a shooting. They couldn’t say whether he would be the shooter or the one shot, but because of who he was and where he lived — even though he had no history of violent crime — he was at risk. As a result, he was subjected to constant surveillance, with officers repeatedly showing up at his workplace and around his neighborhood.

Robert did end up being involved in a shooting, but not in the way police anticipated. To some in his neighborhood who were distrustful of law enforcement, Robert’s constant interaction with police officers was suspicious. They worried he was a police informant. On two separate occasions, Robert was shot by neighbors who took that police attention as a sign he was a “snitch.”

Robert was one of the first people to be included on the Chicago Police Department’s “Strategic Subject List,” a now-defunct CPD program to determine which people were “at risk” as victims or perpetrators of violent crimes. The CPD developed the list using an algorithm that ranked an individual’s “risk” level based on factors like criminal history, whether they had been shot, and other police data. In Robert’s case, the algorithm had seemingly “predicted a shooting that wouldn’t have happened if it hadn’t predicted the shooting.”

Technology critic and writer Michael Sacasas powerfully conceptualizes prediction as “a phantom of ourselves that ventures into the future, acts on our behalf, and whose actions have consequences we must bear.” Prediction is “not so much the work of reporting as it is the work of producing” — it does not exist in a vacuum, and its existence impacts the present, which in turn impacts the future it purports to forecast.

The same is true for predictive law enforcement technologies. When predictive policing labels a particular neighborhood as “dangerous,” or a risk assessment algorithm scores an individual as “at risk,” these designations prime law enforcement to treat that neighborhood or person accordingly. This effect compounds with police officers’ implicit racial bias, which makes them more likely to associate a Black person with the threat of danger or violence. When law enforcement labels a person “dangerous,” and that label is in turn used to justify higher bail or a lengthier sentence, for instance, it can actually increase recidivism rates, which are then used to retroactively justify the “dangerous” label. The potential future these algorithms portray changes how police act in the present, creating a self-fulfilling prophecy that aligns reality with the machine’s prediction.

Predictive law enforcement technologies codify Sacasas’ phantoms of ourselves as algorithms that inform critical decisions about people’s rights and liberties. The phantoms these algorithms generate bring the conditions of the past and present, however inequitable, into the future. They reinforce existing disparities, by race and class, in arrests, sentencing, convictions, and parole. These specters are not otherworldly — they are built by private companies with little to no public insight into how they work.

Because these technologies exist within the context of an inequitable criminal legal system, the people using them will reproduce its inequities. When risk assessment algorithms, for example, calculate an individual’s supposed likelihood of being arrested again in the future, that calculation also factors in future biased policing. Low-income communities of color are policed more heavily and harshly, and so people in those communities are more likely to be re-arrested. That bias is then reflected in the risk assessment algorithm’s calculation.

In light of a growing backlash against potentially unconstitutional “predictive” technologies, developers have begun rebranding their algorithms as technologies of “risk” management. But there’s a deeper question here about what kind of risk, and risk to whom, these algorithms are managing. Police use them to estimate very specific, narrow kinds of risk — the risk that a person will be arrested, for example, or that an area will be the site of an arrest. Police do not use them to estimate the risk that results from an individual’s interaction with the criminal legal system — that a traffic stop will lead to a civilian death, or that a longer criminal sentence will lead to a higher chance of rearrest later on. By anticipating and managing only the former type of risk, and excluding the latter, law enforcement will continue to engage in measures — such as more police interaction with civilians and lengthier criminal sentences — that disproportionately harm low-income people of color.

Robert McDaniels experienced this first-hand. The “risk” that he would be involved in a shooting came true, but only by virtue of police designating him “at risk” in the first place. The algorithm was “accurate,” but only because its calculation compelled police to act in a certain way, which brought about this outcome. In this way, algorithms of prediction themselves shape the future: by taking data from the past, however inequitable, and making forecasts that change behavior in the present.

Jameson Spivack is an associate with the Center on Privacy & Technology at Georgetown Law.
