Buyer Beware: A hard look at police ‘threat scores.’

Police in Fresno, California, are piloting Beware, a proprietary commercial system that can automatically assign “threat scores” to local residents. The system’s judgments — which can be summarized as a red, yellow, or green threat level — aren’t based on police department or city records, but instead on information culled from commercial data brokers. According to promotional materials and public statements, these threat scores may reflect everything from criminal histories to social media activity to health-related history. When a high-priority 911 call comes through, analysts in a central office automatically see threat score information for the people who live at the location from which the call was made (together with some of the particular factors that played into the threat rating), and they can relay that information to the responding patrol officers.

After a brief but intensive dive into the available information surrounding the Beware system, I’ve found no public evidence — and little if any reason to suppose — that Beware’s threat scores and massive datasets are meaningfully more useful than the simple and timely delivery of relevant criminal history data to officers in the field. Without a rigorous independent finding on that front, no police department should pay for the privilege of using such a system — and no grant-making authority should subsidize its use — except as part of a carefully controlled trial.

Do threat scores make anyone safer?

In principle, a system like Beware might make people safer if it improved officer insight into which situations are high risk, and if it were deployed in a way that led to more cautious, deliberate responses. But a more troubling and similarly plausible possibility is that the system may trade on and amplify the longstanding fears felt by cops on the beat, prompting officers to see the public as more threatening and, consequently, sparking more frequent uses of force. The system’s marketing — starting with the “Beware” name — certainly implies that officers need to become more concerned.

But it’s not clear whether more officer fear will translate into more officer safety — partly because policing today is already a remarkably safe occupation. As Radley Balko has written, the risks associated with being a police officer are at or near all-time lows, even after accounting for the events of 2015. At the same time, as Samuel Sinyangwe observes, the escalating rhetoric of a “war on cops” creates the false impression of dangers out of control.

Regardless of whether more fear among officers makes officers safer, heightened officer fear clearly can make other members of the community less safe. As former Madison, Wisconsin police chief David Couper has written:

[T]he U.S. Supreme Court’s decision, Graham v. Connor has a lot to do with this problem. Their decision effectively permitted a police officer to legally use deadly force based on whether the officer reasonably believed his or her life was in danger; called “reasonable objectiveness.” Before this decision, officers were expected to use only the minimum amount of force necessary to overcome resistance. Add to this decision the fear every police officer has that he or she could be disarmed and shot you have a “perfect storm” of police using deadly force in almost any situation involving resistance.

The devil is in the details — information’s accuracy, tone, and context matter greatly in shaping whether it makes officers and members of the public more safe or less so. As Conor Friedersdorf observes at The Atlantic, Tamir Rice lost his life partly because a 911 dispatcher “failed to convey the caller’s observation that the male was ‘probably a juvenile’ and that the gun was ‘probably fake.’” On the other hand, Friedersdorf writes:

I can imagine a “smart” emergency-response system that incorporates publicly available data in a way that enhances public safety without infringing on civil liberties. If I had a schizophrenic son, I would love a way to register that fact with the emergency-response system so that dispatchers would know to send folks trained to deal with the mentally ill. In the event of a fire, I’d love for my local fire department to know automatically that my wife and I have a friendly dog in the house.

In short, it’s an open question whether Beware or systems like it actually help or hurt safety — both for officers and for the public.

We don’t even know what Beware is trying to predict.

The company selling Beware (Intrado, a unit of West Corp.) has disclosed little to the public about how the product works. In fact, even the police departments relying on the product have limited insight into its workings. But by piecing together the publicly available clues — public records that the ACLU obtained from Fresno, job descriptions and employee reviews, securities filings and a range of other sources — it’s possible to get a composite picture. And that picture is concerning.

Let’s start with a basic question: What, exactly, are these “threat scores” even trying to predict? What would it mean to say that the scores are, or are not, accurate?

None of the publicly available material I have seen offers a clear explanation of what the scores are intended to measure, much less provides meaningful evidence that the predictions are accurate.

Here are three possibilities:

  1. Maybe the scores try to predict how likely someone is to threaten or attack an officer. Beware’s marketing primarily emphasizes risks to officer safety. For example, the ACLU obtained a copy of a brochure Beware gave to the Fresno PD, pointing out that ambushes or surprise attacks staged by people previously arrested for violent crimes account for a large fraction of officer fatalities. A busy police executive might reasonably imagine that Beware’s threat scores represent the threat to officers. But Beware doesn’t just claim to be offering officers a timely heads up about prior violent arrests. It claims to derive “intelligent insight from billions of commercial records and thousands of website hits about people, places and properties,” thanks to “a comprehensive set of patent-pending algorithms.”
    Has Beware assembled the “commercial records” and “website hits” of people who actually have assaulted police officers, and found enough such people that it can predict, in a statistically responsible way, which out of thousands of consumer purchases or website visits make a person more likely to assault an officer? I doubt it. In 2014, the most recent year for which statistics are available, Fresno police answered 408,718 calls for service, and there were, remarkably, a grand total of only 363 assaults on the city’s officers. Even if those 363 assailants shared certain traits — perhaps they were more likely to use Twitter, or to buy cornflakes, than other Fresno residents — such trends are more likely to be spurious correlations than a meaningful statistical signal. Perhaps Beware uses national statistics on assaults or violent acts, which would offer a much larger sample size from which to draw conclusions. That, however, would also ignore variation across different localities.
  2. Maybe the scores try to predict violence against anyone while officers are on the scene — not just against the officers themselves. Even then, the numbers are vanishingly small: There were fewer than 3,000 “crimes against persons” in Fresno all year, and in many of those cases, police were dispatched because the crime had already occurred, rather than because it was imminent.
  3. Maybe the scores try to predict something else, even less relevant to officer safety, such as the likelihood that a person calling for service — or residing in a place to which the police have been called — will at some time in the future be arrested in connection with a violent incident at all (whether or not the incident involves the officer, and whether or not it occurs while the officer is on site). Of the three predictive possibilities outlined here, this is the most plausible target, because there’s a much larger sample size of people who’ve been arrested for violent crimes. However, it is also the least useful kind of prediction for the officer or dispatcher, since it is least focused on the actual situation they are dealing with.
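The base-rate problem in the first scenario can be made concrete with some simple arithmetic. The sketch below uses the Fresno 2014 figures cited above; the classifier accuracy numbers (90% sensitivity, 90% specificity) are hypothetical assumptions, and generous ones, chosen purely for illustration:

```python
# Base-rate sketch: why threat scores built on rare events mislead.
# Call volume and assault count are Fresno's 2014 figures;
# sensitivity and specificity are hypothetical, generous assumptions.

calls_for_service = 408_718   # Fresno calls answered in 2014
assaults_on_officers = 363    # assaults on Fresno officers that year

base_rate = assaults_on_officers / calls_for_service  # roughly 0.09%

sensitivity = 0.90   # assumed: share of true threats flagged "red"
specificity = 0.90   # assumed: share of non-threats correctly left unflagged

# Positive predictive value: of the calls flagged as threats,
# what fraction actually involve an assault on an officer?
true_positives = sensitivity * base_rate
false_positives = (1 - specificity) * (1 - base_rate)
ppv = true_positives / (true_positives + false_positives)

print(f"Base rate of assaults per call: {base_rate:.4%}")
print(f"Share of 'red' flags that are real threats: {ppv:.2%}")
```

Even under these favorable assumptions, well over 99% of the calls such a scorer flagged would involve no assault at all — which is what makes mining “billions of commercial records” for so rare an outcome statistically suspect.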

There may well be some facts about a person (like past convictions for violent crimes) that can meaningfully inform officers about the risk of a confrontation. And, it’s also easy to imagine just-so stories in which it could turn out that a seemingly innocent behavior (like buying a pressure cooker) is instead part of a dangerous or malicious plan (like making a bomb). But we still have no reason to believe that Beware, or any system like it, really can distinguish who is making a bomb, and who’s just making dinner.

Avoiding the “public eye.”

If Beware’s analysis of “billions of data points” is adding cost and complexity (and invading privacy!) without actually offering a useful improvement over much simpler methods, it wouldn’t be the first or only offender. A rigorous RAND study of other “predictive policing” tools — systems that predict geographic “hot spots” of future crime — found that “the increase in accuracy in moving from fairly simple algorithms to the most sophisticated and computationally intensive algorithms tended to be marginal.”

Police departments aren’t experts on algorithms, and each tech vendor is happy to imply that there’s a unique “special sauce” at the heart of its product — a sauce so secret that it cannot be explained to the police department using the system, or to the public, much less subjected to rigorous testing. Email correspondence released by the Fresno PD, for example, indicates that before a career detective on the Fresno force could even attend a training session about Beware, he had to sign a waiver (likely including confidentiality rules).

Another “big data” tool tested by the Fresno department was the social media monitoring service MediaSonar — which provided a “Keywords List” encouraging Fresno police to monitor online discussion of Mike Brown and uses of the word “dissent.” MediaSonar demanded that the police obfuscate their use of its service, and even offered advice on how to do so.

The effect of such maneuvers is to minimize the overlap between people who really understand what a system does, on the one hand, and those with the incentive and motivation to subject it to critical scrutiny, on the other.

“How does a person get to red?”

When the Fresno police briefed the city council about the Beware system, the council president asked a simple question about those color-coded threat levels: “How does a person get to red?” His police force, it turns out, doesn’t know, because the vendor won’t tell them. As the officer delivering the briefing explained: “We don’t know what their algorithm is exactly… We don’t have any kind of a list that will tell us, this is their criteria, this is what would make a person red, this is what would make a person yellow. It’s their own system.” Later in that meeting, one of the council members asked the officers to run a live threat assessment on the council member’s own home. It came back yellow, apparently indicating a medium threat in that home (though the council member himself came back green).

After the meeting, Fresno Police Chief Jerry Dyer “said he is working with Intrado to turn off Beware’s color-coded rating system” and is considering moving away from social media monitoring, the Washington Post reported. But, as Intrado’s literature makes clear, there is nothing special about using three colors as the format of its automated judgments — it can just as easily offer a number, or some other format. In fact, as the ACLU’s Jay Stanley explains, Beware already offers a spectrum of different delivery methods for its automated threat assessments:

[The] software can target an address, a person, a caller, or a vehicle. If there’s a disturbance in your neighborhood and you have to call 911, the company would have the police use a product called “Beware Caller” to “create an information brief” about you. Police can also target an area: a product called “Beware Nearby” “searches, sorts and scores potential threats within a specified proximity of a specific location in order to create an information profile about surrounding addresses.” So the police may be generating a score on you under this system not only if you call the police but if one of your neighbors calls the police.

Chief Dyer also suggested that the system might need more civilian oversight — but it’s hard to imagine a committee of civilians getting answers that apparently aren’t available even to the chief. He also suggested that he might instruct Intrado to rely solely on criminal history information — but did so with a “maybe” that fell far short of commitment.

“We’d like to act now.”

Given the way Beware’s algorithms are marketed, it’s worth asking: Is the company a magnet for talented engineers who are building uniquely valuable technology? It’s hard to be sure, but one signal may come from current and former employees. On the job information site Glassdoor (which lets people describe their experiences applying to or working for various employers), 94 users claiming to be current or former employees of Intrado paint a picture of a struggling business. One typical account, provided last month by a user claiming to be a software engineer at the company’s Longmont, Colorado headquarters, said in part: “Severe lack of tactical planning in many critical areas has led to many working on nothing but putting out constant fires. Marketing and Sales are the top dogs, with Product Management and Engineering — what and how we build our products, at the bottom.” Several say that Intrado’s corporate parent, West, has put Intrado under intense financial pressure to meet short-term revenue targets.

Despite all the questions raised above, Beware may indeed have a bright near-term future in the law enforcement marketplace. Several of the email exchanges released by Fresno focus on company representatives’ efforts to develop “boilerplate” language that police departments can cut and paste into state and federal grant applications, for grants to fund Beware deployments. As one of the emails noted, “with appetite to award these type grants [sic] growing, we’d like to act now. (There is growing interest in Big Data applications as this science is becoming more popular).”