Determining “Key” Metrics

Justice Innovation Lab
6 min read · Jan 25, 2024
JIL Data and Research Scientist Kevin Himberger leading a Data 101 training session for the Shelby County (TN) District Attorney General’s Office.

Justice Innovation Lab partners with prosecutor offices interested in using data to fix issues and bottlenecks in the criminal justice system. Often when we begin a partnership, the prosecutor's office lacks a set of "key" metrics that it tracks with the intent of taking specific action. This Q&A article explains the process of defining and using metrics, with the goal of helping offices move from producing numbers to deliberately setting actionable metrics.

Why define a set of “key” metrics in a prosecutor’s office?

Shelby County (TN) District Attorney Steve Mulroy likes to say, "You can't change what you don't measure." Implicit in this statement are two key points: (1) understand what requires changing, and (2) determine which metric(s) would indicate a change. Defining "key" metrics is not about the data available at the time of brainstorming; it is a downstream exercise that follows deciding which points in the system need to be monitored (and potentially changed).

Most offices have reports on caseloads, but rarely much more than that. Even when caseload reports are produced on a recurring basis (e.g., monthly), there are usually no objective criteria for determining when (or what) action should be taken. Caseload statistics are an excellent start for assessing workload in the office, but they are insufficient on their own. If caseloads rise or fall, how can one tell whether that reflects seasonal variation in crime, changes in underlying criminal activity, or issues with data quality or productivity?

When a new District Attorney, State's Attorney, or Solicitor takes responsibility for a jurisdiction, they are likely to make changes. Whether to personnel, policies, or relationships with external agencies, those changes will have a rationale. Adopting a "hypothesis mindset," that is, first establishing a causal prediction of how those changes will affect the system, helps define the metrics needed to show whether the changes did (or did not) have the hypothesized effect.

Why are key metrics often untracked?

The most common reason for not having a set of key metrics is that, until recently, the data was too difficult to collate. The legal system has been notoriously slow to adopt digital methods. While the HITECH Act spurred the healthcare industry into the digital age, there has been no such legislation for the practice of law. As a result, most prosecutors focus on their own work (i.e., individual cases) with little attention to collecting data for the benefit of the broader office. Consequently, office-wide data is unavailable at scale, and leadership teams have had little occasion to discuss which metrics would be most useful.

What is the correct number of metrics?

The correct number of metrics is the minimum required to assess changes of interest in the system. There are costs to both too little data and too much: too little prevents the assessment of policies and practices, while too much becomes a productivity killer. The balance is struck through thoughtful deliberation over what should be tracked versus what can be tracked.

What do you do with the metrics?

Transparency is an important aspect of metric use. Some metrics may be used for internal purposes (e.g. Are we assigning a similar number and type of cases to teams with the same responsibilities?) and others for external purposes (e.g. How many people were prosecuted last year? Were there meaningful demographic differences in prosecutions and dispositions?). The actions taken depend upon the audience.

For internal metrics, a leadership team in the office should develop a schedule for metric review and follow-up. This regularly scheduled meeting allows the office to determine whether the metrics chosen were the "correct" ones and permits an iterative process of developing metrics and determining appropriate follow-up actions.

For external metrics, specific reports and dashboards are the most effective communication methods. In a previous post, we explained how to make data actionable and suggested tailored, automated reports for specific purposes. The most ambitious action would be to make case- or charge-level data available (as opposed to aggregated reports or dashboards), but doing so carries a considerable time and labor commitment for ongoing maintenance and feedback.

What are some examples?

Metrics can be idiosyncratic to an office, but two representative examples of standard approaches are Prosecutorial Performance Indicators and JusticeCounts.

The Prosecutorial Performance Indicators project developed several indicators in three sections: Capacity & Efficiency, Community Safety & Well-Being, and Fairness & Justice. Each indicator includes an explanation of how it is measured, why it is included, and the desired change over time. For example, tracking violent crime requires two data points: the referral charges and the offense date. With that information, one can calculate how many cases had a violent felony charge in a particular month. With that metric, the office can determine whether its policies and practices are reducing the number over time and/or relative to a peer jurisdiction.
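The monthly count described above is a straightforward aggregation once the two data points exist. The following is a minimal sketch in plain Python; the record fields and sample data are invented for illustration and are not any office's actual schema.

```python
from collections import Counter
from datetime import date

# Hypothetical referral records: each row is one charge on a case,
# flagged as a violent felony or not, with the offense date.
referrals = [
    {"case_id": 1, "violent_felony": True,  "offense_date": date(2023, 1, 14)},
    {"case_id": 1, "violent_felony": True,  "offense_date": date(2023, 1, 14)},
    {"case_id": 2, "violent_felony": False, "offense_date": date(2023, 1, 20)},
    {"case_id": 3, "violent_felony": True,  "offense_date": date(2023, 2, 3)},
]

# Deduplicate so a case with multiple violent charges counts once per month.
violent_case_months = {
    (r["offense_date"].year, r["offense_date"].month, r["case_id"])
    for r in referrals
    if r["violent_felony"]
}

# Number of distinct cases with a violent felony charge, per offense month.
violent_cases_by_month = Counter((y, m) for (y, m, _) in violent_case_months)
```

Tracking this Counter month over month (or against a peer jurisdiction's equivalent figure) is the comparison the indicator calls for.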

JusticeCounts is an initiative by several organizations to produce a standardized list of metrics across actors in the criminal legal system. Metrics are split into tiers, with each tier having sets of metrics for each "sector" (e.g., "Law Enforcement", "Prosecution") and area (e.g., "Capacity & Costs", "Public Safety", "Fairness"). After reviewing these examples, an office leadership team will be able to define an initial set of metrics.

How do you update the metrics?

As mentioned in the "What do you do with the metrics?" section above, metric creation should be an iterative process based on feedback from within the office and from external partners. Beyond this principle, whenever a change is enacted (e.g., a new policy), the office should consider which metrics are necessary to assess that change's impact. The necessary data may already be collected, but a review is needed to determine when additional tracking is merited.

One illustrative example of this process is assessing line prosecutor workload. District Attorneys are often interested in ensuring that caseloads are appropriately balanced across assistant prosecutors. To assess this, most offices devise caseload metrics. The calculation is simple: the number of cases assigned to a specific prosecutor compared to the average caseload in the office (the total office caseload divided by the number of prosecutors). Almost all offices using this metric will quickly find that caseloads are highly variable across prosecutors.
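That basic calculation can be sketched in a few lines of plain Python. The assignment records and prosecutor names below are invented for illustration.

```python
from collections import Counter

# Hypothetical assignments: one row per open case.
assignments = [
    {"case_id": 101, "prosecutor": "Alvarez"},
    {"case_id": 102, "prosecutor": "Alvarez"},
    {"case_id": 103, "prosecutor": "Alvarez"},
    {"case_id": 104, "prosecutor": "Baker"},
    {"case_id": 105, "prosecutor": "Chen"},
]

# Cases per prosecutor, and the office-wide average caseload.
caseloads = Counter(r["prosecutor"] for r in assignments)
office_average = len(assignments) / len(caseloads)

# Each prosecutor's caseload as a ratio of the office average;
# values well above 1.0 flag prosecutors carrying an outsized load.
relative = {p: n / office_average for p, n in caseloads.items()}
```

Even on this toy data the variability the article describes shows up immediately: one prosecutor sits far above the average and two sit well below it.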

Offices using a basic caseload metric quickly realize that much of the variability is due to prosecutors specializing in certain case types or working with specific law enforcement agencies. Caseload metrics need to reflect the expected caseload given the role of the prosecutor. Generally, this is done by calculating caseloads on a prosecutor-team basis. After accounting for this issue, leadership can identify prosecutors with higher caseloads relative to their teammates who could benefit from transferring cases. Additionally, creating this more nuanced metric helps offices find cases that were assigned to prosecutors who left the office without transferring them, and that therefore lacked any active oversight. Careful review of the produced metrics not only accomplishes the original goal (comparing prosecutor caseloads) but also enables the office to find previously unknown issues.
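The team-based refinement, and the orphaned-case check it enables, can be sketched the same way. The team labels, roster, and data below are all invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical assignments, now with team labels; "Davis" has left the office.
assignments = [
    {"case_id": 1, "prosecutor": "Alvarez", "team": "Violent Crimes"},
    {"case_id": 2, "prosecutor": "Alvarez", "team": "Violent Crimes"},
    {"case_id": 3, "prosecutor": "Baker",   "team": "Violent Crimes"},
    {"case_id": 4, "prosecutor": "Chen",    "team": "Property"},
    {"case_id": 5, "prosecutor": "Davis",   "team": "Property"},
]
active_roster = {"Alvarez", "Baker", "Chen"}

# Caseload per prosecutor within each team.
team_totals = defaultdict(Counter)
for r in assignments:
    team_totals[r["team"]][r["prosecutor"]] += 1

# Each prosecutor's caseload as a ratio of their own team's average,
# so specialists are compared only against teammates with the same role.
team_relative = {}
for team, loads in team_totals.items():
    team_avg = sum(loads.values()) / len(loads)
    for prosecutor, n in loads.items():
        team_relative[(team, prosecutor)] = n / team_avg

# The side benefit: cases still assigned to prosecutors off the roster.
orphaned = [r["case_id"] for r in assignments if r["prosecutor"] not in active_roster]
```

The `orphaned` list is the "unknown issue" the article describes: cases with no active oversight surface as a byproduct of building the more nuanced metric.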

Conclusion

Any prosecutor office wanting to enact change requires both action and measurement. Deliberately developing and monitoring key metrics allows an office to:

  • Hypothesize about the impact of proposed changes;
  • Create systems to automatically (i.e. without a human) track progress; and
  • Identify issues in current office practice that may be hindering data collection or even the functioning of the office.

Prosecutor offices are often understaffed for data development. Justice Innovation Lab and our partners work through the challenges of metric development together: determining how to interpret data, identifying what data is missing, and thinking critically about expected impacts. Proving that policies work is rewarding, and our experiences directly informed this collection of suggestions, which we hope serves as the beginning (or continuation) of a data-informed journey.

By: Kevin Himberger, Justice Innovation Lab Data and Research Scientist

For more information about Justice Innovation Lab, visit www.JusticeInnovationLab.org.

