AI Legislation Is Criminal Justice Legislation

Justice Innovation Lab
Feb 22, 2024


Credit: mattjeacock

Despite having a law degree, I, like most lawyers, understand only a narrow part of the law. The part I know best is not the federal lawmaking process, including the creation of the regulations that are at the heart of how law is implemented. Rather, I have spent my time working with data in the criminal justice system, helping with investigations and observing how technology is used across the system. I offer that preface because my thoughts below, spurred by recently proposed legislation and a judiciary committee hearing on AI in criminal law, may well be addressed through that regulatory, rulemaking process and through additional legislation. There are also other efforts at AI regulation, through bills focused on civil rights reporting and from the executive branch (the White House executive order directs the U.S. Department of Justice to review the use of AI in criminal law, which hopefully will address many of these issues), that might have knock-on effects I do not discuss below. I am only offering some of my thoughts on the larger legislation I have read.

Software and Algorithm Transparency

Both proposed bills, linked above, seem to prioritize protecting proprietary algorithms and the underlying training data. In the criminal law context, weighing companies' intellectual property interests against disclosure of important information to other actors in the criminal justice system is a balancing act. But as Dr. Karen Howard and Professor Rebecca Wexler pointed out in the recent committee hearing, also linked above, disclosure and greater transparency are highly desirable and can likely be accomplished without significant harm to property rights.

A tangential but important point that legislation and rulemaking should consider is national, public, independent testing data alongside public testing methodologies. With regard to public testing methodologies, the National Institute of Standards and Technology (NIST) and other federal testing bodies do develop these and make them public. But less attention has been given to universal testing data that federal regulators and other independent researchers can use to test tools. Such datasets matter because they allow proper comparison of similar software and algorithms, provide baseline comparisons, and can themselves be validated as representative.

These datasets would need to be continually updated so that developers do not simply train to pass the tests without critically examining whether bias might arise in other forms. The Census Bureau already creates large synthetic datasets, and such datasets would be analogous to the testing datasets already used by websites like Kaggle. If made publicly available, with researchers able to suggest changes, then all interested parties (regulators, researchers, and developers) could contribute and measure how those changes affected individual algorithm output. Creating these datasets will be difficult, especially as developers may argue that a dataset is not applicable because it lacks some key feature of a given model. To deal with this, NIST might create standards for developers to add features to the synthetic datasets for testing purposes, with those additions kept confidential from disclosure.
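
To make this concrete, here is a minimal sketch, in Python, of how a shared synthetic test set might be used to score a candidate tool against a common baseline, including error rates broken out by race. The CSV file, column names, and the candidate predict function are hypothetical placeholders of my own, not anything specified in the bills or by NIST.

```python
# Minimal sketch: score a candidate tool against a shared, versioned synthetic
# test set. File name, column names, and `predict` are hypothetical placeholders.
import pandas as pd

def evaluate(predict, test_csv="synthetic_test_set.csv"):
    """Score a candidate tool against the public synthetic dataset."""
    data = pd.read_csv(test_csv)                      # shared, versioned test data
    preds = [predict(row) for _, row in data.iterrows()]
    data = data.assign(pred=preds)

    # Overall accuracy against the labeled outcome in the synthetic data.
    overall = (data["pred"] == data["label"]).mean()

    # Error rates broken out by a protected attribute, so regulators and
    # researchers can compare tools on the same baseline.
    by_group = (
        data.assign(error=data["pred"] != data["label"])
            .groupby("race")["error"]
            .mean()
    )
    return overall, by_group
```

Because every tool would be scored against the same versioned dataset, regulators and independent researchers could compare reported numbers directly rather than relying on each vendor's in-house evaluation.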

In the case of generative AI, such datasets might be a sample of representative prompts with an accompanying method for evaluating responses for bias. Creating comprehensive evaluative methods that can catch issues like illustrating racially diverse Nazis will be very difficult, but it seems plausible that if companies can develop generative AI, then with similar levels of investment they can develop evaluative safeguards.
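
As a rough illustration of what such a prompt-based audit could look like, the sketch below runs a generative model over prompt variants that differ only in the description of the person involved and hands each response to a scoring method. The prompt templates and the flag_bias scorer are invented placeholders; building a credible scorer is exactly the hard part described above.

```python
# Rough sketch of a prompt-based bias audit for a generative model.
# Templates and the scoring method are placeholders, not a real standard.
PROMPT_TEMPLATES = [
    "Write a police report summary about a suspect described as {descriptor}.",
    "Draft a bail recommendation for a defendant who is {descriptor}.",
]
DESCRIPTORS = ["a young Black man", "a young white man", "a middle-aged woman"]

def audit(generate, flag_bias):
    """Run each prompt variant through the model and score the responses.

    `generate` is the model under test (prompt -> text); `flag_bias` is an
    evaluation method (text -> score), e.g. a reviewer rubric or a separately
    validated classifier.
    """
    results = []
    for template in PROMPT_TEMPLATES:
        for descriptor in DESCRIPTORS:
            response = generate(template.format(descriptor=descriptor))
            results.append((template, descriptor, flag_bias(response)))
    return results
```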

Setting Standards and Measuring Bias

I don’t know anyone working at NIST, but my general belief is that they are really smart. The most recent AI bill pushes a lot of regulation to NIST, which, given their expertise, seems like a good idea because, as Dr. Karen Howard pointed out, NIST includes experts in measurement. But there are a few challenges to simply turning the keys over to NIST that should be considered.

  • Does NIST have the capacity to do the necessary measurement? This depends partly on the scope of what they would be tasked to measure: is it the use of all algorithms, including those that keep humans as decision makers, or just the narrower scope of something like generative AI? It is likely that NIST will need additional resources, but how much is not clear.
  • Who should set the standards being measured, and what should they be? The proposed legislation barely touches on bias, but in the criminal justice space racial bias is one of the most important considerations for these tools. Regardless of whether one believes the current system is biased, it seems prudent to ensure that modifications to the system using opaque technologies do not add to or create bias. These issues raise questions such as: if facial recognition software has differential error rates by race, what is an acceptable differential (see the sketch after this list)? Who gets to determine an acceptable differential: NIST, Congress, or judges through lawsuits?
    In addition to concerns over racially biased results, there are threshold questions for any algorithm as to whether it provides sufficiently accurate results and whether it is applied in a disparate manner. For instance, where AI is used to review incoming cases and identify those without legal sufficiency: how good does the algorithm need to be, and should the system identify cases that are too weak or cases with enough strength? Does the choice of what to measure have implications for how the tool is applied, such that there might be racially disparate outcomes from its use because of racially disparate inputs (e.g., different arrest rates for certain crimes)?
  • Does NIST have the expertise to do social science measurement? NIST specializes in measurement and technical standards, including for information technology, which does involve setting clear benchmarks and deep statistical expertise, but that does not necessarily mean they have the expertise to measure issues such as racial bias.
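
The sketch below is a toy illustration of the “acceptable differential” question from the second bullet: it computes false-match rates by race for a facial recognition tool and checks the gap against a tolerance. The data layout and the tolerance value are placeholders of my own; choosing the real number is precisely the policy question of who sets the standard.

```python
# Toy illustration (mine, not from any bill) of checking a racial gap in
# false-match rates against a hypothetical tolerance.
import pandas as pd

MAX_ALLOWED_GAP = 0.01  # placeholder policy choice, not a real standard

def false_match_gap(results: pd.DataFrame) -> float:
    """`results` needs columns: race, matched (bool), is_true_match (bool)."""
    false_match_rate = (
        results[~results["is_true_match"]]   # people who are not true matches
            .groupby("race")["matched"]      # how often the tool matched them anyway
            .mean()
    )
    return false_match_rate.max() - false_match_rate.min()

def within_tolerance(results: pd.DataFrame) -> bool:
    return false_match_gap(results) <= MAX_ALLOWED_GAP
```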

In addition to challenges in setting standards, legislation will need to consider whether defendants can use statistical evidence of bias to challenge algorithm-based decisions or inputs (motions, briefs, evidence) in their cases. For instance, legislation could provide a statutory basis for a defendant to challenge a pre-trial custody decision made by an algorithm using statistical evidence of racially dependent error rates. This is a different issue from defendants challenging the validity of particular tools like ShotSpotter, because it involves a discriminatory-intent element to the claim. While statistical evidence of discrimination is often used in civil cases to demonstrate a pattern or practice, McCleskey (podcast: 5–4 discussion of the case) made such claims in criminal cases functionally insufficient to demonstrate discriminatory purpose.

Proposed legislation will also need to consider what can be measured and challenged. All of the legislation will likely allow challenges, in some form, to “consequential decisions” made by algorithmic tools. Generally, consequential decisions appear to be binary or categorical decisions that affect an individual’s rights. So a decision by an algorithm that recommends detaining an arrestee or not would almost certainly be subject to review. But what about tools like generative AI writing assistants that may exhibit bias, or facial recognition software that pulls out a list of possible suspects for police and prosecutors to pursue? Such outputs do not directly impact an individual’s rights the way a categorical decision does, but they may affect downstream decisions in a case.

Data and Evidence Provenance

Having read the most recent proposed AI bill, which touches on data provenance but understandably does not specifically cover ownership and custodianship from an evidence perspective, I thought I had a great insight. Then, at the recent hearing, this exact issue was raised by Senator Jon Ossoff with Professor Wexler. While they did not discuss it using a watermarking or data provenance lexicon, laws that address data provenance will likely cover questions of evidence and custodianship in criminal law.

A lot has been made of technologies that can automatically review evidence. These tools can summarize long documents, redact information, extract specific clips from long videos, and enhance or redact videos or photos. The rules around marking materials as produced or altered by AI will be pivotal in the production and verification of criminal evidence. Senator Alex Padilla, during the hearing, provided a wonderful example of the complexity of this issue, in which an AI-generated model was then used by a subsequent AI system to identify a suspect. What are the evidentiary rules around disclosure of this chain of AI-generated materials to the court and the defendant? Ensuring they are followed likely depends on the laws regarding data provenance.
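
As a simple illustration of what evidence-level provenance could record, the sketch below keeps a running log of each AI processing step applied to an evidence item, re-hashing the output at every step so later alterations are detectable. The schema is invented for this example; real watermarking or provenance standards would define their own fields.

```python
# Illustrative, invented schema for logging the chain of AI processing applied
# to a piece of evidence. Not a real evidentiary or provenance standard.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    evidence_id: str
    sha256: str                        # hash of the current version of the file
    steps: list = field(default_factory=list)

    def add_step(self, tool: str, action: str, output_bytes: bytes):
        """Append one AI processing step and re-hash the resulting output."""
        self.sha256 = hashlib.sha256(output_bytes).hexdigest()
        self.steps.append({
            "tool": tool,              # e.g. a redaction or enhancement model
            "action": action,          # what the tool did to the evidence
            "output_sha256": self.sha256,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def to_json(self) -> str:
        """Serialize the full chain for disclosure to the court and defense."""
        return json.dumps(asdict(self), indent=2)
```

A record like this, produced automatically by each tool, is the kind of artifact that disclosure rules could require alongside the evidence itself.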

Legislative Definitions — Developers, Deployers, and the Government

Legislative definitions are very important; I know that from law school. With respect to algorithm and AI legislation, three of the most critical definitions to settle (though different terms may be used) are who counts as a developer, who counts as a deployer, and whether the government is excluded from the legislation in various ways.

From reading the bills, I believe private companies will often be considered both developers and deployers. One challenge with these definitions, though, is that they often limit whether regulations apply to a developer or deployer based on its size; the number of employees or the amount of revenue often dictates this. Technology and AI algorithms depend upon a robust, open, and collaborative environment. Given this, it is possible for me as an individual to use models from ChatGPT, a product of OpenAI, a massive company, to create specific tools for the criminal law context. Given my size, I will likely not be subject to regulation, but is OpenAI subject to it for my use of their model? If the regulatory framework requires independent testing and verification by a body like NIST, does that mean OpenAI needs to submit all the third-party tools marketed on its platform to the regulator for testing?

This is especially tricky as local governments increase their technology capacity. Police departments and prosecutor offices are adding data scientists to analyze local data and help with crime prevention and prosecution. In some cases, these individuals have built or will build tools using publicly available algorithms that do things like automatically redact police reports or identify possible suspects in crimes. Will these be excluded from regulation, and does it matter if they rely on other resources like ChatGPT?

Missed Opportunity

The recent hearing on AI in criminal investigations started with opening statements from Senator Cory Booker and Senator Tom Cotton lauding the potential of AI to help law enforcement. Both described the potential for AI to help reduce serious, violent crime. Two of the witnesses, Dr. Howard and Professor Wexler, as well as the Senators during questioning, raised serious concerns about the use of AI in law enforcement. The initial optimistic statements and the later concerns all seemed to center on what most people probably think of as law enforcement. Yet this, I think, misses the greatest opportunity for AI to aid law enforcement: addressing large-scale financial crimes.

The U.S. Department of Justice, the U.S. Securities and Exchange Commission, and the Internal Revenue Service are all drowning in the paperwork required to identify and prove financial crimes. Insurance fraud that ultimately harms thousands of people, tax evasion, insider trading, wage theft that can indenture employees, and even human trafficking all produce a large amount of data to sift through. It is hard to identify the criminal acts and those behind them, and doing so requires highly trained forensic accountants and investigators. AI can help with this. Perhaps, with proper incentives, companies would develop technologies so that we did not have to rely upon the federal government to enforce all of these crimes; some of that work could be passed to local agencies. My hope is that legislation might encourage such use, but at the very least it should put in safeguards to ensure that AI can be used in this way and avoid the emergence of laws like those that hinder investigations into gun trafficking.

Conclusion

Congress has a hard but necessary task in front of it: regulating the use of algorithms and AI. Most of that work will focus on consumer protection and industry regulation, but the criminal legal system is already using these tools and has a complex set of “consumers” to consider. The uniqueness of the system, especially given that it affects the fundamental right to liberty, should be a consideration in all of this work.

By: Rory Pulvino, Justice Innovation Lab Director of Analytics. Admin for a Prosecutor Analytics discussion group.

For more information about Justice Innovation Lab, visit www.JusticeInnovationLab.org.


Justice Innovation Lab builds data-informed, community-rooted solutions for a more equitable, effective, and fair justice system.