How To Create An AI Office Policy

Justice Innovation Lab
9 min read · Aug 26, 2024


Credit: NicoElNino

Criminal justice agencies and local governments are adopting machine learning and AI tools at a rapid pace. In many instances, these tools are deployed on an ad hoc basis, without formal policies for reviewing tools, implementation, and safe or prohibited uses. Agencies need policies to guide IT departments and other staff as more and more technology begins incorporating AI. For instance, should law enforcement agencies block the use of text recommendation features in Microsoft Word if allowing the tool permits Microsoft to inadvertently collect sensitive defendant data? Or should agencies require that a software company perform specific racial bias impact assessments before the agency purchases a license? To answer these questions, agencies need to create and implement AI policies.

Working at a small non-profit that partners with local criminal justice agencies, I understand that creating any comprehensive policy in a resource- and time-constrained environment is easier said than done. Thankfully, there are resources available if you know where to look. Below is a list of resources I reviewed, with my thoughts on how useful each is to an office devising an AI policy. Inclusion in this post does not mean I endorse the material! I am not an expert in drafting government office policy. This list is intended only as a resource for those working on AI office policies in government.

1. Understanding AI in your office. AI, machine learning, algorithms, and similar techniques are popping up in many different technologies, including many an office may not recognize as AI-driven. Many public resources, including those from places like the National Institute of Justice (NIJ), focus on AI in the policing technology context.1 This is helpful, but offices need to be aware of other software they might be using that incorporates AI and is not clearly criminal justice related, e.g., legal research and writing tools. For a compilation of research on AI uses in the legal field, I found the Stanford Legal Design Lab and its bibliography of AI research helpful. To recognize these technologies and understand how outside advocates are thinking about AI in government, I suggest reviewing this toolkit from the group AINow. It's a broad document that will hopefully give you the background needed to recognize software using AI and the language to communicate clearly between technologists and lawyers. This Stanford University group also tracks the current state of AI uses and provides some resources, though with more focus on AI in health.

2. Specific guidance for lawyers. The American Bar Association (ABA) has issued Formal Opinion 512, in which the organization's ethics committee takes on the proper evaluation and use of generative AI tools. Some individual prosecuting attorneys are also thinking through how AI should be used in prosecutorial practice. Finally, the National Association of Criminal Defense Lawyers (NACDL) has a robust AI research wing (certainly ahead of the prosecutors' professional associations) that offers resources and thoughts on AI uses in the criminal legal context.

The Department of Justice seems to be working on its own internal use of AI, which will likely produce a DOJ policy, but I did not find any such policy online.2 The White House Blueprint for an AI Bill of Rights is a fairly comprehensive guidance document with proposed "rights" and sections for technical guidance and implementation. It places significant emphasis on ongoing assessments to ensure that AI tools do not have discriminatory impacts. That requirement will likely face resistance, but if it is included in government policies on technology acquisition, industry may be pushed into self-monitoring. The blueprint is also helpful for covering related topics, such as data privacy, that need to be considered in creating a policy.

*A special note on definitions and language: as more and more resources are published, new, nuanced definitions are coming into use across government entities. While I don't recommend any given language framework, I did find this article by the architects of the EU AI Act helpful in understanding how that body developed its various definitions.

3. Learn more with NIST resources. National Institute of Standards and Technology (NIST) resources are generally dense, but they are great for laying out industry standards and identifying risk. NIST released an initial AI playbook in 2023 with a number of helpful resources that explain AI and how to perform policy gap analyses. In particular, there is a Risk Management Framework (RMF) that provides helpful definitions of various risks in using AI and has a section, "Suggested Actions to Manage GAI [generative artificial intelligence] Risks." That section is a good step-by-step guide to the policies and procedures an office should create, though it does not provide tidy policy templates.

Were an office to work through the RMF document, a natural output would be an internal report on the greatest risks that AI tools present to the office, from which the office should be able to extrapolate the policy safeguards it needs. In particular, offices should consider what data they will need to collect on the use of AI tools in order to comply with regulations concerning impact assessments. NIST also provides an example from San Jose, CA, of a jurisdiction using the playbook, which is helpful for seeing what the output of that work might look like.
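To make the data collection point concrete, here is a minimal sketch, in Python, of what routine usage logging could look like. The field names, categories, and file path are my own illustrative assumptions rather than requirements from NIST or any regulation; an office would want to tailor the fields to whatever its impact assessments actually measure.

```python
# Hypothetical sketch: log each use of an AI tool so the office has the raw
# material for later impact assessments. Field names, categories, and the
# file path are illustrative assumptions, not requirements from NIST or any
# regulation.
import csv
from dataclasses import asdict, dataclass, fields
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class AIUsageRecord:
    timestamp: str        # when the tool was used (ISO 8601, UTC)
    tool_name: str        # e.g., a legal drafting assistant
    user_role: str        # prosecutor, paralegal, analyst, ...
    case_type: str        # offense category, if applicable
    purpose: str          # drafting, research, transcription, ...
    human_reviewed: bool  # was the output reviewed before use?
    output_adopted: bool  # was the output used in a filing or decision?


LOG_PATH = Path("ai_usage_log.csv")


def log_usage(record: AIUsageRecord) -> None:
    """Append one usage record to a shared CSV log, writing a header if new."""
    is_new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(AIUsageRecord)]
        )
        if is_new_file:
            writer.writeheader()
        writer.writerow(asdict(record))


if __name__ == "__main__":
    log_usage(AIUsageRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        tool_name="legal-drafting-assistant",
        user_role="prosecutor",
        case_type="misdemeanor",
        purpose="drafting",
        human_reviewed=True,
        output_adopted=True,
    ))
```

Even a simple log like this gives an office the raw material for later audits: who is using which tool, for what purpose, and whether a human reviewed the output before it was adopted.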

4. Check out how other local governments are approaching the issue. This is rather obvious advice, but copying others is generally a good path forward. I did not find any such policies specifically for criminal justice-related agencies (police, prosecutors, or courts), but if you know of any, please reach out, and I will edit this post accordingly. Jesse Rothman helpfully pointed me to this local government AI policy tracker. I also found this aggregator resource that pulls together updates and policies at the state level; a better programmer than I am should probably scrape the internet to pull together all current local policy examples.

  • Ben Packer sent me California's recent GenAI procurement guidelines, which I like as a framework for those responsible for acquiring technology: it covers both the training required to understand AI and risk-based considerations for acquiring software. Ben also pointed me to healthcare resources that lay out principles for AI use in a broadly similar structure, where the AI tool is used by a service deliverer.
  • Washington, DC, is taking AI in city government pretty seriously by forming an Advisory Group that will create a strategic plan. While individual district attorney offices may not be able to muster such resources, offices should ensure that they are a part of any such local efforts. DC has put forth an initial AI/ML policy, though actual implementation of control measures for items like "implement robust evaluation methods to assess performance, fairness, and potential risks associated with the models" is likely where offices will struggle the most: what is a robust evaluation method, and how do you implement it?3 (I sketch one narrow piece of such an evaluation after this list.)
  • Seattle, WA, created a Responsible AI Program and promulgated a policy in 2023. I like this policy because it states the values that guide AI adoption and lays out a clear process for evaluating tools. I am not sure to what extent the mechanisms referenced in section 4 actually mitigate bias, but the policy treats bias as a clear priority and shows an understanding of AI's risks.
  • This example from St. Louis, MO, shows how easily a jurisdiction can start with a basic policy and plan to build on it.
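On the DC question above of what a "robust evaluation method" might look like in practice, here is a minimal sketch, in Python, of one narrow piece of such an evaluation: comparing a tool's error rates across demographic groups and flagging large gaps. The data fields, group labels, and disparity threshold are assumptions for illustration only; a real audit would also need a defensible sampling plan, agreed outcome definitions, and outside review (see footnote 3).

```python
# Hypothetical sketch of one narrow piece of a "robust evaluation method":
# comparing an AI tool's error rates across demographic groups and flagging
# large gaps. The fields, group labels, and threshold are illustrative
# assumptions, not drawn from DC's policy or any specific tool.
from collections import defaultdict


def error_rates_by_group(records: list[dict]) -> dict[str, float]:
    """Each record needs 'group', 'tool_output', and 'ground_truth' keys."""
    errors: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["tool_output"] != r["ground_truth"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}


def flag_disparities(rates: dict[str, float], max_gap: float = 0.05) -> list[str]:
    """Return group pairs whose error rates differ by more than max_gap."""
    flags = []
    groups = sorted(rates)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            gap = abs(rates[a] - rates[b])
            if gap > max_gap:
                flags.append(f"{a} vs {b}: {gap:.1%} gap in error rate")
    return flags


if __name__ == "__main__":
    # Toy audit sample: did the tool's suggestion match the reviewing
    # attorney's final decision?
    sample = [
        {"group": "group_a", "tool_output": "charge", "ground_truth": "charge"},
        {"group": "group_a", "tool_output": "charge", "ground_truth": "decline"},
        {"group": "group_b", "tool_output": "charge", "ground_truth": "charge"},
        {"group": "group_b", "tool_output": "decline", "ground_truth": "decline"},
    ]
    rates = error_rates_by_group(sample)
    print(rates)                    # per-group error rates
    print(flag_disparities(rates))  # pairs whose gap exceeds the threshold
```

The point is not this particular metric; it's that a "robust evaluation method" has to bottom out in something an office can actually compute, report, and have reviewed.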

5. Employ outside counsel and experts. This isn't a resource, but I would suggest that anyone working on an AI policy hire an outside law firm to at least review the policy and provide guidance.4 Unless you are a lawyer with a lot of experience in data privacy, a policy can have gaps you may not be aware of. While law firms are good for reviewing policy, you may also want to hire an outside audit firm to help design the actual controls that any policy calls for. There is generally a disconnect between policy-makers and practitioners; audit firms can help bridge that gap by instituting the specific control measures called for by the policy. There are also specialized organizations like this one popping up to offer support in policy creation. This is not an endorsement of Responsible AI, as I have never spoken with anyone at the organization and am not a member.

If you are creating an office AI policy and spend some time reviewing the resources provided above, a few things will become clear pretty quickly:

  1. Good policies and procedures set up a framework to make decisions and evaluate the risks associated with adopting an AI tool — especially risks to privacy given the sensitive nature of government data.
  2. Policies for public agencies should start from a values-based perspective of how to serve the public better and not from an efficiency perspective that passes over individual rights.
  3. Impact assessments are a fundamental component of good AI governance, even if we are not quite sure yet what impact assessments should look like. There is plenty to be concerned about with regard to AI and bias, and we can only protect ourselves from such outcomes if we commit to measurement and transparency. A really interesting take on this is Axon's public research on their DraftOne technology. While I don't think this evaluation goes as far as I would like from a public policy perspective, I applaud the transparency and effort.
  4. We need federal regulation, but while we wait, offices should begin creating policies because individuals in those offices are already using AI technologies.
  5. It's easy to ban AI from making a final decision; it's harder to draw the line for where AI can be incorporated into a workflow without overly influencing the human decision maker. For instance, if an agency uses an AI-assisted research tool, the ultimate decision may be left to a human, but if the tool returns only biased results, those results will surely dictate the decision. Good policies will account for this through vetting and required monitoring and testing.
  6. Finally, a last thought based on what I know best: district attorney offices. Independent, smaller government agencies like district attorney offices and local courts are going to struggle to set good AI policy because they often lack the technological resources to carry out best practices. Without the support of larger IT departments and subject matter experts, these agencies will have to set and implement AI policies without in-house expertise, which will lead to ineffective implementation. For instance, how will a district attorney office carry out a racial bias audit of a legal drafting technology if its policy requires one? The office simply won't have the expertise to do so, and thus the policy will either be ignored or be meaningless. This is a good reason why national legislation is needed to create a standards and vetting framework.

If you find these resources useful or if there are other resources available that I’ve missed, please let me know. Thank you again to Jesse Rothman and Ben Packer for reviewing and giving thoughts and resources for this post.

Footnotes:

  1. Here’s an interesting report from NIJ that lays out AI uses but focuses on uses in policing more than prosecution, courts, or probation/parole.
  2. The agency is clearly concerned with the use of AI by government and industry to commit violations of existing laws — including discrimination violations — but does not seem to be focused as much on guiding criminal justice agencies in the proper use and adoption of technologies.
  3. In my limited experience drafting and then implementing policy, it's best to draft and then immediately craft examples and documentation. For instance, in dictating that there be "robust evaluation methods to assess performance, fairness, and potential risks associated with the models," I would lay out a template form for agencies to fill out as a pre-analysis plan for the model's impact and require an outside committee to review the plan before the model is deployed. Such a process gives an agency clear guidance on how to satisfy the "robust evaluation methods" requirement by creating the plan, while the outside committee's review would assess whether "performance, fairness, and potential risks" were sufficiently considered. Finally, the agency would report on the results in a standardized format to the governing body for final assessment.
  4. We hired an outside firm to review our data security policy, and I have never been so grateful for a lawyer’s edits. Please contact me if you would like more information.

By: Rory Pulvino, Justice Innovation Lab Director of Analytics. Admin for a Prosecutor Analytics discussion group.

For more information about Justice Innovation Lab, visit www.JusticeInnovationLab.org.
