AI Case Study: MoveOn’s AI Approval Process

Kate Gage
Cooperative Impact Lab
8 min read · Mar 21, 2024


By Han Wang, Kate Gage, and Oluwakemi Oso, Cooperative Impact Lab with Ilona Brand, MoveOn

In the fall and winter of 2023/2024, Cooperative Impact Lab worked with a cohort of 13 organizations to support experimentation with AI tools ahead of and after the 2023 election. This post is part of a series of AI Case Studies documenting that work, highlighting lessons, best practices, and recommendations for organizations — especially those that organize and campaign — as they consider incorporating AI into their work.

Read our first AI blog post for more learnings: Unleashing AI’s Potential in Campaigns and Organizing: Lessons from the Front Lines

Background

MoveOn is a leader in the use of AI in the progressive political space, utilizing it across multiple workstreams and dedicating time to experiment with new ideas. Because of the organization’s large size and broad scope of activities, it proactively created an approval process for staff who want to use an AI-enabled tool or chatbot. Cooperative Impact Lab interviewed MoveOn’s technical director, Ilona Brand, to better understand how MoveOn is thinking about AI, its current approval process, and opportunities for improvement.

Current use cases for AI at MoveOn fall into three workstreams:

  1. Data Analytics
  2. Generating/editing text
  3. Writing code (early exploratory phase)

There are many existing and potential use cases in each of those workstreams. For example, one initiative used AI to help sort an archive of emails, producing better-curated datasets that are useful to MoveOn as it works to develop a machine learning pipeline for email targeting. Campaigners are using AI to vet public comment campaigns, and people across departments are using AI to help edit their internal documentation and grant writing.

Suggested Principles for Thinking about AI Use in Organizations

CIL worked with MoveOn and observed their implementation of AI across their organization from August 2023 through February 2024. Based on those observations, as well as our work with 12 other organizations over that time, the following are best practices CIL recommends organizations consider as they work to integrate AI:

1. Balance curiosity to explore with assessing value and appropriateness

Generative AI is a rapidly evolving technology, with new capabilities and potential hazards constantly emerging. To keep up with changes, employees should stay informed about new tools & processes as they are developed. An organizational culture where folks are encouraged to be curious about new use cases is beneficial, as it allows people to explore new methods of doing work.

“I feel like it’s so valuable to encourage curiosity and exploration: Oh, I didn’t think to try it here. Maybe I should try it here. Inject bits of AI into your workflows and let’s test value together.”

“There are some anecdotes floating around MoveOn where people are having pleasant experiences using these tools and it’s making their work life better. And it’s not really harming anyone else or impacting their team much. It’s really just like, I hate writing paragraphs like this, and now I don’t have to.”

At the same time, it is equally important to assess when it makes sense to use AI and when it doesn’t.

2. Just because you can doesn’t mean you should

It’s critical to ask not only “what can we use AI for?” but also “what should we use AI for?” Maintaining a healthy skepticism to avoid the temptation to plug AI in wherever it might fit is also part of the process. As individuals and teams begin to integrate this technology into their tech stacks, it is crucial to think through not only the use cases but also the ramifications (short, medium, and long-term) of adoption.

When determining which tools & capabilities to begin experimenting with, target workflows that are hardest for staff — especially ones that are time-intensive or repetitive. They can serve as an excellent test to determine if AI fits into the picture:

“Listen carefully to what is happening for staff. For example, you don’t necessarily want to use AI to automate writing emails if your campaigners like to write emails. Like why, right? But you might want AI to write emails if your campaigners love doing a ton of research and having conversations with partners but hate having to write those round-up emails. Yes, Claude can write this email. Does that mean Claude should write this email? I don’t know. Only the staff members know the answer to that.”

3. Assess the value of use cases

When evaluating the value of work, the default often focuses on measurements of time saved or efficiencies created. However, as this new sector develops, there are alternative ways to assess the value of using AI in workflows — most importantly, how it changes the nature of the work.

A new use case or workflow could provide an organization with capabilities that were impossible before. It could also affect an existing workflow, so it’s crucial to assess the impact on staff or other stakeholders. Sometimes, incorporating AI doesn’t increase efficiency at all — in many cases, the work needed to develop the workflow or to get the result you want may take more time or expertise.

A more holistic view of effectiveness also includes metrics beyond productivity or capacity. Increasing happiness or decreasing stress as a result of AI assistance is equally or even more impactful:

“It’s not like you’re going to measure 20% efficiency on writing grants. Well, maybe you could, but [what might be] more relevant [is if a staff member] can just decide, ‘I don’t want to spend hours of my life learning how to write grants and instead can get ChatGPT [to do that] stuff instead.’ That’s very satisfying for me.”

MoveOn’s Use Case Approval Process

Although it’s still a work in progress, the current state of MoveOn’s use case approval process may be a valuable guide for other organizations. MoveOn’s process allows folks to experiment with new things while building guardrails to protect against risk. It is also a way to help determine how AI-forward you want your organization to be.

While MoveOn does have a very advanced technology team and large staff, this process can be adapted to smaller teams.

1. Post an idea to Slack

A staff member posts a proposal for using AI in their work in MoveOn’s AI Slack channel.

2. Notify members of the group responsible for approving AI use cases

MoveOn has an “@ai-approvers” group, which consists of a group of subject matter experts from across the organization. This group is somewhat ad-hoc, but often it includes staff from:

  • Technical team, for security review
  • Analytics team, to advise on strategies to validate outputs
  • Equity advising team, for equity review
  • Legal team, for potential concerns around terms of service or other issues
  • Department heads, who occasionally offer their opinions, especially if they think it’s going to impact how work happens within their team

The technical director of MoveOn tags the @ai-approvers group in the Slack thread so they can weigh in on the proposed use case.

3. The AI approvers group discusses publicly in Slack

The @ai-approvers group discusses potential pros and cons for the use case in the Slack thread. There is no defined structure for the conversation, but the informal guidance is that people should ground any concerns in the guidelines stated in the AI policy document created by the org. (You can find MoveOn’s AI policy here as a reference!)

Personalities and feelings about AI can play a significant role in discussions because people have reservations about AI for different reasons. For example, a department head can shut down the conversation due to specific concerns about their department.

“I think feelings enter the room a lot with this sort of thing. You know, it has to do with everyone’s jobs.”

4. AI approvers come to consensus, or at least try to

The discussion concludes when the group reaches a consensus to approve the use case or if there is sufficient dissent to deny approval.

Sometimes, the result of the process isn’t so clear cut because use cases can get approved but with conditions:

“The [MoveOnAI policy doc] says you can use GitHub Copilot to write automated unit tests for our code, but the terms and conditions are listed as sub-bullets. As long as you’ve checked this box and this box and this box inside the settings panel of GitHub — those kinds of messy little details — need to be documented, too.”

Hypothetical example

  • A staff member posts to the AI Slack channel: “I want to use ChatGPT to write subject lines for email. Can I do that?”
  • The security team comes online and asks: “Is there any member data in this? Can you do it without member data? Here are some suggestions on how to do it without member data.”
  • A department head might voice concerns related to their team’s work.
  • Several parties discuss this extensively in the Slack thread, but what ends up in the appendix of the Google doc as an approved use case is simply something like “Using ChatGPT or Claude to write subject lines for emails.”

5. The approved use case is added to the policy document

The time needed for public discussion of a new AI use case varies. If stakeholders disagree, it can take weeks or be revisited over months; simpler proposals can be approved in 2–3 days.

If the use case is approved, the staff member who proposed the idea can proceed. The use case is added to a slowly growing list of ideas in the appendix of the MoveOn AI policy Google Doc, which makes it a living document. This creates a paper trail that records the approval date and any related caveats and establishes a reference for people to consult in the future. Ideally, this reduces the number of approval interactions because use cases similar to those already in the Google Doc can proceed without going through the full approval process.

MoveOn Approval Process Diagram. Hi-res process diagram PDF here.

Pain Points & Possible Solutions

MoveOn identified the following issues and opportunities for process improvement:

  • Creating a more robust structure for discussion: Discussion can get derailed if one person with positional power voices specific concerns about the impact on their department. MoveOn still needs to identify a solution to this problem.
  • Forming a more holistic view of AI at MoveOn: To evolve from the current case-by-case process, MoveOn would like to stand up an AI working group to evaluate its usage of AI across the board and conduct testing and measurement to see how use cases are performing.
  • Naming a point person to manage the approval process: The unofficial responsibility for initiating the group discussions currently falls on the shoulders of a few individuals. Naming a person to manage this process or creating a rotation schedule would mitigate this imbalance.

Recommendations for Setting Up an Organizational Approval Process

Based on MoveOn’s experience, here are some things to consider when setting up an approval process for your organization.

  • Create a policy document that records high-level guidelines for using AI in the organization.
  • Keep a record of approved use cases as a reference for staff. Use cases similar to approved ones can move forward without requiring the entire approval process.
  • Balance curiosity to explore use cases with careful consideration of when it is or isn’t valuable to use AI tools.
  • Maintain process transparency: conduct discussion in public when possible.
  • Be inclusive: different people have different views on AI. Make sure voices from different parts of your org are heard.

Resources

Please contact CIL at ai@cooperativeimpactlab.org or at cooperativeimpactlab.org with questions or to reach a team member.

Thank you to our partners at MoveOn and Trestle Collaborative for their work on this project.
