Do you need an AI Mandate to kickstart ChatGPT adoption in your nonprofit?

AI Hesitation could be stalling innovation at this important early stage of AI evolution

George Irish - Fundraising with AI
5 min read · Sep 20, 2023


ChatGPT might be the most ground-breaking new technology to appear since the rise of the Internet in the 90s. But nearly a year into the AI revolution, the anticipated benefits of AI are still more promise than reality for most of the nonprofit sector.

It’s not at all surprising — the nonprofit sector is typically slower to adopt new technologies than the commercial sector. However, the public discourse highlighting the risks and ethical concerns about ChatGPT & AI has added another hurdle for nonprofit professionals eager to explore the potential of the new technology.

When it comes to nonprofit staff and ChatGPT, two common scenarios seem to be playing out:

  • using it unofficially because of concerns about risk, bias, and lack of internal endorsement. (i.e. “I’ll use it on the side, quietly”)
  • not using it at all because of concerns about risk, bias, and lack of internal endorsement. (i.e. “I don’t think we’re allowed to use it.”)

Neither of these is really a good way forward, and ‘AI Hesitation’ could result in missed opportunities.

It’s also concerning if you consider the role that the nonprofit sector should be playing in helping to guide the development of AI at this early stage. Are we going to leave it up to the Googles, Microsofts, and Facebooks to decide what kind of AI we have in the future?

Is my organization ready to start using ChatGPT & AI?

AI isn’t a must-have technology yet, and it’s likely best for most organizations to avoid making big investments or commitments for now. But this could be an opportune time to begin exploring the AI landscape on a smaller scale. How can an organization strike a balance between risk and opportunity?

One place to start is to get some clarity about where your organization sits on the innovation vs. risk curve.

Here are a few questions to ask:

  • Does your organization have a learn-while-doing culture that is accepting of occasional failure or disappointment?
  • Do you have internal staff with an innovation mindset who can lead AI experimentation?
  • Do you have management backing to direct some organizational time and energy into innovation projects that may not have a clear ROI?

Take a cautious approach

Agree on some general parameters to keep exploratory AI work within accepted boundaries — recognizing the reality of innovation risk-taking while keeping the compass on track with the organization’s mission.

Here are some possible starting points:

  • Build around your champions.
    There are likely people in your organization already using or exploring ChatGPT who could form the core of a working group — formal or informal, preferably cross-team — to start sharing what works. Getting a few voices together can help clarify the risk vs. opportunity balance.
  • Be cautious, but don’t overthink it.
    The landscape of best-practice AI use policies and ethical red-lines is evolving rapidly, so avoid getting locked in a fixed position. Stay flexible, and lean on your existing procedures and policies for guidance about what is acceptable/permissible.
  • Humans stay in charge.
    This should go without saying, but always ensure there’s human oversight of AI outputs, whether it’s marketing content, data insights, or document analyses. AI chatbots and text generators are unreliable right now, but they will get better.
  • Stay focused on your mission.
    There’s a lot going on right now in the AI space, and it’s easy to get distracted by the latest shiny announcements (“Now with 3D video!”). Try to stay focused on what can actually help you deliver your programs now.
  • Prepare for disappointment first, then success.
    Innovation rarely follows a linear path forward. Expect to hit the ‘trough of disillusionment’ along the way, and avoid putting too many eggs in one basket. Try to manage expectations — and risks.

These ground rules can help create a predictable, supportive environment for staff to begin to push the envelope.

Do you need an organization-wide AI mandate?

AI Hesitation may not be so easily overcome. Internal champions still have to justify devoting time and resources to AI projects that may not be in official plans or strategies.

It may be helpful to consider adopting an organizational ‘AI Mandate’ that formally recognizes the opportunity presented by new AI technologies and intentionally directs staff to begin explorations, within reasonable boundaries.

An AI Mandate could be a simple single-paragraph statement, or a more complex document. Its goal is to empower staff to move ahead with learning and experimentation, understanding that the organization is comfortable with the uncertainty and risks.

— — —

Here is a sample Mandate statement generated by ChatGPT — modify as needed to suit your organization’s priorities and circumstances:

AI Mandate Template

AI for Good: Nonprofit Organization Mandate

Mission Statement: Our nonprofit organization is dedicated to harnessing the power of Artificial Intelligence (AI) for positive social and environmental impact while prioritizing careful risk assessment and mitigation. We will seek to use AI technologies to further our mission, promote ethical AI practices, and ensure that the benefits of AI are accessible to all while actively preventing potential negative consequences.

Core Values:

  1. Innovation: We will continuously explore and promote innovative AI solutions, while rigorously evaluating potential risks and challenges and actively seeking to mitigate them.
  2. Ethical and Responsible AI: We will adhere to the highest standards of ethical AI development and deployment, emphasizing the proactive identification and mitigation of biases, discrimination, and potential harm.
  3. Accessibility: We are committed to democratizing AI, ensuring that AI tools and knowledge are accessible to underserved communities, nonprofits, and individuals. Simultaneously, we will carefully evaluate and minimize risks associated with AI accessibility.
  4. Collaboration: We believe in the power of partnerships and collaboration, not only to amplify our impact but also to collectively assess and manage the risks associated with AI projects.
  5. Transparency: We will maintain transparency in all our activities, including risk assessment and management, from project selection and funding allocation to AI development and data usage. We will be open and accountable to our stakeholders.

With this mandate, our nonprofit organization is committed to investigating the potential of AI and striving to further our mission through the responsible and impactful use of artificial intelligence, while taking a proactive approach to risk assessment and mitigation to prevent potential negative consequences.
