Rethinking innovation funding in the age of AI
Applicants can now use generative AI to craft powerful funding proposals.
What does it mean for organizations running competitive grants and innovation funds?
A significant shift is underway in the ever-evolving landscape of impact investing and competitive grant-making. In recent years, artificial intelligence (AI) has become a buzzword in many domains, including donor funding. It is pushing funding organizations to rethink how they approach innovation funding and how to apply the "do no harm" principle when delivering innovation for social, environmental, and economic impact.
At Caribou Digital, we’re keenly focused on how generative AI can shape and drive an inclusive and ethical digital world. Large language models (LLMs), like ChatGPT, have particularly piqued our interest in our fund management work and are prompting us to reflect on our approaches and practices. This blog post highlights some of these reflections.
Embracing LLMs in grant-writing: A double-edged sword
ChatGPT’s emergence has surfaced three critical lessons for consideration:
1) Generative AI can break down barriers to applying for grants (like time and skill gaps)
ChatGPT and other LLMs are impressively proficient at writing grant applications. There are even LLM-based tools focused specifically on grant writing, such as Grantable. The “traditional” grant application process has been a grueling task: complex, time-consuming, and disempowering for applicants. It often takes senior staff away from their day-to-day duties and regularly offers no reward for their efforts. Applicants are commonly unsuccessful because they fail to clearly and effectively convey their idea, innovation, or project plan. However, new tools, from Grammarly to grant-writing LLMs, have the potential to save applicants time and money in this process. They can make grant-writing more accessible and less intimidating, as well as reduce language barriers and address accessibility issues for applicants with disabilities.
2) Generative AI makes it easier to communicate compelling ideas clearly
Encouraging AI in grant proposals can democratize idea sharing, allowing a broader range of applicants to present their visions compellingly and coherently. AI could level the playing field for small organizations with limited or no access to experienced grant writers. Fund managers may also find that applicants with disabilities, and those who are neurodiverse, are better able to write applications without worrying about how their dyslexia (for example) might limit their chances of funding success. Caribou Digital’s working theory, then, is that a more diverse pool of applicants can now complete grant applications quickly and unlock critical funding.
3) Generative AI, if used effectively by fund managers, can encourage “unusual suspects” to apply to their grant programs
By lowering the traditional barriers to entry for grants, like time and language costs, LLMs open doors for a more diverse pool of innovators.
Here’s a case study to demonstrate how LLMs could reach “unusual suspect” innovators.
- As a fund manager, Caribou Digital usually requests grant applications in a single language: English. This is mainly because we manage grants in English, so all our policies, templates, and tools for tracking require input in English.
- We understand this immediately creates a bias against non-native English speakers, who have to convey complex, often technical ideas in their second or third language.
- If innovators could apply for community-based projects in more relevant languages (e.g., Swahili, Luganda, Arabic, Bengali, etc.), would more people apply with truly exciting and/or community-based ideas?
- Today, even basic LLM translation services can enable small, community-based organizations to quickly submit quality applications. Hypothetically, these tools would allow us to receive applications in local languages and dialects and to engage throughout the grant period in some of those languages, even if our team isn’t fluent in the language an applicant selects (a minimal sketch of what this could look like follows this list).
- But we also need to be highly conscious that these bold changes to our processes could introduce new biases, as LLMs are known to struggle to generate high-quality content in non-English languages. (See, for example, this article on AI language equity issues from Rest of World.)
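To make the translation step concrete, here is a minimal sketch of what it could look like. It assumes the OpenAI Python SDK and an API key in the environment; the function name, model choice, prompt wording, and example text are hypothetical choices for illustration, not a description of our actual process.

```python
# Illustrative sketch: translating an application submitted in Swahili into
# English for internal review. Assumes the OpenAI Python SDK (`pip install openai`)
# and an OPENAI_API_KEY environment variable; model and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()

def translate_application(text: str, source_language: str = "Swahili") -> str:
    """Return an English translation of an applicant's submission."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a careful translator. Translate the applicant's text "
                    f"from {source_language} to English, preserving meaning and tone "
                    "without embellishing the content."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Example: translate a short excerpt before passing it to reviewers.
excerpt = "Mradi wetu unalenga kuwawezesha wakulima wadogo kupata masoko."
print(translate_application(excerpt))
```

The same pattern could run in reverse, translating reviewer feedback back into the applicant’s language during the grant period; the quality caveats for non-English languages noted above would apply in both directions.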
How can we identify authentic talent? Why we are rethinking our practices
In the context of generative AI and grant-making, fund managers need to be acutely aware of how biases could get built into project design. Even without the widespread use of LLMs, there is almost always bias in the selection of grants, so it is logical to assume LLMs can exacerbate existing (or even create new) bias in grant-awarding processes.* This selection bias makes it challenging to engage confidently with AI grant-making tools. It is our responsibility as fund managers to work actively to keep conscious and unconscious bias out of the process.
If, for example, fund managers allow applicants to use AI tools, we must also invest in rigorous evaluation of bias, perhaps even bringing in critical colleagues as independent review teams to check our application processes. By doing so, we can help ensure that using AI in grant-making does not inadvertently perpetuate existing inequities.
While AI can polish and perfect an application, it’s essential to develop mechanisms that enable fund managers to capture the authentic talent behind “artificial intelligence.” It’s time to rethink how we structure our submission practices and interfaces. We must find ways for applicants to demonstrate their authentic selves beyond the more polished face that LLMs and other AI tools can provide. This requires a fundamental shift in our approach: embracing AI where it enhances equity and inclusion while remaining vigilant against its potential to introduce new forms of bias.
At Caribou Digital, we’re committed to exploring innovative methods that allow for a more genuine representation of applicants’ potential. By doing so, we can ensure that the best ideas, no matter where they come from, have a fair chance to shine. We’re currently thinking about ways we can support genuineness in applications, such as:
- Allowing applicants to provide a video application (rather than solely text-based applications).
- Reducing or removing the need for computer access by running an application process via WhatsApp or another mobile channel, for example.
- Plugging into existing platforms that allow applications to be submitted from an existing profile or organizational presence (e.g., F6S or LinkedIn).
- Working with community-based organizations that can make initial recommendations or referrals on behalf of potential grantees, removing the need for lengthy written applications.
We know that none of these ideas will eliminate bias in grant applications and assessments (some might even exacerbate it). However, AI tools in grant-writing have highlighted the need for innovation in how we assess authenticity and potential, and it’s time to test some new approaches.
Please reach out if you’d like to discuss this further.
*The perception of bias varies widely; what seems unbiased to one person may be seen differently by someone with a different background or political belief. One excellent showcase of examples of this is the Rest of World AI series.