AI Experimentation canvas

Dan Sutch · CAST Writers · May 30, 2024

We have had a number of requests to share Generative AI (GenAI) policies that charitable organisations can adapt and reuse. The challenge is that GenAI covers a wide range of technologies (sometimes described as general-purpose technologies), meaning they can be used in many different contexts and operations: from internal processes to external services, from efficiencies within meetings to creative images and outputs. Similarly, the pace of change means it's too early in the development of GenAI to pick specific tools for staff development; every few weeks (or days) another whole set of capabilities, complexities and products emerges.

Rather than creating policies, at CAST we take two specific approaches. The first is taking an experimental approach, to ensure we're being mindful about safe, stoppable uses of GenAI. The second is being open about when we are and aren't using it.

Experimentation framework

[Image: the AI Experimentation canvas]

It's a simple canvas that helps us think through any use of GenAI. As the test or practice grows, we might use a more detailed version to document and think through the approach. We've got a Miro version that is easy to edit, and printable versions if Post-its and Sharpies are your style.

It's relatively self-explanatory, but the outline below explains each section's purpose.

Experiment title

Published? Are you sharing this with your team? Can you share it more openly so we can all learn together and make quicker collective progress?

Description: An overview of what you’re planning on doing. Why, with whom?

Hypothesis: What do you think (or hope!) will happen?

To test this we'll: A description of what you're going to do. Detail the plan so others can understand it clearly.

We’ll know if it works by measuring: What are your metrics of success? How will others know that you’ve proved your hypothesis?

Tools used: List the tools you're using for your experiment. We privilege sector- and community-owned technologies, because accountability for the tools sits within the sector and money stays within the sector.

Boundaries of the experiment: This is a critical area for consideration, as it's the boundaries that ensure this is a safe experiment. The boundaries may be time-based (we'll only trial this for a week) or cohort-based (we're only testing this with one team internally). As you think through the boundaries, you're exploring how to ensure the test is 'safe enough to try'.

Person in the loop: During the experiment (and potentially afterwards), what is your role, or the role of other people in this experiment? It may be designing the new tool, testing its use, checking in on those who are using it, and so on. Here you'll ensure you know who is accountable and responsible for the use and its outcomes.

Data and privacy: For this experiment, what data do you need to use or collect? Where is that data being processed or stored? What data has been used to train the tool you're using? How does this experiment build on other data policies your organisation holds? (This last question has the potential to be huge, so answer it in alignment with the boundaries of the experiment, and use the experiments to inform future policies.)

Engagement: How have you consulted, co-designed or engaged with those impacted by this experiment?

Success/Failure: Capture the data and the learning from the experiment.

Summary reflections: Reflecting on the experiment. Again, is this something you can share more widely to help others learn?

Links: Links to any tools or resources that are used in, or produced by, this experiment.
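For teams who want to log their experiments in a machine-readable way, the sections above can be sketched as a simple record. This is a hypothetical illustration only, not an official CAST schema; the field names, the `AIExperimentCanvas` class, and the `is_safe_enough_to_try` check are assumptions that simply mirror the canvas headings.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the canvas sections as a structured record.
# Field names mirror the headings above; this is not an official CAST schema.
@dataclass
class AIExperimentCanvas:
    title: str
    published: bool               # shared with your team, or more openly?
    description: str              # what you're planning, why, with whom
    hypothesis: str               # what you think (or hope!) will happen
    test_plan: str                # "To test this we'll ..."
    success_metrics: list[str]    # "We'll know if it works by measuring ..."
    tools_used: list[str]
    boundaries: list[str]         # e.g. time-based, cohort-based
    person_in_the_loop: str       # who is accountable and responsible
    data_and_privacy: str
    engagement: str
    outcome: str = ""             # success/failure data and learning
    reflections: str = ""
    links: list[str] = field(default_factory=list)

    def is_safe_enough_to_try(self) -> bool:
        # Minimal check: without explicit boundaries and a named person
        # in the loop, the experiment isn't yet "safe enough to try".
        return bool(self.boundaries) and bool(self.person_in_the_loop)
```

A canvas drafted without boundaries would then fail the `is_safe_enough_to_try()` check, prompting the team to define them before starting.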

If you have any reflections, additions or challenges to this canvas — please share them so we can improve it; please contact hello@wearecast.org.uk.

The second approach is being open about when we use AI. For now, that means adding an AI transparency statement, originated by Kester Brewin, to the end of our writing and presentations, to be clear about our use. This helps in conversations with partners, and also ensures we're being mindful of our own use. The simple table below is the current structure we use, in this case completed for this article.
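As a rough illustration of the idea, such a statement can be kept as a simple two-column table of activities and whether AI was used for each. The row labels and the `render_statement` helper below are assumptions for illustration only; they do not reproduce the exact wording of Kester Brewin's format.

```python
# Hypothetical sketch of an AI transparency statement as a two-column
# table; the row labels are illustrative, not the actual statement format.
rows = [
    ("Text generation", "No"),
    ("Image generation", "No"),
    ("Spell check and grammar check", "Yes"),
]

def render_statement(rows):
    """Render the statement as aligned plain text, one activity per line."""
    width = max(len(activity) for activity, _ in rows)
    return "\n".join(f"{activity.ljust(width)} | {used}"
                     for activity, used in rows)

print(render_statement(rows))
```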

We’ve put together a living library of AI resources that we hope will support you as you explore the challenges and opportunities associated with AI. Please take a look — and do let us know if there is anything you need particular support with.

[Image: AI transparency statement showing this article used spell check and grammar check]
