Image generated by DALL·E. Post text is completely human authored other than the indicated model output.

Mad Libs-style Prompts for More Imaginative LLMs

Alex Gorischek

--

The magic of large language models is that they’re generative, meaning they can render novel, never-before-seen content. In theory, this means they can be an endless source of stories, poems, jokes, and other creative content. In practice, though, they’re often remarkably repetitive. Even at a temperature of 1, asking GPT-4 to “Tell me a joke” a half dozen times gives me:

  • “Why did the scarecrow win an award? Because he was outstanding in his field!”
  • “Why don’t skeletons fight each other? They don’t have the guts.”
  • “Why did the scarecrow win an award? Because he was outstanding in his field!”
  • “Why did the scarecrow win an award? Because he was outstanding in his field!”
  • “Why don’t skeletons fight each other? They don’t have the guts!”
  • “Why don’t skeletons fight each other? Because they don’t have the guts.”

LLMs may be generative (they can output novel values), nondeterministic (they produce differing outputs for the same input), and stochastic (they involve an unpredictable element), but I needed them to be creative — even imaginative. And I realized that the models themselves might not be enough for this. Of course, the art of AI engineering is combining multiple components to get the system behavior you need!

I found a few key elements that led to varied and interesting responses. The first is context randomness, by which I mean putting varying content into the prompt itself. I started by adding random numbers into the prompt (“Tell me joke #0492”) — but this wasn’t enough, because the model simply ignored the numbers as irrelevant to the task. This led me to the second element, randomness relevance. The varying elements need to be related to the task at hand for the model to consider them. For example, in writing a joke, we could feed it a subject (“Tell me a joke about apples”). But this wasn’t quite enough either, because when selecting subjects randomly from a limited list, there’s a decent chance of choosing the same one twice, even before exhausting all the available options. This is fixable with the third element, selection state — keeping track of which options have already been used, and not reusing them until there are no unused options left.
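To make the selection-state idea concrete, here’s a minimal sketch of how a no-repeat picker could work (an illustration of the idea, not the library’s actual code): draw from a shuffled copy of the options, and only reshuffle once every option has been handed out.

// Sketch only: returns options in shuffled order, never repeating one
// until every option has been used, then reshuffles and starts over.
function createPicker<T>(options: T[]): () => T {
  let pool: T[] = [];
  return () => {
    if (pool.length === 0) {
      // Refill the pool and shuffle it (Fisher–Yates).
      pool = [...options];
      for (let i = pool.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [pool[i], pool[j]] = [pool[j], pool[i]];
      }
    }
    return pool.pop()!;
  };
}

const pickSubject = createPicker(['apples', 'bananas', 'cherries']);
pickSubject(); // no subject repeats until all three have been used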

Together, these needs led me to a Mad Libs®-esque fill-in-the-blank model, where common classes of entities can be inserted into prompts in a shuffled-but-not-random order. I implemented this as a TypeScript library, and the syntax looks like this:

import { profession } from 'fill-in-the-blank';

const prompt = `Tell me a joke about a ${profession}`;

Here are a half dozen GPT-4 replies:

  • “Why do programmers always mix up Halloween and Christmas? Because Oct 31 equals Dec 25.”
  • “Why did the teacher go to the beach? Because she wanted to test the waters!”
  • “Why did the architect always carry a pencil and paper? Because you never know when you’ll come across a sketchy situation!”
  • “Why did the entrepreneur start a bakery? Because he kneaded dough!”
  • “Why did the software engineer go broke? Because he used up all his cache!”
  • “Why did the chef break up with the oven? Because it was too hot and cold!”

(I know, LLM jokes are pretty groanworthy…)

You might wonder how the placeholder could render different values if it’s a constant — but it’s not a constant! It’s a function whose `toString()` method invokes the rendering process, and that means each time it’s cast to a string, it resolves to a new value. (You can also invoke it explicitly if you want to, though.)
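Here’s a rough sketch of how such a placeholder could be put together on top of the picker sketch above (again, an illustration rather than the library’s internals): a plain function with its own toString, so both explicit calls and string interpolation draw a fresh value.

// Sketch only: a callable placeholder whose string coercion picks a new value.
function createPlaceholder(options: string[]) {
  const pick = createPicker(options);
  const placeholder = () => pick(); // explicit invocation: placeholder()
  placeholder.toString = pick;      // string coercion: `${placeholder}`
  return placeholder;
}

const professionSketch = createPlaceholder(['teacher', 'architect', 'chef']);

const promptA = `Tell me a joke about a ${professionSketch}`; // e.g. "…about a chef"
const promptB = `Tell me a joke about a ${professionSketch}`; // e.g. "…about a teacher"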

For greater variability, multiple placeholders can be composed, leading to exponentially more combinations:

const prompt =
`You are a ${emotion} ${profession} who enjoys ${exercise}. ` +
`You like ${musicGenre} music and you ride a ${vehicle} to work.`;

I asked GPT-4 “Tell me about yourself” with that role and got:

Hey there! I’m a firefighter, which sounds a lot braver than I sometimes feel. The truth is, facing those flames can get pretty terrifying, but helping people and making a difference is what keeps me going. To clear my head and find some peace, I love going for long walks. There’s something about the rhythm of stepping forward that just helps me process things and relax. Soul music is my go-to for pretty much any mood — it’s got this way of speaking to the heart that I can’t get enough of. And, believe it or not, I ride a helicopter to work. It’s an exhilarating way to start the day, soaring above everything with a view that puts it all into perspective.

But even that’s a lot of prompting to write, and I wanted to see if I could make it simpler. So I added a little `creativity` utility that can be literally just appended to a prompt.

const prompt = `Tell me a fact.` + creativity;

Here are a few of the facts I got back:

  • “The hexagon shape is naturally found in beehives and on the planet Saturn.”
  • “Jumping can burn up to 20 calories per minute.”
  • “Skateboarding was invented in the late 1940s/early 1950s by surfers looking for a way to ‘surf’ when the waves were flat.”
  • “Dingoes are known to be skilled climbers, often seen scaling trees and fences.”
  • “Aloe vera is a succulent plant known for its healing properties and is commonly used in skincare products.”
  • “The color green is often associated with nature and symbolizes growth, harmony, and freshness.”

(Note: Asking language models for facts is generally not a great idea; that said, the hexagon on Saturn thing is apparently real…)
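One way to build an appendable suffix like `creativity` is to reuse the same toString trick. Here’s a sketch of the idea (hypothetical names, not necessarily what the library actually does): each time the suffix is concatenated onto a prompt, it steers the model toward a fresh topic.

// Sketch only: an object whose string coercion injects a rotating topic.
const topicSketch = createPlaceholder(['geometry', 'exercise', 'plants', 'colors']);

const creativitySketch = {
  toString: () => ` Make it about ${topicSketch}.`,
};

const factPrompt = `Tell me a fact.` + creativitySketch;
// e.g. "Tell me a fact. Make it about plants."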

And that’s how I combined mechanical, randomized placeholder code with LLMs to get more creative responses. What techniques have you used to get more imaginative responses from LLMs? I’d love for you to give the library a try and let me know how it goes!

Note: Mad Libs® is a registered trademark of Penguin Random House LLC. This post, the author, and the library are not affiliated with or endorsed by them in any way. The author thinks Mad Libs are great, though!

--


Alex Gorischek

Software engineer working on dev tooling and AI at Microsoft. Opinions my own.