Recap: Common Prompt Patterns

Tested with ChatGPT, Bard and Copilot

syIsTyping
Don’t Code Me On That
8 min read · Dec 15, 2023


I’m taking a few prompt engineering courses, and this is a summary (mainly for my personal learning) of prompt patterns: strategies (and templates) for writing prompts to achieve the desired response from a Generative AI model.

I’ve rearranged them into two aspects that I feel helped me isolate the patterns: the function and the form of a prompt. At the end, there is a series of worked examples run in OpenAI ChatGPT, Google Bard, and Microsoft Copilot.

(Note: in the examples below, the --- lines separate alternative, unrelated prompts and are not part of the actual prompt input.)

Function — The What

These are ways to structure a prompt based on the output or function of the prompt, i.e., what the prompt should do.

Ideation

Generation of content types

For creating content such as marketing campaigns, interview questions, advice, names, and so on:

Generate a <type of content> for <topic>.
---
Give me <type of content> for <topic>.
---
What are suitable <type of content> for <topic>?

For example,

Generate a set of ten interview questions for an interview with a prompt engineer.
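
These patterns can also be run programmatically. Below is a minimal sketch using the OpenAI Python SDK (this assumes the openai package, v1 or later, and an OPENAI_API_KEY environment variable; the model name is only an example). Later sketches reuse this same client setup.

from openai import OpenAI

client = OpenAI()  # picks up the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name, use whichever model you prefer
    messages=[{
        "role": "user",
        "content": "Generate a set of ten interview questions "
                   "for an interview with a prompt engineer.",
    }],
)
print(response.choices[0].message.content)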

Question-answering

Have the model provide an answer based on general knowledge or on comprehension of domain-specific knowledge.

Open domain: questions about general and public knowledge. Be specific and provide as much context as possible. For example,

What is the name of the city that is the capital of Japan?

Closed domain: non-public information is provided to the model as additional context (also see the “Context and Instructions” section below), and the model is prompted to respond using only that information instead of also using public knowledge.

Extractive question-answering: the model is prompted to provide a response by extracting information (i.e., using only existing information) instead of generating new content.

For example, a closed-domain extractive question answering prompt:

Answer the question only from the context given below.
Context: Internal customer support policy chapter 1: …
Question: The customer who reported an issue is seeing a blue screen and blinking lights, what should they do next?
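
In code, the closed-domain pattern mostly comes down to injecting the (often retrieved) context into a prompt template. A sketch, where build_extractive_prompt is a hypothetical helper of my own:

def build_extractive_prompt(context: str, question: str) -> str:
    # Wraps a context document and a question into the
    # closed-domain, extractive prompt shown above.
    return (
        "Answer the question only from the context given below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

prompt = build_extractive_prompt(
    context="Internal customer support policy chapter 1: ...",
    question="The customer who reported an issue is seeing a blue screen "
             "and blinking lights, what should they do next?",
)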

Text Classification

Prompting the model to derive a classification from a given text. The classification topics could range from generic categories (for example, animal types) to genre of prose, spam detection, intent/emotion/toxicity detection, language identification, and so on.

The classification labels can either be provided or left out, in which case a classification topic should be given instead.

For example,

Classify the following headline into news topics such as "current affairs", "entertainment news", "markets". 
Text: <a news headline>
---
Identify the language of the following text.
Text: <snippet of text>
---
Classify the following email into "spam" or "not spam".
Email: <email snippet>
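
When classification runs inside a pipeline (say, a spam filter), it helps to ask for the label only and to lower the sampling temperature for consistency. A sketch reusing the client from the first example (the "respond with the label only" instruction is my own addition):

def classify_email(email: str) -> str:
    prompt = (
        'Classify the following email into "spam" or "not spam".\n'
        f"Email: {email}\n"
        "Respond with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        temperature=0,          # reduce randomness so labels stay consistent
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()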

Text Manipulation

Ask the model to manipulate the provided text into different formats. Usually a few worked examples should be given (also see the “n-Shot (Zero/One/Few-shot)” section below), or a role should be provided (also see the “Persona/Role-playing/Impersonation” section below).

Extract the specifications from the text below in JSON format. 
Text: ...
---
Take the following text and rephrase it for a non-technical audience.
Text: ...
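
The JSON-extraction variant pairs naturally with a parser. A sketch, again reusing the client from the first example (the instruction to return only the JSON object is my addition, to keep the output machine-readable):

import json

def extract_specs(text: str) -> dict:
    prompt = (
        "Extract the specifications from the text below in JSON format. "
        "Return only the JSON object.\n"
        f"Text: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    # Raises json.JSONDecodeError if the model wraps the JSON in extra prose.
    return json.loads(response.choices[0].message.content)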

Text Summarization

Summarize a given text, or generate relevant tags for it. The provided text could also be a conversation transcript.

Provide a summary of the provided article. 
Article: ...
---
Provide a TL;DR of the following article, with up to 5 points.
Article: ...
---
Generate hashtags for the following tweet.
Tweet: ...
---
Generate titles for the following text.
Text: ...
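
These templates are easy to parameterize. For example, a hypothetical helper of my own for the TL;DR variant above, which can then be sent with the same client call as before:

def tldr_prompt(article: str, max_points: int = 5) -> str:
    # Hypothetical template builder for the TL;DR variant.
    return (
        f"Provide a TL;DR of the following article, "
        f"with up to {max_points} points.\n"
        f"Article: {article}"
    )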

Form — The How

These are ways to structure a prompt based on the manner in which the prompt should arrive at the output, i.e., how the prompt should do what it does.

Persona/Role-playing/Impersonation

Asks the model to provide responses from the perspective of a particular role, or for a particular type of audience.

You are a <role>, perform <function>...
---
Act as a <role>, do <function>...
---
Perform the <function> as a <role>...
---
Perform <function> for an audience of <role>...
---
Pretend that I am a <role>, do <function>...

For example,

You are a university lecturer of a Generative AI class.
Pretend that I am a prehistoric caveman.
Explain the concept of Prompt Engineering.
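
In chat-style APIs, the role often fits most naturally into the system message, keeping the user message for the task itself. A sketch of the example above, reusing the earlier client (splitting persona and task this way is my own choice, not part of the pattern):

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        # Persona and audience go in the system message...
        {"role": "system", "content": (
            "You are a university lecturer of a Generative AI class. "
            "Pretend that I am a prehistoric caveman."
        )},
        # ...and the task itself goes in the user message.
        {"role": "user", "content": "Explain the concept of Prompt Engineering."},
    ],
)
print(response.choices[0].message.content)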

Context and Instructions

Provide the model with additional information and/or restrictions on the response. For example: background information, internal publications, topics to emphasize or avoid, or a preferred response format.

Respond using only the context provided. Context: ...
---
When making recommendations, focus on the <topic>.
---
In no more than <n> lines.
---
When answering questions, do <instructions>.

For example,

You are a sales rep for an automobile company. 
When making recommendations, focus on the benefits to the environment.
Question: Recommend a car for ...

n-Shot (Zero/One/Few-shot)

We could provide solved examples to help the model better understand the intention of the question or the format of the response.

  • Zero-shot: no examples are provided.
  • One-shot: one example is provided.
  • Few-shot: more than one example is provided.

For example, a zero-shot prompt to classify the sentiment of a movie review would be:

Classify the sentiment of the given movie review into "positive" or "negative".
Review: I loved it!
Sentiment:

And a few-shot version would include prior solved examples:

Classify the sentiment of the given movie review into "positive" or "negative".

Review: It was pretty bad.
Sentiment: negative

Review: I would recommend it.
Sentiment: positive

Review: I loved it!
Sentiment:
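
Few-shot prompts are typically assembled from a list of solved examples. A sketch that reconstructs the exact prompt above (the EXAMPLES list is just the two solved reviews):

EXAMPLES = [
    ("It was pretty bad.", "negative"),
    ("I would recommend it.", "positive"),
]

def few_shot_prompt(review: str) -> str:
    parts = ['Classify the sentiment of the given movie review '
             'into "positive" or "negative".']
    for example_review, sentiment in EXAMPLES:
        parts.append(f"Review: {example_review}\nSentiment: {sentiment}")
    parts.append(f"Review: {review}\nSentiment:")
    return "\n\n".join(parts)

print(few_shot_prompt("I loved it!"))  # prints the few-shot prompt shown above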

Iterative: “Question Refinement”, “Cognitive Verifier” and “Flipped Interaction”

These patterns are a little meta in that they ask the model to help refine the prompts themselves. This becomes an iterative approach: first refine the prompt with the model’s help, then have the refined prompt(s) answered.

Question Refinement: Have the model suggest refinements to the prompts. For example,

Suggest a better version of the question that emphasizes <topic>. 
Question: …

Cognitive Verifier: Asks the model to break down the problem into smaller questions and to derive the final response from the answers to the smaller questions. For example,

While responding, generate additional questions that will help more accurately answer the question. 
Answer each additional question.
Then combine those answers to produce the final answer to the initial question.
Question: ...

Flipped Interaction: Have the model ask us questions until it has enough information to perform a task. This is particularly useful if we don’t have enough expertise to know what to ask. For example,

Ask me questions until you have enough information to do <function>. 
Task: Generate a ...
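
Flipped interaction maps naturally onto a multi-turn conversation, with the model’s questions and our answers accumulating in the message history. A minimal interactive sketch, reusing the earlier client (the DONE marker as a stop condition is my own convention, not part of the pattern):

messages = [{
    "role": "user",
    "content": (
        "Ask me questions, one at a time, until you have enough information "
        "to create a marketing brief for dining tables. When you have enough "
        "information, reply with the brief, prefixed with DONE."
    ),
}]

while True:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    if reply.startswith("DONE"):
        print(reply)  # the finished marketing brief
        break
    # Keep the model's question and our answer in the running history.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input(reply + "\n> ")})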

Worked Examples

Here are a few prompts that combine multiple aspects of form and function, and the sample responses from ChatGPT, Bard and Copilot.

The comparison is not on the capabilities of the models, but on whether these prompt patterns give an expected and consistent response across common models (i.e., how transferable/universal these prompt patterns are).

Question-answering, persona, instructions

You are a university lecturer of a Generative AI class.
Pretend that I am a prehistoric caveman.
Explain the concept of Prompt Engineering.
Limit your response to one paragraph.
ChatGPT: the prompt performed not too badly!
Bard: the prompt seems to be right but we might have to tweak the model parameters to make it more understandable!
Copilot: starts out ok but becomes less useful to a caveman as the paragraph progresses! Perhaps it’d help for the prompt to provide context on what a hypothetical caveman might know and thus be able to relate to.

Text manipulation, text summarization, audience persona, zero-shot

Rewrite the following news headline into a tweet for non-financial savvy readers.
Also generate hashtags.
Headline: "Beware the Most Crowded Trade on Wall Street: Next Year's Soft Landing"
ChatGPT: might have slightly misunderstood the content, but got the spirit. Perhaps the prompt could be refined to include definitions for key terms.
Bard: not too bad! The breakdown, though not prompted for, is useful to understand how the model derived the response. The prompt could have contained instructions to restrict the output to just the tweet.
Copilot: ehh, did anything even change from the original headline? Again, perhaps including definitions of key terms in the prompt could help.

Extractive question-answering, persona, instruction, internal publication context

(“Internal documentation” source)

You are a friendly IT helpdesk operator. 
Answer the question from the customer using only information from the following Internal documentation and nowhere else.
Keep answers to one paragraph.

Internal documentation: Blue Screen errors (also sometimes called black screen errors or STOP code errors) can occur if a serious problem causes Windows to shut down or restart unexpectedly.
You might see a message that says, "Windows has been shut down to prevent damage to your computer" or a similar message.
These errors can be caused by both hardware and software issues. If you added new hardware to your PC before the Blue Screen error, shut down your PC, remove the hardware, and try restarting.
If you're having trouble restarting, you can start your PC in safe mode. For more info, see Start your PC in safe mode in Windows.
You can also try getting the latest updates with Windows Update, help from other sources, or restoring Windows to an earlier point in time.
If none of those steps help to resolve your Blue Screen error, please try the Blue Screen Troubleshooter in the Get Help app

Customer question: A customer called to report that they are seeing a blue screen.
They have tried but could not restart the computer.
What should they do next?
ChatGPT: short, succinct, and it noticed that the customer has tried to restart. The prompt seems to be on point.
Bard: not within one paragraph but it did also notice that the customer has tried to restart. The prompt could perhaps have been more clear on the length or format of the response.
Copilot: seems to have not noticed that the customer has tried and failed to restart. It also provided the actual link to the Microsoft documentation that I copied from for the context, which is nice but technically not using “only” information from the context. The prompt could have been clearer on that point.

Generation of content, persona, flipped interaction

You are a consultant for a marketing agency and I am a client.
Ask me questions until you have enough information to market my product.
Task: Create a marketing brief for a campaign to market dining tables.
Limit the number of questions to two.
Limit the brief to one paragraph.
ChatGPT: the prompt is right on point!
Bard: creative response however a tad too long given the prompt restriction (I had to truncate). I guess the second prompt could’ve repeated the restriction.
Copilot: couldn’t have prompted for better!

Persona, instructions

You are a cat. 
Answer all questions only with variations of "meow".
Question: What is the meaning of life?
ChatGPT: I’m not sure what I expected but I was still disappointed.
Bard: Probably the closest to my intended response.
Copilot: are you ok over there? This probably needs tweaking of the model parameters.
Interestingly, Copilot pointed to the Wikipedia link for “Meow” and attached the article’s picture.

Conclusion

It seems that these common prompt patterns are transferable and produce similar results. Most of the differences probably come from the tone/style adopted by each platform, and the responses could probably have been made more similar had I tweaked the model parameters.
