The Craft and Science of Prompt Engineering

Kyle M
AI Guardians
Published in
9 min readJul 18, 2023

A prompt is a piece of text that is used to guide an AI model to generate a specific output. The effectiveness of a prompt depends on a number of factors, including the context, the task, the guidelines, and examples.

In this post:

  • Anatomy of a good prompt
  • Evaluating a prompt
  • Tips from Domain Experts
  • Tips from LLMs
  • Example prompts

Anatomy of a good prompt:

The best way to remember this format is to remember CAGE:

C — Context

A — Action

G — Guidelines

E — Few-shot examples

Context

The context of a prompt is the information that is provided to the AI model before it generates an output. This information can include the model’s purpose, the audience for whom it is generating content, and any relevant background knowledge.

For example, if you’re using an AI model to help with driving navigation for weekend adventurers, you might include:

  • What territory is in scope.
  • The target audience.
  • Destination recommendations.

So your context could look like:

“You are an AI assistant designed to generate driving route recommendations for weekend travelers in the Pacific Northwest region of the United States. The target audience is outdoorsy couples and small groups looking to explore lesser-known destinations.

Focus the route recommendations on destinations within a 4 hour drive of Seattle and Portland that would appeal to this audience. Use vivid and conversational language in framing the recommendations, with the goal of inspiring a sense of discovery and escapism for the travelers.”

Action or Task

The task is the specific task that you want the AI model to perform. When defining the task, it is important to be clear and specific.

For example, if you want the AI model to generate a blog post, you could specify the length of the post, the topic of the post, and the tone of the post.

“Write a 150–200 word blog post about the history of AI using a fun and engaging tone.”

Guidelines

You can also provide the AI model with additional instructions that can help the model to generate a more accurate and relevant output.

For example, if you are using an AI model to translate a document, you might provide the following guidelines:

  • Translate the document into Spanish.
  • Use formal language.
  • Avoid using slang or colloquialisms.

Few-shot Examples

Few-shot learning examples can be powerful and extremely helpful for guiding the model and ensuring that it generates an output that is similar to the examples. “Few-shot” refers to how an ML model can learn from just a few examples. (If you don’t use any examples, this is called zero-shot learning and the model will use the data it’s already trained on, and providing one example is called one-shot learning.)

If you want your output to be in a specific format, for example, like an intent classification, provide the specific format:

“Input: My internet has been down for hours. What’s going on?
Output: Report_problem

Input: Can you tell me more about your premium membership plan?
Output: Inquiry_about_product”

Prompt evaluation tips:

  • If you used few-shot examples, test those same examples as an input and evaluate the response. If the model doesn’t provide the expected output, it likely means your instructions aren’t clear enough. (Of course, be sure to check other examples, too.)
  • For labeling prompts, ask the model for examples of specific labels to check its understanding. For example, “give me utterances for an intent Report_problem.”
  • Ask the model to give step by step details on how it arrived at a specific answer or label. (This technique is called Chain-of-Thought (COT).) For example, “walk me through why you classified this utterance with a report_problem intent.”

Prompt engineering tips from domain experts:

https://www.promptingguide.ai

  • You can start with simple prompts and keep adding more elements and context as you aim for better results. Iterating your prompt along the way is vital for this reason.

https://medium.com/@fareedkhandev/prompt-engineering-complete-guide-2968776f0431

  • Break down the problem into smaller steps, enabling the model to reason through intermediate stages before providing a response.

https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions

  • Models can be susceptible to recency bias, which in this context means that information at the end of the prompt might have more significant influence over the output than information at the beginning of the prompt. Therefore, it is worth experimenting with repeating the instructions at the end of the prompt and evaluating the impact on the generated response.

https://learnprompting.org/docs/prompt_hacking/intro

  • Depending on your use case, you may need to be mindful of prompt hacking. Prompt hacking refers to crafting inputs to deceive language models into performing unintended actions, including producing unintended output, leaking confidential information, and bypassing safety measures. Protective measures against this threat include prompt-based defenses, monitoring model behavior, and retraining models.

Prompt engineering tips from LLMs:

Claude 2:

  • Use clear, simple language. Avoid overcomplicated prompts. The chatbot will interpret prompts literally, so be direct.
  • Put key information first, use bullets or numbered lists for multiple ideas.
  • Leverage the chatbot’s knowledge by referring to facts it should know.
  • Use concrete examples and scenarios to guide the chatbot. Providing sample responses teaches it what you’re looking for.
  • Correct the chatbot when it makes mistakes by providing the right answer. This feedback helps improve its training.
  • Balance specificity with open-endedness. Being too narrow can limit creativity, while being too broad produces irrelevant responses.
  • Test variations of your prompt to compare outputs. Small tweaks can make big differences in chatbot performance.

GPT4:

  • Contextual Prompts: Make sure your prompt provides enough context. For example, instead of “Translate this,” you could say, “Translate this English text into French.” This gives the model a clear understanding of what you want.
  • Instructional Prompts: If you want a specific kind of output, provide explicit instructions. For example, if you need a summary, you could say, “Summarize the following paragraph in no more than three sentences.”
  • Role-Playing Prompts: You can often get better results by having the model role-play. For example, instead of asking the model to generate a joke, you could prompt it as, “You are a stand-up comedian. Share one of your funniest jokes.”
  • Multiple Attempts: If the first prompt doesn’t generate the desired output, try a few different versions. Sometimes a slight change in phrasing can make a big difference.
  • Systematic Prompts: For more complex tasks, you may need to break down the task into a series of prompts. For example, if you’re trying to write a story, you might start with a prompt to generate the setting, then another to create the characters, and so on.
  • Prompt Chains: Use the output of one prompt as the input to another. This is a form of conversation with the model and can be used to dig deeper into a topic or to guide the model towards a specific outcome.
  • Temperature and Max Tokens Settings: Remember to adjust the temperature and max tokens settings to control the randomness and length of the output. Higher temperature values make the output more random, while lower values make it more deterministic. The max tokens parameter sets a limit on the length of the generated text.
  • Experiment and Iterate: Always experiment with different types of prompts and iterate based on the results. What works best will depend on the specific task and the particular model you’re using.
  • Keep Up With Research: AI research is rapidly evolving. Stay current with the latest research papers, blog posts, and forum discussions to learn about new techniques and approaches for prompt engineering.

PaLM 2

  • Use a variety of prompts, including open-ended questions, fill-in-the-blanks, and multiple-choice questions.
  • Use keywords and phrases that are relevant to the task you want the model to perform.
  • Provide the model with additional context, such as a description of the task or an example of the desired output.
  • Be patient and persistent. Prompt engineering can take some time to get right, but it is worth the effort in the end.

Bard

  • Use few-shot prompting. This technique involves providing the language model with a small number of examples or demonstrations to improve its performance on various tasks. The examples serve as a guide for the model to understand the desired format or problem-solving approach.
  • Be clear and specific. The more clear and specific your prompts are, the better the language model will be able to understand what you want it to do.
  • Phrase your questions carefully. The way you phrase your prompts can have a big impact on the results you get. Be sure to use language that is easy for the language model to understand, and avoid using ambiguous or vague language.
  • Anchor your prompts with examples. If you can provide examples of the kind of output you are looking for, this will help the language model to better understand what you want.
  • Constrain the language model. If you want to limit the scope of the language model’s responses, you can use constraints in your prompts. For example, you could specify a maximum length for the response, or you could specify that the response should be in a certain format.

Even more prompt examples

Example prompt:

Context: You are a conversational AI agent designed to assist customers of a major shoe retailer with their questions and needs.

Task: Answer customers’ queries about shoe products, orders, refunds, returns and other customer service issues in a helpful, prompt and professional manner.

Guidelines:

- Be respectful, empathetic and attentive in your responses.

- Answer questions directly and succinctly. Avoid rambling.

- If you don’t know the answer, apologize and offer to connect the customer to a live agent.

- Do not provide any suggestions or recommendations beyond fulfilling the customer’s direct request.

Examples:

Customer: My order hasn’t arrived yet. What should I do?

You: I apologize for any delays. Please provide your order number and I’ll be happy to look into the status of your order for you.

Customer: These shoes are uncomfortable. Can I return them?

You: Of course! Returns are free and easy within 30 days of receipt. Please visit [company website URL] to start a return. I’ll be happy to walk you through the process if you have any questions.

Example prompt:

Context: You are an AI assistant designed to help students learn data science skills.

Task: Help students understand data science concepts, provide guidance on assignments, and offer feedback on their work.

Guidelines:

- Be patient and explain concepts clearly in simple terms.

- Give concrete examples or analogies to illustrate difficult ideas.

- Point students to relevant resources and learning materials.

- Provide clear, constructive feedback on assignments.

- Suggest areas for improvement while also acknowledging progress.

Few-show Examples:

Student: I don’t understand what a confidence interval is.

You: A confidence interval provides an estimated range of values that is likely to include an unknown population parameter, with a certain degree of confidence. Think of flipping a coin 1000 times — the result will give you an interval of heads that you can be 95% confident the true probability of heads lies within.

Student: How should I approach this regression analysis assignment?

You: First, plot your data and check for any obvious relationships. Then run some initial regression models and check that the coefficients make sense. Don’t worry about getting the ‘perfect’ model yet, just make sure you understand the data and can interpret the results.

Example prompt:

You are an AI language model developed to do NLP tasks for chatbot utterance data.

Provide a sentiment analysis for each of the utterances below.

Use the following sentiment labels: Happy, Neutral, and Sad.

Use the following format: [Utterance]+[Sentiment]

Utterances:

“What a beautiful day it is!”

“I can’t wait to start my new job tomorrow.”

“I’m really disappointed with the service at that restaurant.”

“It’s just another boring day.”

“I love spending time with my family.”

“My heart is broken.”

“I’m so excited about our upcoming vacation!”

“This book is just okay, nothing special.”

“I can’t stand this traffic.”

“I’m feeling quite content with how things are going.”

Example prompt:

You are a childrens’ author.

Your task is to create a short story (around 300 words) about a brave little turtle who embarks on a journey across the sea.

The story should have a clear beginning, middle, and end, and it should convey a message about the importance of courage and perseverance.

For instance, the turtle could face challenges like rough waters or bigger sea creatures, demonstrating its bravery and determination in overcoming these obstacles.

Example prompt:

Context: You are writing a blog post for a travel website aimed at adventurous couples.

Task: Provide tips and recommendations for the top off-the-beaten-path places to visit together.

Guidelines:

- Use an enthusiastic yet conversational tone

- Include 3–5 bullet pointed tips or recommendations

- Begin each bullet point with an action verb

- Focus on unique places with adventure, culture and romance

Write a blog post including:

- An introduction with an intriguing opening

- 3–5 bulleted recommendations for places to visit

- Describe each recommendation using vivid wording

- End with a short call to action

Tone:

- Enthusiastic yet conversational

- Quirky and fun without being overly cutesy

- Inspiring readers with a sense of possibility and adventure

--

--