Best Practices in Prompt Engineering

Dr.Q writes AI-infused insights
6 min readMay 9, 2023

Prompt engineering is a critical skill to maximize the potential of Large Language Models (LLMs) like GPT-4 or BERT despite having different capabilities. The quality and precision of a prompt directly impact the accuracy and relevance of the generated responses. This article provides some best practices for prompt engineering that can help you obtain more accurate, relevant, and useful responses from LLMs.

“The hottest new programming language is English” — Andrej Karpathy

Photo by Jason Goodman on Unsplash

Best Practices

The following ten best practices should help you construct prompts to harness the potential of LLMs.

  1. Identify the primary goal and subject of your prompt. A well-engineered prompt should have a clear and specific objective, ensuring that the generated response is relevant to the intended purpose.
  2. Use clear and concise language. Avoid complex vocabulary or jargon in your prompt, as it may lead to confusion or ambiguity in the generated response. Use simple and unambiguous language that is easily understood by the model.
  3. Ensure coherence and consistency. Make sure the language and tone remain consistent throughout. The prompt should flow logically and coherently, with each sentence building upon the previous one.
  4. Provide context. Incorporate relevant background information or examples in your prompt, as this can help the LLM better understand the subject matter and generate a more accurate response.
  5. Provide explicit instructions. Be specific when framing your prompt, specifying details like timeframes, locations, or conditions to generate targeted responses.
  6. Use open-ended questions. Instead of closed-ended questions that elicit limited responses, use open-ended questions to encourage more comprehensive and informative answers.
  7. Avoid or minimize bias. Be neutral when framing your prompt, avoiding any biases or assumptions that may influence the generated response.
  8. Follow formatting guidelines. Provide clear and concise instructions for formatting, style, and structure in your prompt to ensure the generated response adheres to your desired format.
  9. Separate instruction from context. Construct your prompt by starting with the instruction, and use delimiters such as ### around the input text to help the model understand what you want it to do and what information it can use. Here is an example: Translate the text below into French. Text: ###{text you want to translate}###.
  10. Test and refine your prompts. The perfect prompt is constructed through an iterative process, so you should experiment with different prompts and adjust them as needed based on the generated responses, refining them to achieve the desired outcome.

What is meant by zero-shot and few-shot? Zero-shot means providing the model with instruction but no examples, and few-shot means providing instruction as well as one or more examples.

Examples

Here are a few examples for demonstration purposes.

Example #1: Step by step instructions

Consider the following self-explanatory prompt I tested from a Haystack presentation. Note the incorrect answer in the first trial, but with step by step instructions (second trial) — used to facilitate a step-by-step processing before answering the question, ChatGPT provided the correct answer . The research paper Large Language Models are Zero-Shot Reasoners provides more details and shows that ChatGPT accuracy improves if you simply append “Let’s think step by step” to your prompt.

Show solution step by step

Example #2: Generate output in JSON format

In this example clear instructions are provided for producing output in JSON format.

Prompt to produce output in JSON format

Example #3: Use delimiters

This example shows how to use delimiters to separate instructions from context.

Use delimiters around input text

Example #4: Chain of thought

Consider the following prompt from the research paper Program-aided Language Models. When I tried this prompt, ChatGPT provided the correct answer as you can see below.

Example from the paper ‘Program-aided Language Models’

But Bing Chat provided an incorrect answer on the first try as shown here.

Incorrect output from Bing Chat

I asked Bing Chat to generate Python code to solve the same problem. As you can see from the following output, the logic is incorrect but easy to fix. Bing Chat also volunteered to calculate profit made — it is going beyond its boundary, and this is called hallucination.

Incorrect program

A prompt generator

Consider the following Haystack prompt that you can use to generate other prompts for a given input to maximize the potential of LLMs. Note how the prompt is structured to follow a set of guidelines to generate a prompt for a given input. The source of this prompt is https://tinyurl.com/HaystackPrompt

ChatGPT, I would like to request your assistance in creating an AI-powered prompt rewriter, which can help me rewrite and refine prompts that I intend to use with you, ChatGPT, for the purpose of obtaining improved responses. To achieve this, I kindly ask you to follow the guidelines and techniques described below in order to ensure the rephrased prompts are more specific, contextual, and easier for you to understand.

Identify the main subject and objective: Examine the original prompt and identify its primary subject and intended goal. Make sure that the rewritten prompt maintains this focus while providing additional clarity.

Add context: Enhance the original prompt with relevant background information, historical context, or specific examples, making it easier for you to comprehend the subject matter and provide more accurate responses.

Ensure specificity: Rewrite the prompt in a way that narrows down the topic or question, so it becomes more precise and targeted. This may involve specifying a particular time frame, location, or a set of conditions that apply to the subject matter.

Use clear and concise language: Make sure that the rewritten prompt uses simple, unambiguous language to convey the message, avoiding jargon or overly complex vocabulary. This will help you better understand the prompt and deliver more accurate responses.

Incorporate open-ended questions: If the original prompt contains a yes/no question or a query that may lead to a limited response, consider rephrasing it into an open-ended question that encourages a more comprehensive and informative answer.

Avoid leading questions: Ensure that the rewritten prompt does not contain any biases or assumptions that may influence your response. Instead, present the question in a neutral manner to allow for a more objective and balanced answer.

Provide instructions when necessary: If the desired output requires a specific format, style, or structure, include clear and concise instructions within the rewritten prompt to guide you in generating the response accordingly.

Ensure the prompt length is appropriate: While rewriting, make sure the prompt is neither too short nor too long. A well-crafted prompt should be long enough to provide sufficient context and clarity, yet concise enough to prevent any confusion or loss of focus.

With these guidelines in mind, I would like you to transform yourself into a prompt rewriter, capable of refining and enhancing any given prompts to ensure they elicit the most accurate, relevant, and comprehensive responses when used with ChatGPT. Please provide an example of how you would rewrite a given prompt based on the instructions provided above.

Here’s my prompt: [INSERT PROMPT HERE]

I tested the above prompt with this input: generate an article about q-learning for edge computing.

ChatGPT generated the following prompt to use:

A prompt generated by ChatGPT

The above best practices in prompt engineering can help you harness the full potential of LLMs, generating more accurate and relevant responses that can be applied to a wide range of use cases.

To probe further

Prompt Engineering Guide

Example Prompts from OpenAI

Best practices for prompt engineering with OpenAI API

Your Guide to Communicating with Artificial Intelligence

A Teacher’s Prompt Guide to ChatGPT aligned with ‘What Works Best’

--

--

Dr.Q writes AI-infused insights

Qusay Mahmoud (aka Dr.Q) is a Professor of Software Engineering and Associate Dean of Experiential Learning and Engineering Outreach at Ontario Tech University