Prompt Engineering Guide

Tauseef Ahmad
6 min read · Apr 22, 2024


Prompt engineering is a relatively new field that involves developing and optimizing prompts to use large language models (LLMs) efficiently for a wide variety of applications and research topics. Engineering prompts also helps us better understand the capabilities and limitations of LLMs. In this blog, we will cover the basics of prompt engineering along with the popular prompt engineering techniques used by researchers and developers.

What is a prompt?

A prompt is a set of instructions given to a generative AI model to perform a specific task. A prompt can take various forms depending on the task or application: it may consist of one or more sentences, keywords, questions, or even structured data.

What is prompt engineering?

Prompt engineering is an iterative process of designing and refining a generative AI prompt to improve its accuracy and effectiveness. Let’s explore this using a real-life example. Say you want to ask Siri, Google Assistant, or any other voice assistant for a recipe. If you just say “Share a recipe”, the AI will probably provide a random recipe, which might not be exactly what you’re looking for. However, if you specify, “Give me a quick vegan breakfast recipe under 300 calories,” you have engineered your prompt to narrow down the result produced by the generative AI model.

Importance of prompt engineering

Prompt engineering bridges the gap between end users and the large language model. A well-crafted prompt can significantly enhance the accuracy, relevance, and usefulness of the outputs generated by LLMs. By refining prompts, users can guide the AI to understand the context effectively, avoiding pitfalls like ambiguity or bias and tailoring responses to specific needs or tasks. This makes prompt engineering an important skill for optimizing interactions with AI applications, ensuring that they deliver high-quality, contextually appropriate results.

A simple prompt engineering example using OpenAI API:

First, let’s do the initial code setup to connect to the OpenAI API. We need to install the openai library and then import the OpenAI client class along with the os module. We also set the OPENAI_API_KEY environment variable to an API key generated from the OpenAI account dashboard. Next, we define a helper function get_completion, which makes it easier to send prompts and view the output. For this blog, we will be using the OpenAI gpt-3.5-turbo model. Note: you can also use an LLM of your choice, like Gemini 1.5 Pro, Llama 3, or Mistral 7B, to replicate the examples used in this blog.

Initial setup
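
The original post shows this setup as a screenshot; below is a minimal sketch of what it describes, using the OpenAI Python SDK (v1.x). The helper name get_completion and the gpt-3.5-turbo model come from the text, while temperature=0 is an added assumption to keep outputs reproducible.

```python
# pip install openai

import os
from openai import OpenAI

# The client reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single-turn prompt to the model and return the text of its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # assumption: deterministic output for reproducible examples
    )
    return response.choices[0].message.content
```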

Let’s consider a task where we need to extract the features of a PlayStation 5 (PS5) product listed on Amazon from its product description.

Product description
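
The listing itself also appears as an image in the original post. The stand-in below is illustrative rather than the verbatim Amazon text, reconstructed from the features the two summaries further down mention:

```python
# Illustrative stand-in for the Amazon listing (not the verbatim description).
product_description = """
The PS5 Digital Edition is an all-digital version of the PS5 console with no
disc drive. Sign in to your PlayStation Network account and buy and download
games from the PlayStation Store. Experience lightning-fast loading with an
ultra-high-speed custom SSD, deeper immersion with support for haptic feedback,
adaptive triggers and 3D Audio, and stunning graphics powered by a custom CPU
and GPU.
"""
```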

If we pass the prompt ‘Summarize the product description’, we get a very detailed response from the GenAI model.

Initial raw prompt
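
A sketch of that call, reusing the helper and the stand-in description above (the exact prompt layout in the original screenshot may differ):

```python
prompt = f"""
Summarize the product description below.

Product description: {product_description}
"""
print(get_completion(prompt))
```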

The PS5 Digital Edition offers lightning fast loading, deeper immersion with haptic feedback and adaptive triggers, and stunning graphics. It is an all-digital version of the PS5 console with no disc drive, allowing users to buy and download games from the PlayStation Store. The console harnesses the power of a custom CPU, GPU, and SSD for a truly immersive gaming experience.

However, if we engineer the prompt with clearer and more precise instructions, like ‘Summarize the description below in less than 30 words, focusing on product features and functionality’, we get a more desirable output that answers the intended query of the user.

Engineered prompt
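
The same call with the engineered prompt:

```python
prompt = f"""
Summarize the description below in less than 30 words, focusing on product
features and functionality.

Product description: {product_description}
"""
print(get_completion(prompt))
```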

PS5 Digital Edition offers lightning speed SSD, haptic feedback, adaptive triggers, and 3D Audio for immersive gaming experience without a disc drive.

Generic prompt engineering tips

  1. Start simple: Prompt engineering is an iterative process that requires extensive experimentation to get optimal results. Start with a simple prompt and keep adding elements and context until the results improve.
  2. Clear instructions: Design effective prompts by using commands that direct the model toward your end objective, such as ‘Write’, ‘Classify’, ‘Translate’, or ‘Summarize’.
  3. Specificity: Be very specific about the instruction and the task you want the model to perform. The more descriptive and detailed the prompt is, the better the results.
  4. Avoid imprecision: Detail is good, but rambling or overly clever descriptions can blur the instruction. Direct and specific prompts communicate intent effectively, much like clear communication between humans.

Prompting techniques

In this section, we go through some of the popular techniques that researchers and prompt engineers use to design effective prompts.

Zero-shot prompting

This is the simplest and most direct method of prompt engineering: the generative AI model is asked a question or given a direct instruction, without any examples or additional information to steer it. Zero-shot prompting works best for relatively simple tasks like the one shown below.

Example: Zero-shot prompting
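
The original example is shown as an image; sketched here with an illustrative sentiment-classification task, a zero-shot prompt contains only the instruction and the input, with no solved examples:

```python
# Zero-shot: instruction and input only, no demonstrations.
prompt = """
Classify the text into neutral, negative or positive.

Text: I think the vacation was okay.
Sentiment:
"""
print(get_completion(prompt))
```

The model typically replies with a single label, such as ‘Neutral’, despite never having seen a solved example in the prompt.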

Few-shot prompting

Few-shot prompting is a useful technique that enables in-context learning: we supply the generative AI with a few demonstrations in the prompt to steer it toward better performance. Let’s consider an example presented in Brown et al. (2020), where the task is to correctly use a new word in a sentence.

Example: Few-shot prompting
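
A sketch of that example as a prompt: a single demonstration shows the model how to use one made-up word (“whatpu”) in a sentence, and the model is then asked to do the same for a second made-up word (“farduddle”):

```python
# One-shot demonstration followed by the task the model must complete.
prompt = """
A "whatpu" is a small, furry animal native to Tanzania. An example of a
sentence that uses the word whatpu is:
We were traveling in Africa and we saw these very cute whatpus.

To do a "farduddle" means to jump up and down really fast. An example of a
sentence that uses the word farduddle is:
"""
print(get_completion(prompt))
```

The model picks up the pattern from the single demonstration and completes it with a sentence that uses the new word correctly.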

Chain-of-thought (CoT) prompting

Introduced in Wei et al. (2022), this method improves an LLM’s output on complex tasks by breaking the reasoning into intermediate steps that mimic a train of thought, rather than answering the question directly.

Source: Wei et al. (2022)

You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding.

Example: Chain-of-thought prompting
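
A sketch of the canonical example from Wei et al. (2022): the demonstration answer spells out the intermediate arithmetic, so the model is steered to reason the same way about the second question:

```python
# Few-shot CoT: the demonstration answer includes the reasoning steps.
prompt = """
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can
has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6
more, how many apples do they have?
A:
"""
print(get_completion(prompt))
```

A CoT-style reply works through the intermediate steps (23 − 20 = 3, then 3 + 6 = 9) before stating the final answer of 9.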

Prompt Chaining

The prompter splits a complex task into smaller (and easier) subtasks, then uses the response to one subtask as input to the next prompt. This technique helps accomplish complex tasks that an LLM might struggle with if given one very detailed prompt, and it is especially useful when building LLM-powered conversational assistants and improving the personalization and user experience of your applications. Let’s consider the example below, where the objective is to create a personalized study program for a student.

Objective: Create a personalized study program tailored to a student’s strengths and weaknesses.

Prompt 1: Analyze the student’s past grades to identify their strong and struggling areas.

Prompt 2: Design a study schedule that focuses on the student’s weak areas while maintaining their areas of strength.

Prompt 3: Suggest interactive learning activities matching the study schedule.

Another example:

You are a helpful assistant. Your task is to help answer a question given in a document. The first step is to extract quotes relevant to the question from the document, delimited by ####. Please output the list of quotes using <quotes></quotes>. Respond with “No relevant quotes found!” if no relevant quotes were found.

####
{{document}}
####

Source: Prompt Chaining
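
A sketch of how the two steps of that document-QA chain could be wired together in code: the quotes extracted by the first prompt are fed into a second, answering prompt. Only the first prompt appears in the source; the wording of prompt2 and the sample question are assumptions for illustration.

```python
document = "..."  # paste the source document here
question = "What are the prompting techniques mentioned in the document?"

# Step 1: extract quotes relevant to the question (prompt from the source).
prompt1 = f"""
You are a helpful assistant. Your task is to help answer a question given in
a document. The first step is to extract quotes relevant to the question from
the document, delimited by ####. Please output the list of quotes using
<quotes></quotes>. Respond with "No relevant quotes found!" if no relevant
quotes were found.

####
{document}
####

Question: {question}
"""
quotes = get_completion(prompt1)

# Step 2: feed the extracted quotes back in to compose the final answer
# (this second prompt is an assumed continuation of the chain).
prompt2 = f"""
Given a set of relevant quotes (delimited by <quotes></quotes>) extracted
from a document, answer the question. Make sure the answer is accurate and
helpful in tone.

{quotes}

Question: {question}
"""
print(get_completion(prompt2))
```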

There are other prompting techniques, like Tree-of-Thoughts, Retrieval Augmented Generation (RAG), and Multimodal CoT, which you can explore in the Prompt Engineering Guide linked in the references below. These are just some of the techniques you might play with as you continue to explore prompt engineering. In fact, the most effective strategy is often to combine several different techniques to achieve the desired output.

References:

  1. https://learn.deeplearning.ai/courses/chatgpt-prompt-eng
  2. https://www.promptingguide.ai/techniques
  3. https://platform.openai.com/docs/guides/prompt-engineering
  4. https://cookbook.openai.com/articles/related_resources
