Prompt Engineering — Get desired outcomes from Language Models

Fariba Laiq
AI insights by querifai
5 min read · Apr 30, 2024

Explore effective prompt engineering methods that adhere to OpenAI guidelines to ensure accurate responses from Large Language Models.

Overview

Large Language Models (LLMs) don’t always give the responses users expect or hope for. To prevent frustration, it’s helpful to understand where LLMs excel and where they struggle, as well as how to tackle some of the challenges they present. While interacting with an LLM feels like a natural conversation, employing prompt engineering techniques can significantly enhance the quality of results.

In this article, we’ll showcase examples of both basic and slightly more advanced prompt engineering techniques that align with OpenAI’s guidelines.

Please note that this is a shorter version of the actual article. You can read the full article here.

Basic Prompt Engineering Techniques

Testing out these prompt engineering techniques can help optimize your interactions with Large Language Models (LLMs). Here’s a rundown of some basic strategies that have proven effective for us in various scenarios:

  1. Provide Detailed Queries: Furnish the model with all pertinent details or context necessary for it to generate a relevant response. Avoid ambiguity by clearly outlining your expectations. Remember, the more information you offer, the less the model has to infer.
  2. Write Clear Instructions: Be explicit and precise about how you want the model to respond. Whether you need succinct answers or in-depth insights, clearly communicate your requirements. This helps minimize guesswork and enhances the accuracy of the generated content.
  3. Specify Desired Output Length: If response length matters, specify your preference. Whether it’s three paragraphs, a few bullet points, or a concise 10-word answer, make your expectations clear. While models may not always adhere strictly to these instructions, they generally follow them to some extent.
  4. Utilize Delimiters: For certain tasks, it’s essential to provide the model with structured input within the prompt. Use delimiters, such as triple quotation marks before and after the input, to distinguish between task requirements and input data.
  5. Adopt a Persona: LLMs can adapt their writing styles to suit different scenarios. Specify the desired style to guide the model accordingly. For instance, requesting a legal professional tone will yield a distinct style compared to asking for an explanation suitable for a five-year-old.
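To make these techniques concrete, here is a minimal sketch in Python that assembles a prompt string combining several of them: a persona, clear instructions, a specified output length, and triple-quote delimiters around the input data. The function name and the review text are invented for illustration; no model is called.

```python
def build_prompt(review: str) -> str:
    """Assemble a prompt using a persona, clear instructions,
    a length limit, and triple-quote delimiters around the input."""
    return (
        "You are an experienced customer-support analyst.\n"   # persona
        "Summarize the customer review delimited by triple "
        "quotation marks in at most two sentences.\n"          # task + length
        f'"""{review}"""'                                      # delimited input
    )

prompt = build_prompt(
    "The headphones arrived late, but the sound quality is excellent."
)
print(prompt)
```

Keeping the instructions and the input visually separated like this means the model is far less likely to confuse the review text for part of the task description.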

Now, let’s apply some of these techniques in three practical examples to see how they perform.

Product Description Generation

LLMs can generate product descriptions but require clear instructions to tailor them to specific needs. Without guidance, they may produce inaccurate or irrelevant content. Have a look at the example below.

Without Prompt Engineering

With Prompt Engineering

Utilizing basic prompt engineering techniques, such as providing clear instructions and specifying the output length, reduces the risk of errors or hallucinations.

Contract Clause Explanation

LLMs can assist in explaining legal contract clauses, but their responses come with no guarantee of accuracy. Here’s an example with and without prompt engineering techniques:

Without Prompt Engineering

With Prompt Engineering

Specifying a persona, either the role answering the question or the audience it is written for, is an efficient way to change the style of the response.

Some models offer the convenience of setting a system message that applies to every request, saving you from repeating instructions each time. In this case, we can prepare the model to consistently provide answers to questions about legal contract clauses in a specific style and structure, dividing the explanation into three parts:

  1. Description
  2. Implication
  3. Advice
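As a sketch of this setup, a system message can be defined once and reused for every request. The message format below follows the common chat role/content convention; the wording and the sample clause are our own invention, not the article's exact setup.

```python
# A fixed system message enforcing the three-part answer structure above.
SYSTEM_MESSAGE = (
    "You explain legal contract clauses to non-lawyers. "
    "Structure every answer in exactly three parts: "
    "1. Description, 2. Implication, 3. Advice."
)

def make_messages(clause: str) -> list:
    """Pair the reusable system message with a user question about a clause."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": f'Explain this clause: """{clause}"""'},
    ]

messages = make_messages(
    "Either party may terminate this agreement with 30 days' written notice."
)
```

Because the system message travels with every request, each individual user prompt can stay short: just the clause to explain.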

Advanced Prompt Engineering Techniques

Advanced prompt engineering strategies enhance language model interactions by refining prompts for more precise responses. These methods involve intricate prompt manipulation to guide reasoning, integrate learning examples directly, and handle complex tasks effectively. Here are two key strategies:

1. One-shot/Few-shot Prompting:

● One-shot prompting offers guidance by providing an example, aiding tasks like categorization.

● Few-shot prompting goes further by offering multiple examples; in our experience, gains tend to level off after roughly five examples.
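A minimal few-shot sketch, with all example reviews and labels invented: the labeled examples are embedded directly in the prompt so the model can infer both the task and the expected answer format.

```python
# Invented labeled examples demonstrating the categorization scheme.
EXAMPLES = [
    ("Great battery life!", "Positive"),
    ("Stopped working after a week.", "Negative"),
    ("It does what it says on the box.", "Neutral"),
]

def few_shot_prompt(text: str) -> str:
    """Embed the labeled examples before the item to classify."""
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in EXAMPLES)
    return (
        "Classify each review as Positive, Negative, or Neutral.\n\n"
        f"{shots}\n"
        f"Review: {text}\nSentiment:"
    )

prompt = few_shot_prompt("Loving the new update!")
```

Ending the prompt with a dangling `Sentiment:` label nudges the model to complete the pattern with just the category name.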

2. Chain-of-Thought Prompting:

● This strategy constructs a logical sequence in prompts to guide the model’s generation process.

● It keeps the model’s intermediate reasoning coherent and grounded in context, improving output quality.
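A minimal chain-of-thought sketch, using an invented word problem: the prompt explicitly asks for intermediate reasoning steps before the final answer.

```python
def cot_prompt(question: str) -> str:
    """Ask the model to reason step by step before answering."""
    return (
        f"Question: {question}\n"
        "Work through the problem step by step, writing out each "
        "intermediate step, and only then give the final result on a "
        "line starting with 'Answer:'."
    )

prompt = cot_prompt(
    "A cafe sells 40 coffees per hour and is open 6 hours. "
    "How many coffees does it sell in a day?"
)
```

Requiring a fixed `Answer:` line also makes the final result easy to extract programmatically from the model's response.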

Survey Responses Categorization

Let’s take a step forward and utilize the LLM for a more advanced task: categorizing survey responses into specific categories to gain rapid insights into your product or service.

Without Prompt Engineering

With Prompt Engineering

Compared to the previous response, this one delivers the output in the structured format the user requested, making it easier to comprehend and yielding quick insights.
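As a generic sketch of this kind of task (not the authors' exact prompt, which is in the full article), the following builds a categorization prompt with invented categories and sample responses, numbering and delimiting the inputs and requesting machine-readable JSON output:

```python
# Invented category set for illustration.
CATEGORIES = ["Pricing", "Usability", "Support", "Features", "Other"]

def categorization_prompt(responses: list) -> str:
    """Number the responses, delimit them, and request JSON output."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(responses, 1))
    return (
        "Assign each survey response below to exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n"
        "Reply only with a JSON array of objects of the form "
        '{"response_id": <number>, "category": <category>}.\n'
        f'"""\n{numbered}\n"""'
    )

prompt = categorization_prompt([
    "Too expensive for what it offers.",
    "The onboarding flow was confusing.",
])
```

Asking for JSON rather than free text makes the categorized results straightforward to aggregate into counts or charts afterward.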

Curious to know the exact prompt we applied to get the accurate output from the LLM in a structured way? Read our full article here to discover all the strategies used behind our prompts in detail.

Conclusion

Embracing prompt engineering is the initial and simplest step in unleashing the full potential of Language Models. It’s a user-friendly strategy that doesn’t require support from data scientists. Proper prompt engineering can transform LLMs from mere ideation tools into indispensable business resources. Whether it’s providing clear instructions or employing advanced techniques like chain-of-thought prompting and few-shot learning, prompt engineering significantly enhances the quality and relevance of LLM responses.

Sign up for querifai today to experiment with various prompts on our basic chat interface. Apply the prompt engineering strategies learned from this article and achieve your desired outcomes with LLMs.
