Prompting Precision: Enhancing AI Dialogue Techniques

Ceren Güzelgün
Insider Engineering
6 min read · Nov 6, 2023

Right now, we’re riding a new wave of machine learning, with large language models leading the charge. The key challenge? Communicating with these intricate models in the most effective way. This is where prompt engineering comes in: the practice of creating and adjusting inputs that instruct machine learning models to produce the desired outputs. It’s a way to “query” or “instruct” the model to get the most relevant, accurate, and safe results.

But as these models evolve and become more advanced, they also get more complex to interact with. It’s not just about having a language model with billions of parameters; the real deal is knowing how to leverage those capabilities to get the practical results we’re after.

But here’s the catch. While these models are impressively knowledgeable, they’ve essentially been binge-reading the entire internet — a space where fact and fiction often blur — which also makes them a repository of biases and controversial viewpoints. Without careful prompting, models can unintentionally output biased or potentially harmful information. Through effective prompt engineering, we can set guardrails that guide models toward safer responses, mitigating the chances of producing inappropriate or offensive content. This is especially important for business applications, since the sheer number of user interactions magnifies the potential risks of any model ‘errors’.

Working with advanced language models is an exercise in finding the right balance. Sometimes the models might offer lengthy or off-topic responses. Prompt engineering helps redirect them, ensuring outputs are concise and on the mark.

There’s also an efficiency angle to consider. Every token processed by a model has computational costs attached. Precision-engineered prompts can elicit more concise answers, optimizing both time and computational expenditure.

The versatility of prompt engineering truly shines when you consider its applications. Instead of dedicating separate models for individual tasks, prompt engineering enables us to repurpose a single model across multiple domains. It’s like maximizing the utility of a multipurpose tool in a tech stack. The interactive aspects of our applications, like chatbots or virtual assistants, benefit immensely from this. The efficacy of such tools is largely defined by their interaction quality. With meticulous prompting, we can achieve interactions that are fluid, intuitive, and mirror natural conversation. Moreover, by tailoring prompts, we can ensure that outputs align closely with user-specific contexts, enhancing overall user experience.

For those who might not be deep into the AI realm, fear not. The beauty of prompt engineering is its accessibility. It’s a technique that, once mastered, allows even those without a deep machine learning background to harness the model’s capabilities effectively. And as one iteratively refines prompts based on model feedback, it becomes an enlightening exercise in understanding and adapting to model behavior.

Let’s now look at some prompt engineering techniques that help achieve what we’ve discussed above.

Zero-Shot Prompting

In zero-shot prompting, the user gets a response without providing specific examples or any background information to the model. This strategy comes in handy when the use case is getting quick answers to common questions, or surface-level information about a concept. Given how chatbots are typically used, it is also one of the most common ways of communicating with them.

Zero-Shot prompting example, in conversation with OpenAI’s GPT-3.5 Turbo model.
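As a concrete sketch, a zero-shot request like the one above can be sent with the OpenAI Python client. The prompt text here is illustrative, and an OPENAI_API_KEY environment variable is assumed to be set:

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Zero-shot: a single question, with no examples or extra context provided
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "Can you recommend a book to read?"},
        ],
    )
    print(response.choices[0].message.content)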

One-Shot Prompting

In contrast, one-shot prompting steps it up a notch by giving the model a single example or context to guide its output. This method is especially useful when you need the model to follow a certain pattern or style in its response. Imagine you’re seeking an answer that aligns more with a prior experience or a specific format — that’s when one-shot prompting shines. It acts like a friendly nudge, ensuring the model stays on track.

The image below demonstrates how this prompting technique is applied.

One-Shot prompting example, in conversation with OpenAI’s GPT-3.5 Turbo model.

Unlike the Zero-Shot example, where the user vaguely requests a book recommendation, the prompt above extracts a more focused answer with the help of the extra information provided. The completion is also considerably shorter, which in this case helps optimize token usage.
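In code, one-shot prompting amounts to placing a single worked example, as a user/assistant pair, before the real request. A minimal sketch, with illustrative book details:

    # One-shot: a single worked example sets the expected format and length
    messages = [
        {"role": "user", "content": "Recommend a book in one sentence. Genre: dystopian fiction."},
        {"role": "assistant", "content": "'1984' by George Orwell: a chilling classic about surveillance and control."},
        {"role": "user", "content": "Recommend a book in one sentence. Genre: science fiction."},
    ]

This messages list is passed to client.chat.completions.create exactly as in the zero-shot sketch above.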

Few-Shot Prompting

Moving further along the spectrum, few-shot prompting elevates the guidance by presenting the model with multiple examples or contexts. Think of it as giving the AI not just one, but a handful of breadcrumbs to follow. This approach is invaluable when you’re aiming for a blend of specificity and versatility in the model’s response. If one-shot is a nudge, few-shot is like providing a mini tutorial. By witnessing several instances, the model gets a clearer picture of the desired outcome, enhancing its chances of nailing the answer just right. The following image illustrates the application of few-shot prompting in action.

Few-Shot prompting example, in conversation with OpenAI’s GPT-3.5 Turbo model.

For comparison, here’s the same request handled in a Zero-Shot fashion.

Zero-Shot prompting example to provide a comparison with the Few-Shot approach.
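In code, few-shot prompting simply extends the same message list with additional worked examples (again, the book details are illustrative):

    # Few-shot: several worked examples make the desired pattern unambiguous
    messages = [
        {"role": "user", "content": "Recommend a book in one sentence. Genre: dystopian fiction."},
        {"role": "assistant", "content": "'Brave New World' by Aldous Huxley: a satirical vision of an engineered society."},
        {"role": "user", "content": "Recommend a book in one sentence. Genre: fantasy."},
        {"role": "assistant", "content": "'The Name of the Wind' by Patrick Rothfuss: a lyrical tale of a gifted young magician."},
        {"role": "user", "content": "Recommend a book in one sentence. Genre: science fiction."},
    ]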

Negative Prompting

Being precise in our prompts is essential for making the model understand our intent. But sometimes, explaining what we don’t want can be equally useful. This is where negative prompting comes into play. This technique involves explicitly instructing the model on what not to produce or which pitfalls to avoid. For instance, if you’re concerned about biases or want to steer clear of controversial topics, a negative prompt can guide the model away from those zones. This method is especially valuable when you have a clear sense of potential missteps or areas you’d prefer the model to sidestep. Here are some examples.

  • Describe the economic structure of the 20th century, but avoid mentioning any specific wars or conflicts.
  • Explain the plot of ‘Romeo and Juliet’, but avoid mentioning the fate of the main characters.
  • Give an overview of space exploration, but steer clear of discussing the politics of the Moon Race.
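One practical way to apply a negative prompt is through the system message, which states up front what the model should avoid. A minimal sketch based on the first example above:

    # Negative prompting: the system message explicitly lists what to avoid
    messages = [
        {
            "role": "system",
            "content": "You are a helpful assistant. Do not mention specific wars "
                       "or conflicts, and avoid politically charged framing.",
        },
        {"role": "user", "content": "Describe the economic structure of the 20th century."},
    ]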

Reference-Based Prompting

In cases where you want the model to stick to a certain theme or context, you can “anchor” the prompt with consistent keywords or phrases that remind the model of the context throughout the interaction. For example:

  • In the format of a FAQ section, explain the principles of quantum physics.
  • Using Shakespearean language, describe the process of photosynthesis.
  • Mimicking a news report from the 1940s, present the invention of the smartphone.
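In a multi-turn conversation, the anchor phrase can be repeated in follow-up messages to keep the model in context. A minimal sketch based on the first example above, where the assistant turn is a truncated stand-in for an earlier reply:

    # Reference-based prompting: the anchor phrase ("FAQ") recurs across turns
    messages = [
        {"role": "user", "content": "In the format of a FAQ section, explain the principles of quantum physics."},
        {"role": "assistant", "content": "Q: What is quantum physics?\nA: ..."},  # earlier reply, truncated
        {"role": "user", "content": "Still in the FAQ format, now explain quantum entanglement."},
    ]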

Chain of Thought Prompting

This method involves providing the model with a few examples of how to solve a problem step by step. This helps the model learn how to decompose complex problems into smaller, more manageable steps.

For example, to prompt a language model to solve the following math word problem:

If you have 10 apples and you give 5 to your friend, how many apples do you have left?

source: Wei et al. (2022)

You could provide the model with the following chain-of-thought prompt:

  1. Start with 10 apples.
  2. Subtract 5 apples.
  3. The answer is the number of apples you have left.

The language model can then use this prompt to solve the problem step-by-step. Chain-of-thought prompting is a simple but effective method for improving the reasoning ability of large language models. It is a promising new approach for making these models more useful for a wide range of tasks.
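Assembled as a single prompt in code, a chain-of-thought request might look like the sketch below. The worked example follows the Q/A style introduced by Wei et al. (2022), though the exact wording here is illustrative:

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Chain-of-thought: a worked example demonstrates step-by-step reasoning,
    # then the real question is posed in the same Q/A style
    cot_prompt = (
        "Q: If you have 10 apples and you give 5 to your friend, "
        "how many apples do you have left?\n"
        "A: Start with 10 apples. Subtract the 5 given away. 10 - 5 = 5. "
        "The answer is 5.\n\n"
        "Q: If a library has 24 books and lends out 9, how many books remain?\n"
        "A:"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": cot_prompt}],
    )
    print(response.choices[0].message.content)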

These are some of the main techniques that can help you build better prompts. Prompt engineering is like a bridge between the potential of language models and the needs arising from their real-world applications. It’s not only about instructing, but about refining and directing the model’s behavior to align with specific objectives. As AI continues to evolve, the techniques of prompt engineering will keep evolving alongside it.

Hope you found the article useful. If you have any questions, feel free to ask away in the comments. You can also follow our Insider Engineering Blog for more articles where we detail our engineering processes. Here is a starter for you to check out! Crafting Robust HTTP Requests: Building a Requester in Go.
