Prompt engineering: 6 powerful LLM techniques everyone should know

An introduction to the fundamentals of prompt engineering

dxxmsdxy
Art Accelerationism
5 min read · Feb 6, 2024


In our ongoing exploration of ‘prompt-craft’, we dive deeper into specific prompt engineering techniques that can improve your interactions with LLMs.

What is prompt engineering?

Prompt engineering refers to the growing array of techniques that help you guide LLMs toward more relevant results. These are the tools of prompt-craft.

We can define the techniques of prompt engineering as practical syntactical templates that reduce ambiguity in human-AI interactions.

Let’s consider some of the most fundamental concepts one-by-one, each with a simple example, to help you design more effective prompts.

1. Zero-shot/Single-shot/Few-shot Prompting

These terms refer to how much context the LLM is given to perform its task. By providing relevant examples or templates as context, you help the LLM triangulate the kind of result you want.

  • Zero-shot: The LLM returns a result without prior examples, relying on its general understanding.
  • Single-shot: The LLM is given one example to guide its response.
  • Few-shot: The LLM receives a few examples, enhancing its output accuracy in a new context.

When possible, prompt with few-shot, or at least single-shot, examples. It can be hard to put exact desires or descriptions into words; it often helps to teach by showing as well.

Examples:

  • Zero-shot:
    Suggest a unique theme for a community event.
  • Single-shot:
    Like ‘A Night Under the Stars’,
    suggest a theme for a community event.
  • Few-shot:
    Given themes like ‘A Night Under the Stars’ and ‘2001: A Space Bonanza’,
    suggest another theme for a community event.

Each of these prompts will return noticeably different responses, because the added context narrows the LLM’s focus. Be mindful, though: too much context can confuse the LLM or unnecessarily confine its ‘imagination’.
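The pattern above is easy to automate. Here is a minimal sketch of building all three prompt styles from a list of example themes; `build_prompt` and its wording are illustrative, not a standard API.

```python
# Build zero-, single-, and few-shot prompts from example completions.
# The phrasing mirrors the examples above; names here are hypothetical.

def build_prompt(task, examples=None):
    """Prepend any example completions to the task instruction."""
    examples = examples or []
    if not examples:
        return task  # zero-shot: no context at all
    quoted = " and ".join(f"'{e}'" for e in examples)
    label = "Like" if len(examples) == 1 else "Given themes like"
    # Lower-case the task's first letter so it reads as one sentence.
    return f"{label} {quoted}, {task[0].lower()}{task[1:]}"

task = "Suggest a theme for a community event."
zero_shot = build_prompt(task)
single_shot = build_prompt(task, ["A Night Under the Stars"])
few_shot = build_prompt(
    task, ["A Night Under the Stars", "2001: A Space Bonanza"]
)
```

In practice each string would be sent to whichever LLM you use; only the amount of in-prompt context changes between the three.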

2. Chain-of-Thought Prompting

This technique involves guiding the AI to articulate its reasoning process step-by-step, beneficial for problem-solving or understanding complex concepts. This is accomplished by demonstrating step-by-step reasoning similar to the logic required to arrive at the result you desire. We use examples (single-shot or few-shot) to teach the LLM how we want it to think.

Example:

  • Q1: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
    A1: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

    Q2: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?

In this example, the first question and answer pair (Q1 & A1) provided in the prompt are context that will guide the LLM in how to approach the second question (Q2), which is our real question. The LLM’s response will resemble the structure demonstrated in the example.
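Assembling such a prompt is just concatenation: the worked Q1/A1 pair goes first, then the real question. A minimal sketch (the helper name and Q:/A: framing are illustrative):

```python
# Single-shot chain-of-thought: a worked example precedes the real
# question, so the model imitates its step-by-step structure.

COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def chain_of_thought_prompt(question, example=COT_EXAMPLE):
    # Trailing "A:" cues the model to begin its reasoned answer.
    return f"{example}\nQ: {question}\nA:"

prompt = chain_of_thought_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?"
)
```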

3. Self-Ask Prompting

Self-Ask prompting involves having the AI ask itself questions to refine its understanding of a prompt before tackling the larger task at hand.

Example:

  • Who lived longer, Bill Hicks or Terence McKenna? Determine if this question can be broken down into smaller questions, and answer them individually before answering the main question.

Simply prompting the LLM to consider whether the question can be broken down can produce great results, and helps clarify how the AI reached its conclusion. In this example, the LLM will realize that it should first ask itself how old Bill Hicks and Terence McKenna were, respectively, when they passed (RIP).

This technique can be combined with the single-shot and chain-of-thought concepts to produce very enlightening breakdowns of the LLM’s reasoning or other complex processes.
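Because self-ask is just an instruction appended to any question, it is trivial to apply uniformly. A sketch, with the suffix wording taken from the example above (it is one phrasing among many, not a required incantation):

```python
# Wrap any question in a self-ask instruction.

SELF_ASK_SUFFIX = (
    " Determine if this question can be broken down into smaller "
    "questions, and answer them individually before answering the "
    "main question."
)

def self_ask(question):
    """Return the question with the self-ask instruction appended."""
    return question.rstrip() + SELF_ASK_SUFFIX

prompt = self_ask("Who lived longer, Bill Hicks or Terence McKenna?")
```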

4. Iterative Prompting and Roleplaying

Iterative prompting means refining prompts based on previous LLM responses in an interactive way. Roleplaying involves the LLM and/or user adopting a persona or role for the conversation. These concepts are especially powerful when used together.

Example:

  • As a nutritionist, can you work with me to create a meal plan for a more balanced diet?

Inviting the LLM to actively engage you tells it how to approach the interaction. This technique is useful for surfacing nuanced insights, or testing an idea with a particular audience.

Giving the LLM a role in this way can dramatically change the kinds of responses you receive. It doesn’t change the depth of information you can uncover, but it does shape how that information is communicated. Having an LLM roleplay an expert may surface more high-resolution concepts, which is especially helpful if you are an expert yourself.
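In chat-style interfaces, the persona and the iteration both map naturally onto a list of role-tagged messages. A sketch, assuming the common OpenAI-style message shape (field names vary by API):

```python
# Roleplay: the persona lives in a system message and applies to the
# whole conversation. Iteration: each user turn refines the last reply.

def start_roleplay(persona, opening_request):
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": opening_request},
    ]

def iterate(messages, assistant_reply, follow_up):
    # Append the model's last answer, then our refinement of it.
    messages.append({"role": "assistant", "content": assistant_reply})
    messages.append({"role": "user", "content": follow_up})
    return messages

chat = start_roleplay(
    "a nutritionist",
    "Can you work with me to create a meal plan for a more balanced diet?",
)
chat = iterate(
    chat,
    "Sure - what does a typical day of eating look like for you?",
    "Mostly sandwiches and coffee; I'd like to add more vegetables.",
)
```

Each round trip grows the message list, so the model always sees the persona plus the full refinement history.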

5. Least-to-Most Prompting

This method breaks down complex problems into simpler parts, gradually building towards the more complex downstream aspects of the prompt.

Example:

  • Outline important indoor gardening techniques. Provide a detailed task management framework for breaking down short-term and long-term tasks. Tell me about the Boston Fern and its unique needs. Finally, provide a task management plan for my Boston Fern.

Structuring a prompt in this way means the final answer is informed by the text generated in the earlier steps. This creates a kind of synthetic context that shapes the final result.
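Least-to-most can also be run as an explicit loop, asking each sub-prompt in turn and feeding every answer so far back in as context for the next. A sketch; `ask` is a stand-in for a real LLM call (here stubbed so the example runs on its own):

```python
# Least-to-most as a loop: earlier answers become context for later steps.

def least_to_most(sub_prompts, ask):
    context = ""
    answers = []
    for sub in sub_prompts:
        answer = ask(context + sub)
        answers.append(answer)
        context += f"{sub}\n{answer}\n\n"  # accumulate synthetic context
    return answers

steps = [
    "Outline important indoor gardening techniques.",
    "Provide a framework for short-term and long-term tasks.",
    "Tell me about the Boston Fern and its unique needs.",
    "Provide a task management plan for my Boston Fern.",
]

# Stub that echoes the current sub-prompt, standing in for a model call.
fake_ask = lambda prompt: f"[answer to: {prompt.splitlines()[-1]}]"
answers = least_to_most(steps, fake_ask)
```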

6. Generated Knowledge Prompting

The AI draws on reference knowledge to generate ideas or insights, and uses this as context to inform its response. This reference knowledge could be included explicitly, generated programmatically, or pulled in from external sources.

Examples:

  • The word child means one kid. The word children means [M] or more kids. Solve for [M].
  • Search for and list the most pressing environmental challenges we will face in the near future. Suggest business ideas for consumer products or services that address these environmental challenges.

The knowledge provided as context in these examples allows the prompter to overcome reasoning limitations of the underlying model.

For example, without “the word child means one kid” included in the first example, many LLMs would incorrectly respond that the word children means one or more kids. In fact, the correct answer is two or more. This is a quirk of how LLMs reason blindly, without true semantic understanding of the world. Positively declaring relevant hard facts can function almost like a prosthesis for the model’s limitations.
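Mechanically, generated knowledge prompting is just prepending facts to the question, whether those facts are hand-written, model-generated in an earlier call, or retrieved from search. A minimal sketch (the helper name is illustrative):

```python
# Prepend declared facts to a question so the model reasons from them
# rather than guessing.

def with_knowledge(facts, question):
    """Join the fact statements, then append the actual question."""
    knowledge = "\n".join(facts)
    return f"{knowledge}\n{question}"

prompt = with_knowledge(
    [
        "The word child means one kid.",
        "The word children means [M] or more kids.",
    ],
    "Solve for [M].",
)
```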

The Takeaway:

Each of these basic techniques presents a unique mechanism we can use to compose our prompts. They are tools in our prompting toolkit, and as with most tools, they are best used in concert, and when appropriate.

And that means you will need to experiment, reflect, and iterate. With a little intentional learning as you engage more and more with these new tools, you will become fluent in the art of prompt-craft.

Again, these are only foundational tools: a first step on your path to prompt mastery. Next, we’ll look at another fundamental skill: how to structure complex prompts. From there, we’ll layer more advanced techniques on top of these skills to take our prompting even further.

Developing our understanding of these concepts can not only help us structure our interactions with LLMs, but also our own thinking, whether we’re brainstorming, seeking deeper insights, or exploring new, creative ideas.

But first, we have to put them to use!
