First Principles of Prompt Engineering

Curtis Savage
AI For Product People
11 min readMay 22, 2023

When prompting Large Language Models (LLMs), Product Managers should remember two key principles. First, provide clear and specific instructions. Second, give the model time to process and generate responses. This can be a valuable skill-set to help PMs drive new features and products.

Attribution: Many of the insights and code snippets below are taken directly from ChatGPT Prompt Engineering for Developers by DeepLearning.AI and OpenAI.ai. If this post peaks your interest, please check out the highly recommended course with Andrew Ng and Isa Fulford in the link above. Both are thought leaders in the space.

As product managers, it’s essential to understand the tools we’re working with, especially when it comes to cutting-edge technology like AI and machine learning.

Understanding the potential of Large Language Models (LLMs) and optimizing prompts and API calls to LLMs for rapid software development is important for product managers because we can leverage this technology to drive new features or products that incorporate AI, providing a competitive advantage in the market. Let’s start with prompting.

There are not shortage of “click-baity” articles online like “The 30 Prompts you Need to Master Right Now!” but, as with anything, it’s important to learn first principles.

First Principles for Prompting LLMs

There are two key principles to keep in mind when prompting LLMs. The first principle is to write clear and specific instructions. And the second principle is to give the model time to think.

Let’s dive into our first principle, which is to write clear and specific instructions.

You should express what you want a model to do by providing instructions that are as clear and specific as you can possibly make them.

Note: the code snippets assume you have a basic understanding of code. If not, don’t worry, the principles still offer useful tactics for prompt engineering using the standard chatGPT interface. So carry on!

Principle 1: Write Clear and Specific Instructions

When working with language models, it’s critical to provide clear, specific instructions to guide the model towards desired outputs and avoid irrelevant responses. Longer prompts, full of context and details, often yield more accurate results.

Tactic 1: Use Delimiters for Clarity

A key tactic for clarity is the use of delimiters. These are symbols that separate distinct parts of the input, making it clear what the model should focus on. Beyond clarity, delimiters also help prevent ‘prompt injections,’ where user-added input could give conflicting instructions, leading the model astray. By employing delimiters, we can ensure that the model focuses on our intended task rather than misinterpreting user input as a new instructions.

text = f"""
Toronto, the capital of the province of Ontario,
is a major Canadian city along Lake Ontario’s northwestern shore.
It's a dynamic metropolis with a core of soaring skyscrapers,
all dwarfed by the iconic, free-standing CN Tower.
Toronto also has many green spaces, from the orderly oval of
Queen’s Park to 400-acre High Park and its trails, sports facilities
and zoo.
"""
prompt = f"""
Summarize the text delimited by triple backticks \
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)

This generates the following response:

Toronto is a bustling city with skyscrapers, green spaces, and the iconic CN Tower located along Lake Ontario’s northwestern shore.

Tactic 2: Ask for structured output

The next tactic is to ask for a structured output. Using structured output, like HTML or JSON, when interacting with AI models can make parsing the output easier. For instance, when asking the model to generate a list of fictional book titles with their authors and genres, instructing it to provide the output in JSON format (with keys such as book ID, title, author, and genre) can lead to a neatly formatted, easy-to-parse output that can be directly read into a Python dictionary or list.

prompt = f"""
Generate a list of three made-up book titles along \
with their authors and genres.
Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
response = get_completion(prompt)
print(response)

This generates the following response:

Tactic 3: Ask the model to check conditions

The next tactic is to ask the model to check whether conditions are satisfied.

Incorporating condition checks in AI model interactions can help ensure accurate task completion. If a task relies on certain conditions, asking the model to check and validate these first can prevent flawed outputs.

For example, a model can be prompted to rewrite a sequence of instructions given within a text.

In this case: IF the text contains a sequence of instruction, THEN the steps are provided.

text_1 = f"""
Toronto is a world-class city. If you've moved here,
there are a few essential things you will need to do to be a true Torontonian.
First, you'll need to choose a sports team. Next, you'll need to enjoy losing.
After that, you'll have to complain about the weather, especially the snow.
From there, you'll need to master the art of being stuck in traffic.
Finally, you'll need to love lining up for overhyped and overpriced events.
Remember to take a selfie for insta. Practice makes perfect!
"""
prompt = f"""
You will be provided with text delimited by triple backticks.
If it contains a sequence of instructions,
re-write those instructions in the following format:

Step 1 - ...
Step 2 - ...

Step N - ...

If the text does not contain a sequence of instructions,
then simply write \"No steps provided.\"

```{text_1}```
"""
response = get_completion(prompt)
print("Completion for Text 1:")
print(response)

This generates the following response:

However, IF the provided text does NOT contain a sequence of instructions, THEN the model should return “no steps provided”.

text_2 = f"""
Toronto, the capital of the province of Ontario, is a major Canadian city
along Lake Ontario’s northwestern shore. It's a dynamic metropolis
with a core of soaring skyscrapers, all dwarfed by the iconic, free-standing
CN Tower. Toronto also has many green spaces, from the orderly oval of
Queen’s Park to 400-acre High Park and its trails, sports facilities and zoo.
"""
prompt = f"""
You will be provided with text delimited by triple backticks.
If it contains a sequence of instructions,
re-write those instructions in the following format:

Step 1 - ...
Step 2 - ...

Step N - ...

If the text does not contain a sequence of instructions,
then simply write \"No steps provided.\"

```{text_2}```
"""
response = get_completion(prompt)
print("Completion for Text 1:")
print(response)

This generates the following response:

This approach allows the model to handle potential edge cases and prevent unexpected errors, enhancing the reliability of the model’s output.

Tactic 4: Provide examples ("Few-shot" prompting)

The final tactic is few-shot prompting. This is a technique where the model is given examples of successful task completion before performing a similar task.

prompt = f"""
Your task is to answer in a consistent style.

<child>: Teach me about kindness.

<grandparent>: ""Kindness is a guiding lighthouse,
shining to bring people together through the storms of life."".

<child>: Teach me about sharing.
"""
response = get_completion(prompt)
print(response)

This generates the following response:

After the model is instructed to maintain a consistent style, it successfully mimics the metaphoric tone when asked to teach about sharing.

These four tactics constitute our first principle: providing the model with clear and specific instructions.

Our second principle emphasizes the importance of giving the model sufficient time for thoughtful contemplation.

Principle 2: Give the model time to “think”

The second rule is to give the model enough time to figure things out. If a model is making mistakes because it’s hurrying, you should ask the question in a different way, so the model can think step by step before giving the final answer. This is like asking someone to solve a hard math problem without giving them enough time to think; they’d probably get it wrong. In the same way, you can tell the model to take more time to think about a tough problem, which means it’s working harder on the task.

Tactic 1: Specify the steps

Our first tactic is to specify the steps required to complete a task. The instructions in this prompt tell the model to perform a very specific series of actions. First, summarize the text into one sentence. Second, translate the summary into French. Third, list each name in the French summary. And fourth, output a JSON object that contains specific keys.

text = f"""
In a charming village, siblings Jack and Jill set out on \
a quest to fetch water from a hilltop \
well. As they climbed, singing joyfully, misfortune \
struck—Jack tripped on a stone and tumbled \
down the hill, with Jill following suit. \
Though slightly battered, the pair returned home to \
comforting embraces. Despite the mishap, \
their adventurous spirits remained undimmed, and they \
continued exploring with delight.
"""
prompt = f"""
Your task is to perform the following actions:
1 - Summarize the following text delimited by
<> with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the
following keys: french_summary, num_names.

Use the following format:
Text: <text to summarize>
Summary: <summary>
Translation: <summary translation>
Names: <list of names in Italian summary>
Output JSON: <json with summary and num_names>

Text: <{text}>
"""
response = get_completion(prompt)
print("\nCompletion for prompt:")
print(response)

This generates the following response:

Completion for prompt:

Summary: Jack and Jill go on a quest to fetch water, but misfortune strikes and they tumble down the hill, returning home slightly battered but with their adventurous spirits undimmed.

Translation: Jack et Jill partent en quête d’eau, mais la malchance frappe et ils dégringolent la colline, rentrant chez eux légèrement meurtris mais avec leurs esprits aventureux intacts.

Names: Jack, Jill

Output JSON: {“french_summary”: “Jack et Jill partent en quête d’eau, mais la malchance frappe et ils dégringolent la colline, rentrant chez eux légèrement meurtris mais avec leurs esprits aventureux intacts.”, “num_names”: 2}

Tactic 2: Have the model to work out its own solution

The next strategy we’ll be exploring is urging the model to formulate its own solution before jumping to conclusions. There are times when the results are significantly improved if we explicitly guide the model to deduce its own solutions prior to arriving at any conclusions. This is closely tied to the idea we’ve previously discussed, about giving the model sufficient time to thoroughly examine a problem before hastily deciding if an answer is correct or incorrect, akin to how a human would approach it.

In the example below, the model is tasked to assess if a student’s response is correct or not. Here, we present a mathematical problem followed by a student’s proposed solution. However, if the student’s response is erroneous (e.g. using 100x rather than 10x) the model may not catch the error. This occurs because, upon superficial inspection, the student’s solution may seem plausible. The model may concur with the student’s solution because it merely skimmed through it.

To rectify this, we can direct the model to develop its own solution first and subsequently compare it to the student’s solution. In the prompt below, we outline the model’s task of determining the student’s solution’s accuracy. It includes the following steps: Firstly, create your own solution to the problem. Secondly, compare your solution with the student’s solution, and lastly, evaluate the correctness of the student’s solution. We stress that the model must first solve the problem itself before deciding on the student’s solution’s correctness.

Upon running this command, the model first performs its own calculation, arriving at the correct answer. Comparing this with the student’s solution, the model discerns a discrepancy and rightfully declares the student’s solution as incorrect. This instance underscores the benefits of prompting the model to solve the problem itself and taking the time to deconstruct the task into manageable steps, thereby yielding more accurate responses.

prompt = f"""
Your task is to determine if the student's solution \
is correct or not.
To solve the problem do the following:
- First, work out your own solution to the problem.
- Then compare your solution to the student's solution \
and evaluate if the student's solution is correct or not.
Don't decide if the student's solution is correct until
you have done the problem yourself.

Use the following format:
Question:
```
question here
```
Student's solution:
```
student's solution here
```
Actual solution:
```
steps to work out the solution and your solution here
```
Is the student's solution the same as actual solution \
just calculated:
```
yes or no
```
Student grade:
```
correct or incorrect
```

Question:
```
I'm building a solar power installation and I need help \
working out the financials.
- Land costs $100 / square foot
- I can buy solar panels for $250 / square foot
- I negotiated a contract for maintenance that will cost \
me a flat $100k per year, and an additional $10 / square \
foot
What is the total cost for the first year of operations \
as a function of the number of square feet.
```
Student's solution:
```
Let x be the size of the installation in square feet.
Costs:
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000
```
Actual solution:
"""
response = get_completion(prompt)
print(response)

Final Thoughts

Understanding and mastering the principles of writing clear and specific instructions for Large Language Models (LLMs) is crucial for product managers. As we delve into the world of AI and machine learning, leveraging the potential of LLMs through optimized prompts and API calls becomes a powerful tool for rapid software development.

By incorporating AI into our products and features, we gain a competitive advantage in the market. While there may be an abundance of “click-baity” articles offering shortcuts, it is essential for product managers to prioritize learning first principles.

Following two key principles 1) writing clear and specific instructions and 2) allowing the model time to think, set the foundation for effectively utilizing LLMs. By expressing our intentions precisely, we can maximize the model’s capabilities and drive innovation in our products. Happy producting… 🤖💡🛠

--

--