Prompt Engineering: Some Prompts Are More Equal Than Others

Miray TOPAL
5 min read · Jun 2, 2023


Imagine you are a language model, tasked with generating a coherent and relevant response to a given prompt. The prompt is your guide, your North Star, your raison d’être. But not all prompts are created equal. Some are vague and open-ended, while others are specific and targeted. And that’s where prompt engineering comes in.

Let's walk through a few examples to see the difference clearly.

Prompt:

The Earth is

Output:

the third planet from the sun and is the only known planet to support life.
It has a diameter of approximately 12,742 kilometers and a mass of 5.97 x 10²⁴ kilograms.
The Earth is composed of several layers, including the crust, mantle, and core.
It has a magnetic field that protects it from harmful solar radiation and
a thin atmosphere that contains oxygen, nitrogen, and other gases necessary for life.
The Earth rotates on its axis once every 24 hours, causing day and night, and orbits
the sun once every 365.25 days, causing the seasons. The Earth is home to millions of
species of plants and animals, including humans, and is constantly changing due to
natural processes and human activities.

The output is a paragraph of general facts about the Earth — accurate, but not tailored to any particular purpose.

But watch what happens when we change the prompt to ask for something more specific.

Prompt:

Complete the sentence:
The Earth is

Output:

a planet in the solar system.

We get the simple sentence about the Earth that we intended.

When we initially provided a simple prompt without specific instructions, the output from the model included irrelevant general information. However, by formulating a more detailed and specific prompt, we were able to guide the model towards generating a suitable response that aligned with our intended purpose.

In prompt engineering, the goal is to design prompts that strike a balance between these factors to guide the model towards the desired output. This involves experimenting with different prompts and evaluating their effectiveness in achieving the desired task.

Let’s dive a little deeper and take a look at some prompting techniques.

Prompting Techniques

Zero-shot and few-shot prompting are two of the most popular and effective techniques for steering text generation with language models. Both rely on a pre-trained model's ability to follow what it is given directly in the prompt — an instruction alone, or an instruction plus a handful of examples — without any additional training.

Zero-shot prompt techniques involve using a pre-trained language model to generate text without any additional training. This means that the model has already been trained on a large corpus of text and can generate text in a variety of styles and formats. To use a zero-shot prompt, you simply provide the model with a prompt and it will generate text based on its understanding of the language.

Prompt:

Classify the text into neutral, negative or positive. 
Text: I think the vacation is okay.
Sentiment:

Output:

Neutral
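A zero-shot prompt is just an instruction plus the input, so it can be assembled with plain string formatting. A minimal sketch (the function name and template here are my own, for illustration only):

```python
def zero_shot_sentiment_prompt(text: str) -> str:
    """Build a zero-shot classification prompt: an instruction plus
    the input to classify, with no example demonstrations."""
    return (
        "Classify the text into neutral, negative or positive.\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

print(zero_shot_sentiment_prompt("I think the vacation is okay."))
```

Ending the prompt with "Sentiment:" nudges the model to complete with just the label rather than a full sentence.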

Few-shot prompt techniques, on the other hand, involve showing the model a handful of worked examples directly in the prompt before the actual query. The model is not retrained; it infers the pattern from the demonstrations in context (this is often called in-context learning) and applies it to the new input.

Prompt:

This is awesome! // Positive
This is bad! // Negative
Wow that movie was rad! // Positive
What a horrible show!

Output:

Negative

Both zero-shot and few-shot prompting have their trade-offs. Zero-shot prompts are quick and easy to write, but they may not always produce output that matches the task at hand. Few-shot prompts require you to craft example demonstrations, but they tend to be more accurate and better aligned with the task.

Overall, both zero-shot and few-shot prompt techniques are powerful tools for generating text using artificial intelligence. By understanding the strengths and weaknesses of each technique, you can choose the one that is best suited to your needs and generate high-quality text that meets your specific requirements.

Parameters of Prompts

Now, let’s talk about the parameters of prompts that can help you engineer the perfect prompt for your language model.

First up, we have temperature. No, we’re not talking about the weather outside. In the world of prompt engineering, temperature refers to the level of creativity and randomness in the generated output. A low temperature will result in more predictable and conservative responses, while a high temperature will lead to more imaginative and unpredictable outputs. It’s like choosing between a safe, boring date or a wild, unpredictable adventure.

When we set the temperature parameter to 0:

Prompt:

Generate a one-sentence description of the world.

Output:

The world is a vast and diverse planet, home to a multitude of living organisms and constantly changing due to natural and human-made forces.

When we set the temperature parameter to 1:

Output:

The world is a diverse and ever-changing place full of both beauty and chaos.

As the examples show, the higher the temperature, the more random (and often more creative) the output. Randomness is not the same as correctness, though: a temperature of 0 is best for most factual use cases, such as data extraction and question answering.
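Under the hood, temperature rescales the model's token scores (logits) before sampling: dividing by a small temperature sharpens the distribution, dividing by a large one flattens it. A minimal sketch in pure Python — the logits here are invented for illustration, not taken from any real model:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample a token index from temperature-scaled logits.
    A temperature of 0 degenerates to greedy decoding (argmax)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, 0))  # always 0: greedy picks the top logit
```

At temperature 0 the same prompt always yields the same token; at higher temperatures lower-scoring tokens get a real chance of being picked, which is where the "creativity" comes from.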

Next, we have top_p, also known as nucleus sampling. Rather than considering every possible next token, the model samples only from the smallest set of tokens whose cumulative probability reaches p, cutting off the unlikely tail. Think of it like a game of Mad Libs, where you have a few options to choose from to fill in the blanks: a high top_p value gives you more options, while a low top_p value limits your choices.
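The nucleus-filtering step can be sketched in a few lines: sort tokens by probability, then keep adding them until the cumulative mass reaches p. This is a simplified illustration with made-up probabilities, not a real sampler:

```python
def top_p_filter(probs, p):
    """Return the indices of the smallest set of tokens whose
    cumulative probability reaches p (the 'nucleus')."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.7))  # -> [0, 1]: the top two tokens cover 70%
```

With a low p only the most likely token survives; with p close to 1 nearly the whole vocabulary stays eligible, which is why low top_p behaves conservatively and high top_p behaves diversely.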

Finally, we have max_length (often exposed as max_tokens in APIs), which is pretty self-explanatory: it caps the number of tokens in the generated output. It's like setting a word count limit for your essay. A shorter max_length results in more concise and focused responses, while a longer max_length allows for more detailed and elaborate outputs.
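A toy illustration of the cutoff — the "model" here just emits words from a fixed list, purely to show where max_length stops generation:

```python
def generate(words, max_length):
    """Emit tokens one at a time, stopping once max_length is reached."""
    out = []
    for w in words:
        if len(out) >= max_length:
            break  # the length cap truncates generation here
        out.append(w)
    return " ".join(out)

print(generate(["The", "world", "is", "a", "vast", "place"], 3))  # -> "The world is"
```

Note that a hard cap can cut a response off mid-thought, so it is a budget control, not a way to make the model write concisely.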

By tweaking these parameters, you can guide your language model towards generating the perfect response for your intended purpose.

In conclusion, prompt engineering is a powerful technique for improving the performance of large language models in specific tasks. It’s the key that unlocks the door to more accurate, relevant, and useful language generation. So next time you’re crafting a prompt, remember that some prompts are more equal than others.

You can check this repo for the code behind what is described in this article.

In creating this article, I used the GPT-3.5-turbo model to produce the examples and outputs shown.

That’s all and thanks a lot for reading!

References

1. https://learn.deeplearning.ai/chatgpt-prompt-eng
2. https://www.promptingguide.ai/
