Prompt Engineering — 101

Yash Wasalwar
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨
4 min read · May 26, 2023


The first naive question that arises when a newcomer encounters this term is, “Is this a typical engineering course, analogous to others like CompSci?”.

Actually it’s not!

In simple terms, it is just a way of carefully crafting your query so that language models can generate the most appropriate results. To know what language models are, you can read this.

To be data-centric, I went to Google and searched for:
```When was the first time “prompt engineering” used?```

or

```Who coined the term “prompt engineering”?```

And the response I got is:

As you can see, this is actually a very recent term (it’s not even a year old), first introduced by Google researchers. In my opinion, its popularity and usage have burgeoned since the inception of chatbots like ChatGPT.

Now let’s get into some technicalities and understand the features and key points related to “Prompt Engineering”, which will definitely help you in your prompting tasks.

Note: I have a surprise for you 🥳 at the end, which will make sense once you understand the points stated below. So stay focused!

Prompt engineering is the process of crafting effective and precise instructions, or prompts, to guide a language model like ChatGPT toward generating the desired responses. It involves carefully formulating the input text provided to the model in order to elicit the desired output. Prompt engineering plays a crucial role in shaping the behavior and output quality of a language model, allowing users to obtain more accurate and relevant responses.

Let’s delve deeper into the components and techniques involved in prompt engineering:

1. Task specification:

One of the primary goals of prompt engineering is to clearly define the task or objective for the language model. This involves providing explicit instructions or questions that convey the desired output. For example, if you want to know the weather forecast, a suitable prompt could be: “What will the weather be like in New York City tomorrow?”. By specifying the task, you help the model understand the context and generate more accurate responses.
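To make the idea concrete, here is a minimal sketch contrasting a vague request with a fully specified one. The wording of the prompts is illustrative only:

```python
# A vague prompt leaves the task, place, and time frame for the model to guess.
vague_prompt = "Weather?"

# An explicit prompt names the task (a forecast), the location, the time,
# and the exact fields the answer should contain.
explicit_prompt = (
    "What will the weather be like in New York City tomorrow? "
    "Answer with the expected high and low temperature and the chance of rain."
)

print(explicit_prompt)
```

The explicit version leaves far less room for the model to answer about the wrong city or the wrong day.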

2. Context and background:

Setting the context is crucial for the language model to comprehend the conversation or query. By providing relevant background information, you establish the necessary context for the model to generate informed responses. This can include details about the topic, previous statements, or any other relevant information to ensure the model understands the conversation contextually.
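One simple way to do this is to prepend the background to the actual question before sending it. A sketch, with a hypothetical project as the background:

```python
# Background the model needs in order to answer contextually.
# The Flask/SQLite project here is purely illustrative.
context = (
    "Background: I am building a Flask web app that stores user notes "
    "in a SQLite database. Earlier you suggested using SQLAlchemy."
)

question = "How should I structure the database models to support tagging notes?"

# The final prompt carries both the background and the query.
prompt = f"{context}\n\n{question}"
print(prompt)
```

Without the background block, the model would have to guess the framework, the database, and the earlier conversation.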

3. System behaviour and persona:

Prompt engineering allows users to shape the behavior and persona of the language model. By specifying the desired style, tone, or characteristics, you can guide the model to respond in a particular manner. For example, you can instruct the model to respond like a specific character, adopt a formal or informal tone, or emulate the speech pattern of a particular era. This control over system behavior helps customize the outputs according to specific requirements.
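In chat-style interfaces this is typically done with a system-level instruction that precedes the user’s question. A sketch using the common role/content message format (the persona and question are made up):

```python
# A system message fixes tone and persona before the user's question is seen.
messages = [
    {
        "role": "system",
        "content": (
            "You are a formal 19th-century English butler. "
            "Answer politely and address the user as 'Sir or Madam'."
        ),
    },
    {"role": "user", "content": "What is the capital of France?"},
]

# Print the conversation as it would be sent to a chat model.
for m in messages:
    print(f"{m['role']}: {m['content']}")
```

Swapping only the system message changes the voice of every subsequent answer without touching the user’s questions.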

4. Length and format:

You can provide guidelines on the desired length or format of the response. This can involve specifying a word count limit, requesting a bulleted list, or asking for a detailed explanation. Such instructions help the model generate outputs that align with the expected format, making it easier to process the information provided.
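These constraints can simply be appended to the base request. A sketch with hypothetical limits:

```python
# Start with the bare task.
base_request = "Explain what prompt engineering is."

# Length and format constraints, stated explicitly.
constraints = [
    "Answer in at most 50 words.",
    "Use a bulleted list with exactly 3 bullets.",
]

# Attach the constraints below the request, one per line.
prompt = base_request + "\n" + "\n".join(f"- {c}" for c in constraints)
print(prompt)
```

The model now knows both what to say and what shape the answer should take.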

5. Calibration and fine-tuning:

Prompt engineering can also involve fine-tuning or calibrating the model to improve its performance on specific tasks. This can be achieved by training the model on custom datasets or by using reinforcement learning techniques. Fine-tuning allows the model to become more specialized and proficient at generating accurate responses for a specific domain or task.
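Such custom datasets are often supplied as JSON-lines files of example conversations. A sketch of building one training record; treat the exact schema as an assumption to check against your provider’s fine-tuning documentation:

```python
import json

# One training example: a user prompt paired with the ideal assistant reply.
# The weather domain and the role/content field names are illustrative.
record = {
    "messages": [
        {"role": "user", "content": "What will the weather be like in NYC tomorrow?"},
        {"role": "assistant", "content": "Sunny, with a high of 24 C and a low of 15 C."},
    ]
}

# A .jsonl training file holds one such JSON object per line.
line = json.dumps(record)
print(line)
```

Hundreds or thousands of such lines teach the model the domain’s expected question-and-answer style.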

6. Iterative refinement:

Prompt engineering often follows an iterative process. Users analyze the model’s outputs and refine the prompts based on the results obtained. By examining the generated responses, users can identify errors, biases, or inconsistencies and make the necessary adjustments to the prompts. This iterative feedback loop helps improve performance and align the model’s outputs with the desired outcomes.
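The loop can be sketched in a few lines: send a prompt, check the output against a simple requirement, and tighten the prompt if the check fails. The model call below is a stand-in stub, not a real API:

```python
def fake_model(prompt: str) -> str:
    # Stub standing in for a real model call: it only answers briefly
    # when the prompt explicitly asks for one sentence.
    if "one sentence" in prompt:
        return "Short answer."
    return "A very long and rambling answer that does not stop."

prompt = "Summarize the article."
reply = ""
for attempt in range(3):
    reply = fake_model(prompt)
    if len(reply.split()) <= 5:           # our requirement: a brief reply
        break
    prompt += " Answer in one sentence."  # refine the prompt and retry

print(prompt)
print(reply)
```

After one refinement the prompt carries the length instruction and the (stubbed) model satisfies the requirement; with a real model you would inspect the outputs yourself and adjust accordingly.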

So these were some of the important techniques which you “should” incorporate into your queries for better results and productivity.

I hope you now have an idea of what prompt engineering is, and it’s fine if not everything is clear yet. My intention was to make you aware of this term so that in the coming future you can read and explore it more.

Finally, let’s reveal the surprise: I have attached a PDF document about prompt engineering and its practical use cases with ChatGPT. Thanks to the SuperDataScience Team for sharing. You can find it here.

Happy Prompting 🚀!!


Ex-Research Intern @DRDO · Always learning · Loves to talk about Data Science and Life Experiences