Prompt Engineering

Using intelligence to use Artificial Intelligence: A deep dive into Prompt Engineering

Research Graph
Mar 19, 2024

Introduction

Large Language Models (LLMs) have become the new normal in the field of Natural Language Processing (NLP). With their improved performance and generative power, people around the world rely on them for a wide range of tasks. However, they sometimes generate incorrect answers. The chances of this can be reduced through a technique called prompt engineering. In this article, we will discuss in detail what prompt engineering is and some of the techniques that can be used.

Before diving into prompt engineering, however, we must understand how LLMs work. Under the hood, an LLM is a text generation tool that predicts the next word, one word at a time, by assigning probabilities to candidate continuations. In ChatGPT and other generative AI models, the LLM relates the user's input to the patterns it learned from its training data and generates output from that mapping. When the input maps poorly onto what the model has learned, the LLM can hallucinate, generating confident but absurd outputs. To fully utilise the potential of LLMs, we need to craft inputs that steer the model toward the relevant parts of what it has learned. This is exactly what prompt engineering is.
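To make "predicting word after word from probabilities" concrete, here is a toy illustration in Python. The distributions are hand-made for the example; a real LLM learns them from its training data.

```python
import random

# Hand-made next-word distributions standing in for a trained model.
NEXT_WORD_PROBS = {
    "The capital of France": {"is": 0.95, "was": 0.05},
    "The capital of France is": {"Paris": 0.9, "Lyon": 0.1},
}

def predict_next(context: str) -> str:
    probs = NEXT_WORD_PROBS[context]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

text = "The capital of France"
for _ in range(2):
    text = text + " " + predict_next(text)
print(text)  # most often: "The capital of France is Paris"
```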

What is Prompt Engineering?

Prompt engineering can be defined as eliciting the desired output from an LLM by giving it a set of clear instructions about that output. Prompt engineering also helps us better understand the capabilities and limitations of LLMs. The practice is more than just designing and developing prompts: it comprises a wide range of skills and techniques for interacting with, developing, and testing LLMs.

A simple prompt with some instructions can get you an output; however, to obtain good-quality results, the prompt must be well crafted. A prompt can contain instructions, questions, and other details that provide more context, such as hints about the expected answer.

Elements of a Prompt

A prompt can contain any of the following elements:

  • Instruction: any particular task the user wants the model to perform
  • Context: additional information the model can use to generate the desired output
  • Input data: the input or question whose response is required
  • Output indicator: the format of the output
Example of the prompt given in ChatGPT. It contains an instruction, a context and the format of the desired output.
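As a minimal sketch, the four elements above can be assembled into a single prompt string. The element values here are illustrative, not taken from the example image.

```python
# One illustrative value for each of the four prompt elements.
instruction = "Classify the review below as Positive or Negative."
context = "The reviews come from an online bookstore."
input_data = "Review: 'The plot was predictable and the pacing dragged.'"
output_indicator = "Answer with a single word."

prompt = "\n".join([instruction, context, input_data, output_indicator])
print(prompt)
```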

Prompt Engineering Techniques

There are several prompt engineering techniques based on the task at hand.

  • Zero-Shot Prompting: In zero-shot prompting, the model is given a direct instruction with no examples or demonstrations in the prompt and is expected to perform the task from its training alone (a minimal sketch follows the example below). When zero-shot prompting doesn’t work, few-shot prompting is used.
Example of Zero-Shot Prompting. Source: https://machinelearningmastery.com/what-are-zero-shot-prompting-and-few-shot-prompting/
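A minimal zero-shot sketch, assuming a hypothetical llm() helper that wraps whichever model API you use. Note the prompt states the task directly, with no worked examples.

```python
def llm(prompt: str) -> str:
    """Hypothetical helper: call whichever model API you use here."""
    raise NotImplementedError

zero_shot_prompt = (
    "Classify the text into neutral, negative or positive.\n"
    "Text: I think the vacation was okay.\n"
    "Sentiment:"
)
# sentiment = llm(zero_shot_prompt)
```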
  • Few-Shot Prompting: In few-shot prompting, the model is given a few demonstrations, i.e. worked input-output examples, in the prompt and is expected to infer the pattern from them and produce output in the same style (see the sketch below).
Example of Few-Shot Prompting. Source: https://machinelearningmastery.com/what-are-zero-shot-prompting-and-few-shot-prompting/
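A few-shot sketch with made-up sentiment examples; the model would be expected to continue the pattern with a single label.

```python
# A handful of worked examples the model can infer the pattern from.
examples = [
    ("This movie was a waste of time.", "Negative"),
    ("An absolute masterpiece from start to finish.", "Positive"),
    ("The soundtrack was great, but the story fell flat.", "Negative"),
]

few_shot_prompt = "\n".join(
    f"Review: {text}\nSentiment: {label}" for text, label in examples
)
# The new input, left for the model to complete.
few_shot_prompt += "\nReview: The acting felt wooden.\nSentiment:"
print(few_shot_prompt)
```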
  • Chain-of-Thought (CoT) Prompting: This technique has the LLM work through complex reasoning via intermediate steps before giving its final answer. CoT can be combined with few-shot prompting so that the demonstrations include the reasoning as well as the answer. In addition to the zero-shot and few-shot variants, there is also automatic chain-of-thought (Auto-CoT), which uses the “Let’s think step by step” trigger phrase to have the LLM generate reasoning demonstrations automatically. Both variants are sketched after the example below.
Example of CoT prompting. Source: https://www.promptingguide.ai/techniques/cot
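A sketch of both variants, using the widely cited tennis-ball exemplar from the original CoT work; the prompts are plain strings you would pass to your model.

```python
# Few-shot CoT: the demonstration shows the reasoning, not just the answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n"
    "\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"
)

# Zero-shot CoT: no demonstration, just the trigger phrase.
zero_shot_cot = (
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have? Let's think step by step."
)
```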
  • Self-Consistency Prompting: The idea behind this technique is to sample multiple, diverse reasoning paths for the same question through few-shot CoT and then keep the most consistent final answer. Aggregating over several independent lines of reasoning increases the accuracy of the generated answer and explanation (see the sketch below).
Example of Self-Consistency prompting. Source: https://learnprompting.org/docs/intermediate/self_consistency
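A minimal self-consistency loop, assuming hypothetical sample_llm() and extract_answer() helpers and completions that end with “The answer is X.”

```python
from collections import Counter

def sample_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical helper: draw one completion at non-zero temperature."""
    raise NotImplementedError

def extract_answer(completion: str) -> str:
    # Assumes completions end with "... The answer is X."
    return completion.rsplit("The answer is", 1)[-1].strip(" .")

def self_consistent_answer(cot_prompt: str, n_samples: int = 5) -> str:
    # Sample several diverse reasoning paths, then take a majority vote.
    answers = [extract_answer(sample_llm(cot_prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```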
  • Generated Knowledge Prompting: In this technique, the user first asks the LLM to generate some knowledge about a particular topic, which is then used to generate further content. For example, if you have to write a blog about cricket, you first ask the LLM to state some facts about cricket; the knowledge it generates can then be fed back in to help it write a better blog (sketched below).
Example of Generated Knowledge prompting. Source: https://www.promptingguide.ai/techniques/knowledge
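A two-step sketch of the cricket example from the text, again assuming a hypothetical llm() helper.

```python
def llm(prompt: str) -> str:
    """Hypothetical helper: call whichever model API you use here."""
    raise NotImplementedError

# Step 1: ask the model to generate knowledge about the topic.
facts = llm("State five factual points about cricket.")

# Step 2: feed that knowledge back in as context for the real task.
blog = llm(
    "Using the facts below, write a short blog post about cricket.\n"
    f"Facts:\n{facts}"
)
```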
  • Prompt Chaining: In this technique, a task is broken into a set of smaller sub-prompts, and the response to each one is fed along with the next prompt. This splits a big problem that the LLM might struggle to address in one go into smaller, tractable sub-problems (see the sketch below).
Example of prompt chaining. Source: https://txt.cohere.com/chaining-prompts/
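A two-link chain in Python with a hypothetical llm() helper; the quote-extraction scenario is illustrative, chosen to mirror the document-question pattern this technique is often shown with.

```python
def llm(prompt: str) -> str:
    """Hypothetical helper: call whichever model API you use here."""
    raise NotImplementedError

document = "..."  # the source text to reason over

# Sub-prompt 1: extract the relevant quotes from the document.
quotes = llm(f"Extract all quotes about pricing from this document:\n{document}")

# Sub-prompt 2: the previous response is fed along with the next prompt.
answer = llm(f"Using only these quotes, summarise the pricing policy:\n{quotes}")
```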
  • Tree of Thoughts (ToT) prompting: This technique generalises the well-established chain-of-thought approach. Instead of following a single reasoning chain, the LLM proposes several candidate “thoughts” at each step, evaluates the partial solutions they lead to, and searches over the resulting tree, giving it stronger reasoning on problems that need planning or backtracking (a simplified search loop is sketched below).
Example of Tree of Thoughts prompting. Source: https://www.promptingguide.ai/techniques/tot
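A heavily simplified ToT search loop, in effect a small beam search, assuming hypothetical propose() and score() helpers that would each be implemented with LLM calls.

```python
def propose(state: str, k: int = 3) -> list[str]:
    """Hypothetical helper: ask the model for k candidate next thoughts."""
    raise NotImplementedError

def score(state: str) -> float:
    """Hypothetical helper: ask the model to rate a partial solution."""
    raise NotImplementedError

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        # Expand every kept state with several candidate thoughts...
        candidates = [s + "\n" + t for s in frontier for t in propose(s)]
        # ...then keep only the best-scoring partial solutions.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]
```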
  • Retrieval Augmented Generation (RAG): This technique supports an LLM with a large external corpus of data. The LLM is augmented with a retriever system that fetches the documents relevant to the user’s query; placing that retrieved context in the prompt allows the model to ground its answer in the required knowledge rather than relying on its training data alone (a toy sketch follows the diagram below).
Diagram of the original RAG model. Source: Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. https://doi.org/10.48550/arXiv.2005.11401
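A toy RAG sketch where word overlap stands in for a real retriever’s embedding similarity; the corpus and question are made up for illustration.

```python
# A toy three-document corpus standing in for a real knowledge base.
CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Photosynthesis converts light energy into chemical energy.",
    "The Great Wall of China is over 13,000 miles long.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Word-overlap ranking: a stand-in for embedding similarity.
    q = set(query.lower().split())
    return sorted(
        CORPUS,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )[:k]

query = "When was the Eiffel Tower completed?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context.\nContext:\n{context}\nQuestion: {query}"
print(prompt)
```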

These techniques are just the tip of the iceberg. Others, such as automatic prompt engineering, Active-Prompt, and directional stimulus prompting, can also be used.

Even though prompt engineering helps utilise the potential of an LLM, certain rules of thumb should be followed. These include putting instructions at the beginning of the prompt, being exhaustive and specific about the desired output, starting with zero-shot and then moving to few-shot prompting, and emphasising what the model should do instead of what it shouldn’t. A small illustration follows.
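Here is a before-and-after illustration of those rules of thumb, with made-up prompts.

```python
# What to avoid: vague, and phrased as a "don't".
weak_prompt = "Don't write a long answer about APIs."

# Instruction first, specific about the desired output, phrased as a "do".
better_prompt = (
    "Summarise what a REST API is in exactly three sentences, "
    "aimed at a non-technical reader, using one everyday analogy."
)
```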

Conclusion

LLMs have become a popular tool, yet many people are still unable to utilise them to their full potential. Prompt engineering lets users tailor their input commands so that the LLM can generate a better response: the more care the user puts into the prompt, the more useful the model’s output becomes.
