Prompt Engineering — Part 2

Using intelligence to use Artificial Intelligence: A deep dive into Prompt Engineering

Research Graph
7 min read · Mar 26, 2024


Introduction

In the previous article we discussed what prompt engineering is and some of the techniques used in it. Prompt engineering can be defined as the technique of constructing an effective prompt or query in order to get the desired output from an AI model. For further reading, the link to the previous article can be found here. In this article we will discuss some more advanced prompt engineering techniques, and we will also focus on the applications of prompt engineering.

Automatic Reasoning and Tool-use (ART)

This prompting technique combines automatic chain-of-thought prompting with tool use. The intuition behind ART is that, given a task and an input, the large language model (LLM) first retrieves similar tasks from a task library. These tasks are added to the prompt as examples in a specific format, which allows the LLM to learn from them. After learning from these examples, the LLM generates the output, pausing where necessary to call external tools.

Example of Automatic Reasoning and Tool Use. Source: Paranjape et al., (2023).
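The retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the task library, the demonstrations, and the word-overlap similarity metric are all placeholder assumptions standing in for the real library and retriever.

```python
# Hypothetical task library: each entry pairs a task description with a
# worked demonstration (possibly involving a tool call).
TASK_LIBRARY = [
    {"task": "add two numbers", "demo": "Q: 2 + 3\nA: [code] print(2 + 3) -> 5"},
    {"task": "translate english to french", "demo": "Q: 'cat'\nA: [tool: translate] 'chat'"},
    {"task": "find the date next friday", "demo": "Q: today is Monday\nA: [tool: calendar] in 4 days"},
]

def similarity(a: str, b: str) -> float:
    """Crude word-overlap similarity between two task descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_art_prompt(new_task: str, new_input: str, k: int = 2) -> str:
    """Retrieve the k most similar library tasks and format a few-shot prompt."""
    ranked = sorted(TASK_LIBRARY, key=lambda e: similarity(e["task"], new_task), reverse=True)
    demos = "\n\n".join(e["demo"] for e in ranked[:k])
    return f"{demos}\n\nQ: {new_input}\nA:"

prompt = build_art_prompt("add three numbers", "4 + 5 + 6")
```

The resulting prompt places the most relevant solved tasks before the new question, so the LLM can imitate both the reasoning format and the tool calls it sees in the demonstrations.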

Automatic Prompt Engineering (APE)

Automatic Prompt Engineering can be thought of as an unsupervised machine learning algorithm. In this technique the LLM is given three inputs: a dataset of example user inputs, a dataset of the corresponding desired outputs, and a prompt template. The LLM then generates candidate prompts itself, and these candidates can be evaluated and the best one selected.

Example of Automatic Prompt Engineering. Source: Zhou et al., (2022).
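The generate-and-evaluate loop can be sketched as below. As a hedge: the candidate instructions are hand-written here rather than LLM-generated, and `mock_model` is a scripted stand-in for a real LLM call; only the scoring logic reflects the APE idea of ranking candidate prompts by how well they reproduce the desired outputs.

```python
# Input/output pairs: for each input word, the desired output is its antonym.
inputs  = ["happy", "tall"]
outputs = ["sad", "short"]

# Candidate instructions (in APE these would be proposed by an LLM).
candidates = [
    "Repeat the word.",
    "Give the antonym of the word.",
]

ANTONYMS = {"happy": "sad", "tall": "short"}

def mock_model(instruction: str, word: str) -> str:
    """Stand-in LLM that follows the instruction literally."""
    if "antonym" in instruction.lower():
        return ANTONYMS.get(word, word)
    return word

def score(instruction: str) -> float:
    """Fraction of examples where the instruction yields the desired output."""
    hits = sum(mock_model(instruction, x) == y for x, y in zip(inputs, outputs))
    return hits / len(inputs)

best = max(candidates, key=score)
```

The candidate that best explains the input/output pairs wins, which is why APE behaves like an unsupervised search over instructions rather than over model weights.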

Active Prompt

The active prompting method builds on the idea of chain-of-thought (CoT). CoT relies on a fixed set of human-annotated questions. In active prompting, by contrast, a set of training questions is first given to the LLM. The answers generated by the LLM are scored with an uncertainty metric, and the most uncertain questions are selected for human inspection and annotation. In the final stage, these annotated examples are added to the next input prompt to reduce the uncertainty in the previously generated answers.

Example of Active-Prompt. Source: Diao et al., (2023).
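The selection step can be sketched as follows. The sampled answers are hard-coded here instead of coming from repeated LLM calls, and the disagreement metric is one simple choice of uncertainty measure among the several the paper considers.

```python
from collections import Counter

# Several sampled answers per question (in practice, multiple LLM generations).
sampled_answers = {
    "What is 12 * 12?":              ["144", "144", "144", "144"],
    "How many r's in 'strawberry'?": ["2", "3", "2", "3"],
}

def disagreement(answers: list[str]) -> float:
    """Uncertainty as 1 minus the frequency of the most common answer."""
    top_count = Counter(answers).most_common(1)[0][1]
    return 1 - top_count / len(answers)

# The most uncertain question is flagged for human annotation.
most_uncertain = max(sampled_answers, key=lambda q: disagreement(sampled_answers[q]))
```

Questions where the model's samples all agree score zero and need no annotation; human effort is spent only where the model is genuinely unsure.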

Directional Stimulus Prompting

In this prompting technique a small hint or cue is added to the initial prompt, steering the model to reason in that direction. This also helps the model understand what output the user desires.

Example of Directional Stimulus Prompting. Source: Li et al., (2023).
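A directional stimulus can be as simple as a line of keywords appended to the prompt. In the sketch below the hint is written by hand for illustration; in the paper the stimulus is produced by a small trained policy model, and the template wording is an assumption.

```python
def build_dsp_prompt(article: str, hint_keywords: list[str]) -> str:
    """Assemble a summarisation prompt with an explicit keyword hint."""
    hint = "; ".join(hint_keywords)
    return (
        f"Article: {article}\n"
        f"Hint: {hint}\n"
        "Summarise the article in one sentence, covering the hint keywords."
    )

prompt = build_dsp_prompt(
    "The city council approved a new cycling lane on Main Street...",
    ["city council", "cycling lane", "Main Street"],
)
```

Without the hint line, the model is free to summarise anything; with it, the desired focus of the output is made explicit.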

Program-Aided Language Models (PAL)

This prompting technique is similar to CoT. The difference is that instead of producing the solution as free-form text, PAL offloads the solution step to a programmatic runtime such as a Python interpreter.

Example of PAL Prompt. Source: Gao et al., (2022).
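The division of labour can be sketched as follows: the model writes the solution steps as code (here `generated_code` is a hard-coded stand-in for the model's output), and the runtime, not the LLM, computes the final answer.

```python
# Stand-in for code the LLM would generate for a word problem:
# "Roger has 5 tennis balls. He buys 2 cans with 3 balls each. How many now?"
generated_code = """
tennis_balls = 5
bought = 2 * 3
answer = tennis_balls + bought
"""

# The programmatic runtime executes the model's reasoning steps,
# so arithmetic errors from free-form text generation are avoided.
namespace = {}
exec(generated_code, namespace)
result = namespace["answer"]
```

This is why PAL helps on arithmetic-heavy tasks: the model only has to translate the problem into code correctly, and the interpreter guarantees the computation itself is exact.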

ReAct Prompting

In ReAct prompting, the goal is to use the LLM to generate both reasoning traces and task-specific actions in an interleaved manner.

This enables the LLM to produce verbose reasoning traces alongside actions for a task, forcing the model to perform dynamic reasoning. It also allows the model to incorporate additional information, returned by the actions, into its reasoning.

Example of ReAct Prompt. Source: Yao et al., (2022).
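The interleaving loop can be sketched as below. The model turns are scripted strings rather than real LLM generations, and the lookup table stands in for a real tool such as a search API; only the parse-act-observe control flow reflects the ReAct pattern.

```python
# Stand-in "tool": a tiny fact table instead of a real search API.
FACTS = {"capital of France": "Paris"}

def lookup(query: str) -> str:
    return FACTS.get(query, "unknown")

# Scripted model turns; a real system would call an LLM at each step,
# feeding the accumulated trace back in.
scripted_turns = [
    "Thought: I need to find the capital of France.\nAction: lookup[capital of France]",
    "Thought: The observation answers the question.\nAction: finish[Paris]",
]

def react(turns: list[str]):
    """Parse each Action, run the tool, append the Observation to the trace."""
    trace = []
    for turn in turns:
        trace.append(turn)
        action = turn.rsplit("Action: ", 1)[1]
        name, arg = action.split("[", 1)
        arg = arg.rstrip("]")
        if name == "finish":
            return arg, trace
        trace.append(f"Observation: {lookup(arg)}")
    return None, trace

answer, trace = react(scripted_turns)
```

The key point is visible in the trace: each Thought motivates an Action, and each Observation feeds new external information into the next Thought.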

Reflexion Prompting

In the Reflexion prompting technique, the LLM learns through a form of reinforcement learning based on language feedback. After generating an output, the LLM receives verbal, language-based feedback on it. This feedback is kept in memory and allows the LLM to generate better outputs on the next attempt.

Example of Reflexion Prompt. Source: https://www.promptingguide.ai/techniques/reflexion.

Graph-of-Thought Prompting (GoT)

GoT prompting combines textual reasoning with graph-structured and visual inputs for solving reasoning tasks. It can take three inputs:

  • Text: The normal textual input used in any prompt-based method
  • Image: An optional image relevant to the reasoning task at hand
  • Thought Graph: A graph of the named entities and the relationships between them

Representing chains of thought as nodes and the connections between them allows Graph-of-Thought to capture the rich, non-sequential nature of human thinking. This allows for a more realistic and logical modelling of reasoning processes.

Example of GoT Prompt. Source: Yao et al., (2023).
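The thought-graph input can be sketched as a set of named entities and labelled edges serialised into the prompt. The entities, relation names, and triple format below are illustrative assumptions, not the paper's exact encoding.

```python
# A tiny thought graph: named entities as nodes, relationships as
# labelled (head, relation, tail) edges.
thought_graph = {
    "nodes": ["Alice", "Bob", "Acme Corp"],
    "edges": [
        ("Alice", "works_at", "Acme Corp"),
        ("Bob", "manages", "Alice"),
    ],
}

def serialise_graph(graph: dict) -> str:
    """Flatten the graph into one (head, relation, tail) triple per line."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in graph["edges"])

prompt = (
    "Question: Who manages an employee of Acme Corp?\n"
    "Thought graph:\n" + serialise_graph(thought_graph)
)
```

Because relationships are explicit edges rather than sentences buried in text, the model can follow non-sequential paths through the graph (here Acme Corp → Alice → Bob) when reasoning.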

Applications

LLMs are among the most widely used AI tools today, enabling users to carry out a wide variety of tasks. Prompt engineering helps users take the potential of LLMs even further. Various applications of prompt engineering are:

  • Generating Data: LLMs have strong data-generation capabilities. Effective prompting techniques can help steer the model in the right direction, resulting in factually correct and consistent data.
  • Healthcare: AI models are increasingly able to produce accurate medical information, diagnoses, and personalised treatment suggestions. Medical professionals can use prompt engineering techniques to achieve better and more consistent results.
  • Customer Service: LLMs are capable of generating information that can be utilised by users. Provided with the right prompts, LLMs can give accurate responses to customers, improving both customer satisfaction and efficiency.
  • Legal and Compliance: Prompt engineering, combined with the power of LLMs, can be utilised for document review and legal research. It can help eliminate discrepancies and grey areas in documents, thereby reducing compliance risk.
  • Education: AI is transforming the way education is consumed. Both teachers and students can benefit from prompt engineering: teachers can use it to improve and tailor the teaching experience, while students can use it to understand complex concepts.
  • Data Analysis: Prompt engineering can provide invaluable support for data analysis. Researchers and data analysts can gain insights from data by putting in the right prompt.
  • Language Translation: AI models have improved significantly on language translation tasks. Prompt engineering can raise their performance even further; using techniques such as CoT and careful context-setting, personalised translation tasks can be achieved.

Challenges

Even though prompt engineering is helping users better utilise LLMs, there are still some challenges that it needs to overcome. Some of those challenges are:

  • Achieving Clarity and Specificity: Even though prompt engineering helps us devise useful prompts for AI models, creating a clear and specific prompt remains difficult. Iterative prompting techniques can be utilised to overcome this challenge.
  • Ambiguity and Misinterpretation: AI models tend to misinterpret commands, which leads to hallucinations and incorrect responses. Adding more context to prompts can help overcome these issues.
  • Managing AI Bias: AI models can unintentionally reproduce biases present in their training data, leading to skewed responses. Careful adjustment and regular review of prompts can be used to counteract this bias.
  • Adapting to New AI Capabilities: With the field of AI growing at a rapid pace, it is difficult for prompt engineering techniques to keep up. The best way to overcome this challenge is to ensure that prompting techniques are continually developed and updated alongside improved AI models.
  • Increasing Efficiency: Even though prompt engineering techniques help in devising prompts that yield correct outputs, the process is both time- and resource-consuming. More efficient prompt engineering methods need to be developed to overcome this challenge.

Conclusion

In this article we covered advanced techniques of prompt engineering and its applications. Looking ahead, the future of prompt engineering is bright: it holds the potential to help humans fully utilise new AI developments. Research is being conducted around the world to come up with more sophisticated prompt engineering techniques. Lastly, as prompt engineering gains popularity, AI models will become more adept at understanding these prompts and following the instructions given.
