Multi-Shot Prompting

Anmol Talwar
3 min read · Jun 20, 2024


While large language models demonstrate remarkable zero-shot capabilities, they still fall short on more complex tasks in the zero-shot setting.

  • Multi-shot prompting is a technique that enables in-context learning: we provide demonstrations in the prompt to steer the model toward better performance, as sketched in the example below.
  • These demonstrations can lead to significantly improved results, because the model gains a clearer understanding of the task it needs to perform.
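
To make this concrete, here is a minimal sketch (plain Python, no particular LLM library) of how a multi-shot prompt is assembled: a handful of input/output demonstrations followed by the actual query. The translation pairs are purely illustrative.

```python
# A multi-shot prompt is simply demonstrations + query in one string.
# Each demonstration shows the model the input/output pattern to follow.
demonstrations = [
    ("Translate to French: Hello", "Bonjour"),
    ("Translate to French: Thank you", "Merci"),
]
query = "Translate to French: Good night"

# Stitch the demonstrations and the query into a single prompt.
prompt = "\n".join(f"{q}\n{a}" for q, a in demonstrations)
prompt += f"\n{query}\n"
print(prompt)
```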

To understand multi-shot prompting with visual aids, and to see a practical implementation in Python, watch my video:

Multi-Shot Prompting
MULTI-SHOT EXAMPLE

In the above example, since the word “farduddle” is not part of the English vocabulary, the pre-trained model has never been exposed to it; we therefore need to give the model a reference (an example) of how to use such a word in a sentence.
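
A sketch of what such a prompt can look like. The exact wording in the figure may differ; the companion word “whatpu” here is another made-up word, used purely as the demonstration:

```python
# One demonstration teaches the pattern (made-up word -> usage in a
# sentence); the model is then asked to do the same for "farduddle".
prompt = (
    'A "whatpu" is a small, furry animal native to Tanzania. '
    "An example of a sentence that uses the word whatpu is:\n"
    "We were traveling in Africa and we saw these very cute whatpus.\n\n"
    '"To do a farduddle" means to jump up and down really fast. '
    "An example of a sentence that uses the word farduddle is:"
)
print(prompt)
```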

Comparing Zero-Shot & Multi-Shot

Zero-Shot vs Multi-Shot

For the user query asking to calculate ROI, the zero-shot prompt gave an irrelevant response, whereas a multi-shot prompt with 2 examples (similar to the user query) gave the correct one. Providing relevant references helped the model solve the mathematical problem correctly.
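
Here is a sketch of that comparison in code, assuming the openai Python package (v1+) with an OPENAI_API_KEY set in the environment; the two worked examples and the model name are illustrative, not the exact ones from the figure:

```python
from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

# Two worked ROI examples, then the real query. The figures here are
# illustrative stand-ins for the ones in the article's screenshot.
few_shot_prompt = (
    "Q: I invested $1,000 and got back $1,200. What is my ROI?\n"
    "A: ROI = (1200 - 1000) / 1000 = 0.20, i.e. 20%.\n\n"
    "Q: I invested $500 and got back $450. What is my ROI?\n"
    "A: ROI = (450 - 500) / 500 = -0.10, i.e. -10%.\n\n"
    "Q: I invested $2,000 and got back $2,500. What is my ROI?\n"
    "A:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```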

To learn more about efficient zero-shot prompting, refer to my blog:

Types of Multi-Shot Prompting [One-Shot & Few-Shot]

One-Shot vs Few-Shot Prompting

Objective in the above example: classify the sentiment of a product review as Positive, Negative, or Neutral.

One-Shot Prompting

Wrong response: the model could not learn the task from just one example (a single Positive review) given as a reference.

VS

Few-Shot Prompting

Correct response: the model learned better with three reference examples, one for each of the three classes.
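
A sketch of the few-shot variant of that prompt; the review texts are illustrative stand-ins for the ones shown in the figure:

```python
# One illustrative example per class, then the review to classify.
few_shot_prompt = (
    "Classify the sentiment of the product review as Positive, "
    "Negative or Neutral.\n\n"
    "Review: This blender is amazing, it crushes ice in seconds!\n"
    "Sentiment: Positive\n\n"
    "Review: The handle snapped after two days. Waste of money.\n"
    "Sentiment: Negative\n\n"
    "Review: The package arrived on the expected delivery date.\n"
    "Sentiment: Neutral\n\n"
    "Review: The battery lasts all day and charges quickly.\n"
    "Sentiment:"
)
print(few_shot_prompt)
```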

Limitations

Scalability and Practicality

  • Providing examples for every possible task or variation, especially highly specialized ones, can be impractical.
  • For tasks involving complex legal language or technical jargon, creating relevant examples can be time-consuming and require domain expertise.

Overfitting to Examples

  • The model might overfit to the given examples, meaning it performs well on similar inputs but poorly on slightly different ones.

Context Length Limitation

  • Language models have a maximum context length they can handle. Providing too many examples can exceed this limit, truncating the input and losing critical information. A quick token count, as sketched below, can catch this before a request is sent.
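
One practical guard, assuming the tiktoken package: count the prompt’s tokens and keep only as many demonstrations as fit an illustrative budget (real limits vary by model):

```python
import tiktoken  # assumes `pip install tiktoken`

encoding = tiktoken.get_encoding("cl100k_base")

def n_tokens(text: str) -> int:
    """Number of tokens `text` occupies under this encoding."""
    return len(encoding.encode(text))

MAX_PROMPT_TOKENS = 4_096  # illustrative budget; real limits vary by model

demonstrations = [
    "Review: This blender is amazing!\nSentiment: Positive",
    "Review: The handle snapped after two days.\nSentiment: Negative",
    "Review: Arrived on the expected date.\nSentiment: Neutral",
]
query = "Review: The battery lasts all day.\nSentiment:"

# Greedily keep demonstrations while the total stays within budget
# (separator tokens are ignored here for simplicity).
kept, used = [], n_tokens(query)
for demo in demonstrations:
    cost = n_tokens(demo)
    if used + cost > MAX_PROMPT_TOKENS:
        break
    kept.append(demo)
    used += cost

prompt = "\n\n".join(kept + [query])
print(f"Prompt uses ~{used} tokens with {len(kept)} demonstrations.")
```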

In summary, multi-shot prompting is a technique used in natural language processing (NLP), particularly when working with large language models, where multiple examples of the desired output are provided to the model within a single prompt. This guides the model’s responses more effectively by giving it a clearer idea of the expected pattern or format.
