Prompt Engineering & In-Context Learning

Sean Gahagan
2 min read · Sep 7, 2024


In this note, we’ll look at ways of improving the performance of an LLM by changing only the prompt, without changing the underlying model.

When you provide a prompt for an LLM to complete a task without providing any examples of how the task should be completed, this is called “zero-shot learning”.

Example of a prompt with no examples.
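As a minimal sketch of such a zero-shot prompt (the sentiment-classification task and its wording here are illustrative assumptions, not taken from the article’s image):

```python
# A zero-shot prompt: the task is stated directly, with no completed
# examples for the model to imitate. (Hypothetical task wording.)
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: I loved this movie!\n"
    "Sentiment:"
)
print(zero_shot_prompt)
```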

When you include an example of the completed task in your prompt, this is called “one-shot learning”.

Example of a prompt with one example of a completed task.
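A hypothetical one-shot version of the same prompt, with a single completed example ahead of the new task (again, the wording is an assumption for illustration):

```python
# A one-shot prompt: one completed Review/Sentiment pair is shown
# before the task the model should complete. (Hypothetical wording.)
one_shot_prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The plot was dull and predictable.\n"
    "Sentiment: negative\n"
    "Review: I loved this movie!\n"
    "Sentiment:"
)
print(one_shot_prompt)
```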

When you include multiple examples of the completed task in your prompt, this is called “few-shot learning”.

Example of a prompt with two examples of completed tasks. The user is hoping the LLM will complete this with “Jurassic Park”.
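A hypothetical few-shot prompt in the same spirit, with two completed examples before the final task; the specific movie-identification framing is an assumed stand-in for the article’s image, chosen so that “Jurassic Park” is the hoped-for completion:

```python
# A few-shot prompt: two completed Description/Movie pairs, then the
# task to complete. The hoped-for completion is "Jurassic Park".
few_shot_prompt = (
    "Name the movie described.\n"
    "Description: A shark terrorizes a beach town.\n"
    "Movie: Jaws\n"
    "Description: A boy befriends a stranded alien.\n"
    "Movie: E.T.\n"
    "Description: Cloned dinosaurs escape on an island theme park.\n"
    "Movie:"
)
print(few_shot_prompt)
```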

One-shot and few-shot learning are both examples of in-context learning (ICL). Intuitively, providing examples of the completed task helps the model better understand what you’re asking it to do, by giving it more context from which to form statistical representations of the meaning of the task request.

Smaller models may benefit from ICL, but be mindful of the context window: every example consumes part of it. If the model still doesn’t perform well, even with five or more examples, it may require fine-tuning (i.e., further model training on domain-specific examples).
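The trade-off above can be sketched as a helper that assembles a few-shot prompt while staying inside a budget. This is a minimal sketch under stated assumptions: the function name, the Input/Output template, and the character-count budget (a crude stand-in for a real tokenizer’s token count) are all illustrative, not from the article:

```python
def build_few_shot_prompt(task, examples, query, max_chars=2000):
    # Assemble a few-shot prompt from (input, output) example pairs.
    # max_chars is a rough character budget standing in for the
    # model's context window; a real version would count tokens
    # with the model's tokenizer instead.
    while True:
        body = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
        prompt = f"{task}\n{body}\nInput: {query}\nOutput:"
        if len(prompt) <= max_chars or not examples:
            return prompt
        examples = examples[1:]  # drop the oldest example to fit

prompt = build_few_shot_prompt(
    "Answer the arithmetic question.",
    [("2+2", "4"), ("3+3", "6")],
    "5+5",
)
print(prompt)
```

If the examples don’t fit the budget, the oldest ones are dropped first; an alternative design would rank examples by relevance to the query before trimming.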

Up Next

In the next note, we’ll look at different adjustments to model settings at inference time, and how they influence the model’s outputs.

Previous Posts in this Series

  1. What is Generative AI? + Key GenAI Vocabulary
  2. Intuition on How Transformers Work
