Generative AI with LLM Complete Interview Guide: Part 2

Narender Kumar

Large Language Models (LLMs) have revolutionized natural language processing and generation. However, achieving optimal performance across different tasks and domains often requires specific fine-tuning techniques. In this article, we will explore the concepts of Instruction Fine-Tuning, Catastrophic Forgetting, and Parameter-Efficient Fine-Tuning, and discuss how to evaluate a fine-tuned model.

What is Instruction Fine-Tuning?

  • Through in-context learning (prompting) alone, only a certain level of performance can be achieved.
  • Few-shot learning may not work for smaller LLMs, and the examples take up valuable space in the context window.
  • Fine-Tuning is a supervised learning process in which you use a labelled dataset of prompt-completion pairs to adjust the weights of an LLM.
  • Instruction Fine-Tuning is a strategy where the LLM is trained on examples of instructions and how it should respond to them. Instruction Fine-Tuning leads to improved performance on the instructed tasks (see the sketch after this list).
  • Full Fine-Tuning updates all of the LLM's parameters. It requires enough memory to store and process not only the weights but also the gradients, optimizer states, and forward activations.
Figure: Fine-Tuning
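To make the idea concrete, here is a minimal sketch of instruction fine-tuning using the Hugging Face Transformers Trainer. The model (google/flan-t5-small), the dataset (samsum), the prompt template, and the hyperparameters are all illustrative assumptions on my part, not a prescribed recipe.

```python
# Minimal instruction fine-tuning sketch (illustrative choices throughout).
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "google/flan-t5-small"  # a small instruction-tuned seq2seq LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("samsum")  # dialogue-summary pairs (assumed task)

def to_instruction_format(batch):
    # Wrap each raw input in an instruction template, then tokenize the
    # resulting prompt-completion pair for supervised fine-tuning.
    prompts = [
        "Summarize the following conversation.\n\n" + d + "\n\nSummary:"
        for d in batch["dialogue"]
    ]
    inputs = tokenizer(prompts, max_length=512, truncation=True,
                       padding="max_length")
    labels = tokenizer(batch["summary"], max_length=128, truncation=True,
                       padding="max_length")
    # Mask padding tokens so they are ignored by the loss.
    inputs["labels"] = [
        [(t if t != tokenizer.pad_token_id else -100) for t in seq]
        for seq in labels["input_ids"]
    ]
    return inputs

tokenized = dataset.map(
    to_instruction_format,
    batched=True,
    remove_columns=dataset["train"].column_names,
)

args = TrainingArguments(
    output_dir="instruct-ft",
    learning_rate=1e-5,
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()  # full fine-tuning: every model parameter is updated
```

As a rough sense of why full fine-tuning is memory-hungry: with the Adam optimizer in 32-bit precision, each parameter typically carries about 4 bytes for the weight, 4 bytes for its gradient, and 8 bytes of optimizer state, so a 1-billion-parameter model already needs on the order of 16 GB before activations are counted.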
