LoRA: Low-Rank Adaptation of Large Language Models

Gary Nakanelua
GTA: Generative Tech Advances
2 min read · Aug 7, 2023

Prompt engineering, fine-tuning, and model training are all viable options to get domain or task-specific results from a Large Language Model (LLM). One model training technique to consider is Low-Rank Adaptation of Large Language Models (LoRA).

Background

First introduced by Microsoft in the paper “LoRA: Low-Rank Adaptation of Large Language Models” (https://arxiv.org/abs/2106.09685), LoRA is a technique that makes language models cheaper and easier to adapt to different tasks. Imagine you have a big language model that knows a lot about language and can understand and generate sentences. This model is like a big brain that has been trained on a lot of data.

Let’s say you want to use this language model for different tasks, like summarizing articles or answering questions. The problem is that the model is so big, with so many parameters, that fine-tuning and storing a separate copy of it for every task quickly becomes difficult and expensive.

That’s where LoRA comes in. LoRA makes the language model more adaptable and efficient: instead of retraining the whole model for each task, it freezes the pre-trained weights and injects a pair of small trainable matrices into each layer it adapts. These low-rank matrices learn the task-specific adjustments while the original parameters stay untouched.
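To make that concrete, here is a minimal sketch of the idea in PyTorch. This is not the paper’s official implementation (that lives in loralib, linked below); the class name and the r and alpha hyperparameters are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # The pre-trained layer: frozen, never updated during fine-tuning.
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad = False
        # The small trainable matrices: A projects down to rank r, B back up.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen path plus scaled low-rank update: W x + (alpha / r) * B A x
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Because lora_B starts at zero, the adapted layer initially behaves exactly like the frozen one; training only ever touches the two small matrices.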

By using LoRA, organizations can cut the number of trainable parameters in a model by several orders of magnitude, making it much faster and cheaper to adapt for different tasks. It also reduces the GPU memory needed during fine-tuning, and each task’s checkpoint shrinks from a full model copy down to just the small adapter matrices.
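The arithmetic behind that claim: a full d × k weight matrix has d·k trainable parameters, while its rank-r update has only r·(d + k). For a single 4096 × 4096 projection at rank 8 (sizes picked purely for illustration):

```python
d, k, r = 4096, 4096, 8          # weight shape and LoRA rank (illustrative)
full = d * k                     # parameters touched by full fine-tuning
lora = r * (d + k)               # parameters in the low-rank update
print(full, lora, full // lora)  # 16777216 65536 256
```

Across a whole model the gap is dramatic; the paper reports roughly a 10,000× reduction in trainable parameters and a 3× reduction in GPU memory when adapting GPT-3 175B.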

Real World Example

Artist Greg Rutkowski opposed his artistic style being used to train Stable Diffusion (https://stablediffusionweb.com/), a popular generative AI image generator. So, the creators of Stable Diffusion removed his work (and that of other artists) from their training dataset.

“After vocally opposing the AI art trend, Stability AI — creators of the popular AI image generator Stable Diffusion — responded by removing his work from their dataset” — https://decrypt.co/150575/greg-rutkowski-removed-from-stable-diffusion-but-brought-back-by-ai-artists

In response, the community trained a LoRA specifically on Greg Rutkowski’s style and posted it to Civitai, a platform for sharing Stable Diffusion models: https://civitai.com/models/117635/greg-rutkowski-style-lora-sdxl

Learn More

For the Technically Curious

The whitepaper is 26 pages long, so I loaded it into a “ChatGPT for PDF” service to save reading time. You can ask questions about the paper, get a summary, and more at https://askyourpdf.com/chat/9734b319-ad88-407d-a913-b05adceeacf7.

For the Experimenters

You can check out Microsoft’s GitHub repo at https://github.com/microsoft/LoRA, which includes a Python package (loralib) and a few examples of integrating it with PyTorch models.
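As a rough sketch of the workflow (the layer sizes and rank below are made up; lora.Linear, mark_only_lora_as_trainable, and lora_state_dict come from the repo’s loralib package):

```python
import torch
import torch.nn as nn
import loralib as lora  # pip install loralib

# Hypothetical model: use lora.Linear in place of nn.Linear for the
# layers you want to adapt; r is the rank of the trainable update.
model = nn.Sequential(
    lora.Linear(768, 768, r=16),
    nn.ReLU(),
    lora.Linear(768, 10, r=16),
)

# Freeze everything except the LoRA matrices before training.
lora.mark_only_lora_as_trainable(model)

# ... train as usual ...

# Save only the small LoRA weights instead of a full model checkpoint.
torch.save(lora.lora_state_dict(model), "lora_checkpoint.pt")
```

To use the adapter later, you load the pre-trained weights first and then the LoRA checkpoint on top, both with strict=False.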
