A Simple Fine Tuning Using Lamini

Feng Li
3 min read · Jan 23, 2024


A small pond in Rouge Park on Jan 1 2024

Lamini is a Python library for working with LLMs in AI workloads (there are similar libraries, such as vLLM). You can easily integrate Lamini into your AI applications.

Need to refresh GenAI concepts? Check out this Understanding Generative AI post.

In this post, we give Lamini a try to see how fine-tuning can be done. We use Colab to run our code.

First, installation and API key setup.

!pip install lamini typing-extensions==4.5.0

import lamini
lamini.api_key = "xxxx"

Then we can chat with Llama 2 from Hugging Face:

(1) Use base model

If we ask the "base model" meta-llama/Llama-2-7b-chat-hf two questions:

  • what is the capital of France?
  • what is your favorite animal?

it kind of goes wild, rambling on instead of answering directly.

(2) Compared to a fine-tuned model

But if we ask a "fine-tuned" model that has been through instruction tuning, it chats back to us as we'd expect.
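Why does instruction tuning make the difference? The Llama 2 chat models were trained on prompts wrapped in a specific template: the user message goes inside [INST] … [/INST] tags, with an optional <<SYS>> system block. A minimal sketch of that template (runners like Lamini's build this for you behind the scenes):

```python
def build_llama2_prompt(user_message, system_prompt=None):
    """Wrap a user message in the Llama 2 chat template."""
    if system_prompt:
        return (
            f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_llama2_prompt(
    "what is the capital of France?",
    system_prompt="You are a helpful assistant. Answer concisely.",
)
print(prompt)
```

A base model that never saw this template (or any instruction data) has no reason to treat our question as a turn in a conversation, which is why it wanders.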

(3) Let’s see how we can do a simple fine tuning using Lamini

def get_data():
    data = [
        {"user": "What's your favorite animal?", "output": "dog"},
        {"user": "What's your favorite color?", "output": "blue"},
        {"user": "What's your favorite animal?", "output": "cat"},
        {"user": "What's your favorite color?", "output": "red"},
        {"user": "What's your favorite animal?", "output": "rabbit"},
        {"user": "What's your favorite color?", "output": "yellow"},
    ]
    return data

from lamini import LlamaV2Runner

# Point the runner at the base chat model we want to fine-tune
llm2 = LlamaV2Runner("meta-llama/Llama-2-7b-chat-hf")

# Load our question/answer pairs and kick off a training job
data = get_data()
llm2.load_data(data)
results = llm2.train(finetune_args={'learning_rate': 1.0e-4})

# Once training finishes, query the fine-tuned model
llm2("What's your favorite animal?")
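Before handing records to load_data, it can help to sanity-check their shape, since a malformed record is easier to catch locally than to debug in a remote training job. This validate_records helper is our own illustration, not part of Lamini's API:

```python
def validate_records(records):
    """Check that every record is a dict with non-empty 'user' and 'output' strings."""
    problems = []
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            problems.append(f"record {i}: not a dict")
            continue
        for key in ("user", "output"):
            value = rec.get(key)
            if not isinstance(value, str) or not value.strip():
                problems.append(f"record {i}: missing or empty '{key}'")
    return problems

sample = [
    {"user": "What's your favorite animal?", "output": "dog"},
    {"user": "What's your favorite color?", "output": ""},  # bad record
]
issues = validate_records(sample)
print(issues)
```

An empty list means the data is well-formed and ready to pass to load_data.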

So we prepared some data. Lamini recommends at least 1,000 examples, but we'll try that another time by reading a bigger dataset from a data file.
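When we do scale up, a larger dataset would likely live in a file. Here is a minimal sketch (the JSON Lines format and field names are our assumption for illustration) of loading such a file into the same {"user": ..., "output": ...} shape that load_data expects:

```python
import json

def load_training_data(path):
    """Read one JSON object per line into the {'user', 'output'} format."""
    records = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            obj = json.loads(line)
            records.append({"user": obj["user"], "output": obj["output"]})
    return records
```

With that in place, data = load_training_data("train.jsonl") could replace get_data() in the snippet above.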

Then we load the data into the model we want to fine-tune and start training.

Lamini starts a container on its backend to run the fine-tuning job on Lamini's side.

Once the job is done, the new fine-tuned model is loaded automatically. We can ask the new model the same question to see how it has improved.

Of course, in our example we don't see much difference, given that the data used to fine-tune is way too small. But this is how developers can do fine-tuning under their full control.

By doing this simple exercise, we gain a better understanding of the libraries being developed to help build AI-driven applications.

On the other hand, cloud platforms provide tools for fine-tuning, RAG and more, for example from the AWS/Azure UI. We'll explore those and Snowflake's GenAI capabilities later, now with a better understanding of what's going on under the hood.

Happy Reading!


Feng Li

Software Engineer, playing with Snowflake, AWS and Azure. Snowflake Data Superhero 2024. SnowPro SME, Jogger, Hiker. LinkedIn: https://www.linkedin.com/in/fli01