Fine-Tuning Large Language Models with FELT Labs — Full Guide

Fine-tuning LLMs with FELT Labs using Ocean Protocol technology for less than $8

Břetislav Hájek · Aug 14 · 7 min read


Exciting updates from FELT Labs: you can now fine-tune large language models (LLMs) on your private data using our algorithms, without setting up your own infrastructure. For that, we will use Ocean Protocol technology and the Polygon blockchain. FELT Labs provides all the necessary tools for running the fine-tuning; you only need to prepare your data and pay for the computation time.

Does your project have a specific need for fine-tuning LLMs? Contact us, and our team will help you with that!

All materials (dataset, algorithm, docker file, etc.) used in this tutorial are published in the following GitHub repository:

Why is blockchain involved?

Ocean Protocol is building an ecosystem for securely monetizing your data. FELT Labs is building data science tools on top of this data marketplace. Therefore, not only can you fine-tune an LLM on your data, but you can also profit from others using your data to fine-tune their models. All that can be done without revealing your data. That way, data owners can be fairly compensated for using their data for training AI models.

Fine-Tuning Guide

The following guide will walk you through all the steps, from preparing the dataset and fine-tuning the model to using it for inference. Since our solution relies on blockchain technology, there are some prerequisites so that you can pay for the computation. Getting the prerequisites done might be challenging for those using blockchain for the first time, but once you complete them, the rest of the tutorial will be smooth sailing.

1. Prerequisites

We will use Polygon Mainnet. Therefore, you must have a wallet (we currently support MetaMask and WalletConnect). The wallet must hold some MATIC to pay gas fees (1 MATIC should be enough). Further, you need USDC tokens to pay for the computation — around 7.2 USDC for one run of fine-tuning.

There are many possible ways to obtain these two tokens; you can use a crypto exchange or an on-ramp solution. Feel free to ask in the comments for more details (maybe I'll write a separate tutorial on this).

To proceed to the next steps, your wallet should hold the following assets:

  • MATIC: 1+
  • USDC: 7.2+

2. Preparing dataset

Next up, you will need to prepare your data. This is the most important step of the whole tutorial. The data needs to be in JSON format, looking as follows:

[
  {"text": "### Question: What is FELT Labs? ### Response: FELT Labs is a data science company developing tools for working with distributed data."},
  {"text": "### Question: What is Ocean protocol? ### Response: ..."},
  {"text": "### Question: ... ### Response: "}
]

A few things to notice:

  • For now, the fine-tuning targets question–response data. Each example should contain a ### Question: … and a ### Response: … part (we will add support for other tasks as well)
  • The JSON file contains a list of objects, each with a single key "text" whose value is the expected training text
  • The JSON file can contain as many training objects as needed
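The format above can be generated and sanity-checked with a short script. Here is a minimal sketch; the file name `dataset.json` and the example pairs are my own placeholders:

```python
import json

# Question/response pairs to fine-tune on (placeholder examples).
examples = [
    ("What is FELT Labs?",
     "FELT Labs is a data science company developing tools for working with distributed data."),
    ("What is Ocean protocol?",
     "Ocean Protocol is an ecosystem for securely monetizing data."),
]

# Each training object has a single "text" key holding the full prompt + answer.
dataset = [{"text": f"### Question: {q} ### Response: {a}"} for q, a in examples]

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)

# Sanity check: every object has exactly one "text" key containing both markers.
with open("dataset.json") as f:
    for item in json.load(f):
        assert set(item) == {"text"}
        assert "### Question:" in item["text"] and "### Response:" in item["text"]
```

Running the script writes a valid JSON array that matches the structure expected by the algorithm.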

Once your file is ready, you must publish it on Ocean protocol. There are different ways to publish assets on Ocean protocol, but I will describe the simplest one here. For more details, visit Ocean’s documentation.

We will publish the dataset using the file URL. For that, you need a unique URL pointing to your file. We will publish the file on the GitHub repository and obtain the raw file URL as shown below:

Once the URL is ready, we can publish the new asset through the Ocean marketplace.

  1. Open the Ocean marketplace: Connect your wallet in the top-right corner and select the Polygon Mainnet network.
  2. On the first page, you must select dataset and fill in the rest of the fields as you like.
  3. On the second page, it’s important to set the following:
    - Access Type: Compute
    - Provider URL:
    - File:
    - Timeout: can be set as you like (it represents how long the dataset remains available to the buyer for computation after purchase)
  4. In the pricing section, select your preferred price. I will set it to free for simplicity.
  5. Finally, check the preview and hit submit!
Screenshot from correct access configuration — second page of publishing.

Once your dataset is ready, you should have a unique address to view it. In our case, it looks like this:

3. Running Fine-Tuning

Now the fun part begins: running the training! The first step is to go to: In the top-right corner, first click the connect wallet button and, after that, the login button. Ensure your wallet is connected to the correct network, Polygon Mainnet. You will need this to store information about the trainings you start and access them later.

Once logged in, start by filling in the name of your training job (the name is just for your reference). Then search for the dataset you previously published; in the case of this guide, we will search for the dataset “LLM Fine-tuning tutorial dataset” and select it for training. In the next step, you will select the LLM algorithm.

Three steps of starting fine-tuning are selecting a dataset, selecting the algorithm, and setting hyperparameters.

Finally, you can pick the training hyperparameters. The most important one is the number of training steps. Larger datasets need more training steps, and more training steps also mean longer training time. The maximum is currently capped at 500 due to computing time limits.
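To pick a reasonable value, it helps to relate the step count to dataset size. Below is a rough sizing helper, under the assumption that roughly one training example is consumed per step; the `suggest_steps` name and the three-pass heuristic are my own, not part of the FELT Labs UI:

```python
import json

def suggest_steps(dataset_path: str, passes: int = 3, cap: int = 500) -> int:
    """Suggest a step count: roughly one example per step, several passes,
    clamped to the current FELT Labs cap of 500 steps."""
    with open(dataset_path) as f:
        n_examples = len(json.load(f))
    return min(n_examples * passes, cap)
```

For example, a 100-example dataset with three passes suggests 300 steps, comfortably under the cap; anything larger simply clamps to 500.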

Once you are done, click submit. A modal will pop up and walk you through starting the computation. If you are starting the computation via browser, you will go through the following steps:

  1. Purchase dataset (approve + purchase transaction)
  2. Purchase algorithm (approve + purchase transaction)
  3. Sign compute to start

4. Using Model for Inference

To monitor the progress of the computation, go to:

The page for monitoring the status of the compute job and obtaining the download command.

You should see the name and progress of the job you started in the previous step. Once finished, it’s time to put our model through some testing. We already prepared a notebook for testing the model and running the inference. The notebook can be found here:

To run the notebook, you must select a GPU runtime in Colab. Then you will need to copy the download command for your model. The command can be obtained by clicking the download model button next to the finished computation job. Paste this command into the notebook at the respective place. Don’t forget to include an exclamation mark (!) at the beginning of the command. Then you can go ahead and run the notebook. To feed in your prompt, go to the inference section and change the input_text variable.


For our model, we use the following input prompt: "### Question: What can you tell me about FELT Labs? ### Response: ". This prompt isn’t directly included in our dataset. Therefore, it’s interesting to see how the fine-tuned model performs compared to the original model.
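Wrapping a question in this template, and stripping the echoed prompt from the model’s output, can be factored into two small helpers. These are hypothetical names of my own, mirroring the template used in the training data:

```python
def build_prompt(question: str) -> str:
    # Same template as the training data: question first, empty response slot.
    return f"### Question: {question} ### Response: "

def extract_response(generated: str, prompt: str) -> str:
    # Causal LMs typically echo the prompt; keep only the new continuation.
    if generated.startswith(prompt):
        return generated[len(prompt):].strip()
    return generated.strip()
```

In the notebook, you could assign `input_text = build_prompt("What can you tell me about FELT Labs?")` and pass the generated text through `extract_response` before displaying it.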

Fine-tuned model response: “FELT Labs is a data science company that provides a suite of tools for working with private and distributed data. Our focus is on federated learning, which allows you to train machine learning models or perform data analytics across multiple datasets while…”

Original model response: “FELT Labs helps companies and organizations move into the 21st century with state-of-the-art software development to boost revenue, optimize customer engagement, and increase digital marketing efficiency. That’s not all that “FELT Labs” stands for though…”

As we can see, the fine-tuned model produces an answer close to what we have in our dataset compared to the original model, which produces a very generic answer.


Fine-tuning LLMs is an exciting field. It’s definitely challenging to set up the whole process, including the training infrastructure. At FELT Labs, we are excited about our progress in this area, as it opens many new possibilities. Currently, the fine-tuning algorithm is limited to a single type of data and model. However, extending it to other models and datasets is relatively easy now that the infrastructure is in place.

Does your project have a specific need for fine-tuning LLMs? Contact us, and our team will help you with that!
