Diverse topics related to artificial intelligence and machine learning, from new research to novel approaches and techniques.

How to fine-tune LLMs with Axolotl


When developing LLM applications, there are several options for improving the performance of the LLM on the task at hand. Given the data, cost, and hardware required to re-train an LLM from scratch, the most common options are to use a retrieval-augmented generation (RAG) system or to fine-tune the LLM (the two are not mutually exclusive). One of the tools that makes fine-tuning more systematic and easy to use is Axolotl. In this post, I review what I learned during the workshop (conference) Mastering LLMs for Developers and Data Scientists.

Figure 1. Axolotl image generated using GPT-4o.

What is axolotl?

According to [1], Axolotl is a post-training framework that is agnostic to tasks and models, and lets you fine-tune LLMs without having to interact directly with their internal parameters. Basically, you can think of it as an easy-to-use wrapper around low-level Hugging Face libraries, so that you can spend most of your time looking at your data to improve your LLMs.
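In practice, this means fine-tuning with Axolotl is driven by a single YAML configuration file rather than custom training code. As a rough sketch (the keys below follow the example configs shipped in the Axolotl repository, but the specific model id, dataset path, and hyperparameter values are placeholders of my own, not a tested recipe), a minimal LoRA fine-tuning config might look like:

```yaml
# Illustrative Axolotl config for LoRA fine-tuning.
# Keys mirror the examples/ directory of the Axolotl repo;
# all values here are placeholders.
base_model: NousResearch/Llama-2-7b-hf   # any Hugging Face model id
load_in_8bit: true                       # quantize to fit smaller GPUs

datasets:
  - path: ./my_data.jsonl                # your instruction dataset
    type: alpaca                         # prompt format of the data

adapter: lora                            # parameter-efficient fine-tuning
lora_r: 8
lora_alpha: 16
lora_target_modules: [q_proj, v_proj]

num_epochs: 3
micro_batch_size: 2
learning_rate: 0.0002
output_dir: ./outputs/lora-out
```

The point of this design is that swapping models, datasets, or training strategies becomes an edit to a text file, which keeps experiments reproducible and easy to compare.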

Why use axolotl?

From the github repo [2], Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.

Features:

  • Train various Huggingface models such as llama, pythia…
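Once a config file is written, training is launched from the command line. The exact invocation depends on your Axolotl version; the commands below follow the README in [2] and are a sketch rather than a tested recipe (the config path `my_config.yml` is a placeholder):

```shell
# Install Axolotl, then launch fine-tuning with a YAML config.
pip install axolotl

# Releases from around the workshop were launched through accelerate:
accelerate launch -m axolotl.cli.train my_config.yml

# More recent releases also ship an `axolotl` CLI entry point:
axolotl train my_config.yml
```

Either way, Axolotl reads the config, downloads the base model and tokenizer from Hugging Face, and runs the fine-tuning loop for you.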



Published in about ai



Written by Edgar Bermudez

PhD in Computer Science and AI. I write about neuroscience, AI, and Computer Science in general. Enjoying the here and now.
