Maxime Labonne in Towards Data Science

- Fine-tune Llama 3 with ORPO — A cheaper and faster unified fine-tuning technique (8 min read, Apr 19, 2024)
- Create Mixtures of Experts with MergeKit — Combine multiple models into a single MoE (9 min read, Mar 27, 2024)
- Merge Large Language Models with mergekit — Create your own models easily, no GPU required! (11 min read, Jan 8, 2024)
- Fine-tune a Mistral-7b model with Direct Preference Optimization — Boost the performance of your supervised fine-tuned models (10 min read, Jan 1, 2024)
- ExLlamaV2: The Fastest Library to Run LLMs — Quantize and run EXL2 models (6 min read, Nov 20, 2023)
- Quantize Llama models with GGUF and llama.cpp — GGML vs. GPTQ vs. NF4 (9 min read, Sep 4, 2023)
- A Beginner’s Guide to LLM Fine-Tuning — How to fine-tune Llama and other LLMs with one tool (8 min read, Aug 30, 2023)
- Graph Convolutional Networks: Introduction to GNNs — A step-by-step guide using PyTorch Geometric (16 min read, Aug 14, 2023)
- 4-bit Quantization with GPTQ — Quantize your own LLMs using AutoGPTQ (10 min read, Jul 31, 2023)
- Fine-Tune Your Own Llama 2 Model in a Colab Notebook — A practical introduction to LLM fine-tuning (12 min read, Jul 25, 2023)