Maxime Labonne in Towards Data Science:

- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (Jul 29). A beginner's guide to state-of-the-art supervised fine-tuning.
- Fine-tune Llama 3 with ORPO (Apr 19). A cheaper and faster unified fine-tuning technique.
- Create Mixtures of Experts with MergeKit (Mar 27). Combine multiple models into a single MoE.
- Merge Large Language Models with mergekit (Jan 8). Create your own models easily, no GPU required!
- Fine-tune a Mistral-7b model with Direct Preference Optimization (Jan 1). Boost the performance of your supervised fine-tuned models.
- ExLlamaV2: The Fastest Library to Run LLMs (Nov 20, 2023). Quantize and run EXL2 models.
- Quantize Llama models with GGUF and llama.cpp (Sep 4, 2023). GGML vs. GPTQ vs. NF4.
- A Beginner's Guide to LLM Fine-Tuning (Aug 30, 2023). How to fine-tune Llama and other LLMs with one tool.
- Graph Convolutional Networks: Introduction to GNNs (Aug 14, 2023). A step-by-step guide using PyTorch Geometric.