Published in Towards Data Science:

- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth: A beginner's guide to state-of-the-art supervised fine-tuning (Jul 29)
- Fine-tune Llama 3 with ORPO: A cheaper and faster unified fine-tuning technique (Apr 19)
- Create Mixtures of Experts with MergeKit: Combine multiple models into a single MoE (Mar 27)
- Merge Large Language Models with mergekit: Create your own models easily, no GPU required! (Jan 8)
- Fine-tune a Mistral-7b model with Direct Preference Optimization: Boost the performance of your supervised fine-tuned models (Jan 1)
- ExLlamaV2: The Fastest Library to Run LLMs. Quantize and run EXL2 models (Nov 20, 2023)
- Quantize Llama models with GGUF and llama.cpp: GGML vs. GPTQ vs. NF4 (Sep 4, 2023)
- A Beginner's Guide to LLM Fine-Tuning: How to fine-tune Llama and other LLMs with one tool (Aug 30, 2023)
- Graph Convolutional Networks: Introduction to GNNs. A step-by-step guide using PyTorch Geometric (Aug 14, 2023)