Published in Towards Data Science:

- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth - A beginner's guide to state-of-the-art supervised fine-tuning (Jul 29, 2024)
- Fine-tune Llama 3 with ORPO - A cheaper and faster unified fine-tuning technique (Apr 19, 2024)
- Create Mixtures of Experts with MergeKit - Combine multiple models into a single MoE (Mar 27, 2024)
- Merge Large Language Models with mergekit - Create your own models easily, no GPU required! (Jan 8, 2024)
- Fine-tune a Mistral-7b model with Direct Preference Optimization - Boost the performance of your supervised fine-tuned models (Jan 1, 2024)
- ExLlamaV2: The Fastest Library to Run LLMs - Quantize and run EXL2 models (Nov 20, 2023)
- Quantize Llama models with GGUF and llama.cpp - GGML vs. GPTQ vs. NF4 (Sep 4, 2023)
- A Beginner's Guide to LLM Fine-Tuning - How to fine-tune Llama and other LLMs with one tool (Aug 30, 2023)
- Graph Convolutional Networks: Introduction to GNNs - A step-by-step guide using PyTorch Geometric (Aug 14, 2023)