Tess Dejaeghere · TEXT TALES: zero-shot NER with an open-source LLM (Mixtral 8x7b) for DH · Oct 7
👤 WHO: this blog is part of a series on TECH TALES — a blogging initiative by the Ghent Center for Digital Humanities (GhentCDH). Our…

syromin, in Generative AI · Knowledge Graph Extraction & Visualization with local LLM from Unstructured Text: a History example · Apr 16
Motivation and context

Matthew Gunton, in Towards Data Science · Understanding Direct Preference Optimization · Feb 18
This blog post will look at the “Direct Preference Optimization: Your Language Model is Secretly a Reward Model” paper and its findings.

Tess Dejaeghere · TEXT TALES: few-shot NER with an open-source LLM (Mixtral 8x7b) for DH · Oct 7
👤 WHO: this blog is part of a series on TECH TALES — a blogging initiative by the Ghent Center for Digital Humanities (GhentCDH). Our…

Matthew Gunton, in Towards Data Science · Understanding the Sparse Mixture of Experts (SMoE) Layer in Mixtral · Mar 21
This blog post will explore the findings of the “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer” paper…
Harshal Dharpure · Understanding Mistral and Mixtral: Advanced Language Models in Natural Language Processing · Apr 8
Mistral and Mixtral are large language models (LLMs) developed by Mistral AI, designed to handle complex NLP tasks such as text generation…
kirouane Ayoub, in GoPenAI · Building a Custom Mixture of Experts Model for our Darija: From Tokenization to Text Generation · Jul 25
Let’s build an MoE model from scratch.
Fireworks.ai · Fireworks Raises the Quality Bar with Function Calling Model and API Release · Dec 20, 2023
Fireworks conducts the alpha launch of our function calling model and API, with quality reaching GPT-4 and surpassing open-source models.