Darren Oberst

SLIMs: small specialized models, function calling and multi-model agents
Why small, specialized function calling LLMs are the future
May 14

Thinking does not happen one token at a time…
One of the most important parts of generative AI is often overlooked, specifically that GPT-based models are auto-regressive causal…
May 14

6 Tips to Becoming a Master LLM Fine-tuning Chef
LLM fine-tuning best practices
Dec 18, 2023

RAG-Instruct Capabilities: "They Grow Up So Fast"
Comparing 1B vs 3B vs 7B parameter LLM model capabilities
Nov 7, 2023

How to Evaluate LLMs for RAG?
Introducing a new RAG-Instruct LLM benchmark performance test
Nov 5, 2023

Open Source LLMs in RAG
Most RAG workflow examples showcase GPT-4 or another leading proprietary LLM. One of the most common questions that we get is: how can I…
Oct 31, 2023

Techniques for Automated Source Citation Verification for RAG
Over the last year, retrieval augmented generation (RAG) has emerged as a popular LLM-based architecture to address one of the most common…
Oct 31, 2023

Evaluating LLM Performance in RAG Instruct Use Cases
While there are many solid and widely used testing benchmarks for LLMs (see the LLM Leaderboard on HuggingFace for the best examples)…
Oct 15, 2023

The Emerging LLM Stack for RAG
Every time that a new technology bursts on the scene, there is a scramble among thought leaders, analysts, investors and start-ups to try…
Oct 3, 2023