Darren Oberst
SLIMs: small specialized models, function calling and multi-model agents
Why small, specialized function calling LLMs are the future
5 min read · May 14, 2024
Thinking does not happen one token at a time…
One of the most important parts of generative AI is often overlooked, specifically that GPT-based models are auto-regressive causal…
9 min read · May 14, 2024
6 Tips to Becoming a Master LLM Fine-tuning Chef
LLM Fine-tuning Best Practices
9 min read · Dec 18, 2023
RAG-Instruct Capabilities: “They Grow up So Fast”
Comparing 1B vs 3B vs 7B parameter LLM model capabilities
8 min read · Nov 7, 2023
How to Evaluate LLMs for RAG?
Introducing New RAG Instruct LLM Benchmark Performance Test
6 min read · Nov 5, 2023
Open Source LLMs in RAG
Most RAG workflow examples showcase GPT-4 or another leading proprietary LLM. One of the most common questions that we get is: how can I…
5 min read · Oct 31, 2023
Techniques for Automated Source Citation Verification for RAG
Over the last year, retrieval augmented generation (RAG) has emerged as a popular LLM-based architecture to address one of the most common…
9 min read · Oct 31, 2023
Evaluating LLM Performance in RAG Instruct Use Cases
While there are many solid and widely used testing benchmarks for LLMs (see the LLM Leaderboard on HuggingFace for the best examples)…
10 min read · Oct 15, 2023
The Emerging LLM Stack for RAG
Every time that a new technology bursts on the scene, there is a scramble among thought leaders, analysts, investors and start-ups to try…
9 min read · Oct 3, 2023