Changsha Ma, "How to Determine If You Will Benefit from Fine-Tuning an LLM": In the fast-paced world of Generative AI, pre-trained large language models (LLMs) have become go-to tools for a wide range of… (Jun 23)
Changsha Ma, "To Retrieve or Extend? Key Considerations and Research Insights on Using RAG and Long-Context LLMs": A significant development in large language models (LLMs) is the expansion of context windows — the span of text a model can consider at… (Apr 21)
Changsha Ma, "How to Prepare an Instruction Dataset to Fine-Tune an LLM": Fine-tuning large language models (LLMs) on custom datasets is a popular technique to adapt these powerful models for specific downstream… (Mar 4)
Changsha Ma, "Create Meaningful Representations of Data for RAG": In the evolving landscape of generative AI, retrieval augmented generation (RAG) has demonstrated immense promise for open-domain question… (Feb 4)
Changsha Ma, "Custom Metrics for Instruction Fine-Tuning of LLMs": When fine-tuning large language models (LLMs) on downstream tasks, we often rely too much on generic metrics like loss/perplexity. While… (Jan 1)
Changsha Ma, "Your RAG Needs Some Scaffolding": Retrieval Augmented Generation (RAG) has been a key method to infuse new knowledge into Large Language Models (LLMs). However, there is… (Oct 15, 2023)
Changsha Ma, "LLM as Knowledge Base vs. LLM with Knowledge Retrieval": Companies using large language models (LLMs) with their proprietary data face a choice: fine-tune the model with their private data and use… (Sep 17, 2023)
Changsha Ma, "A Comprehensive Guide to Prompt Engineering for Natural Language to SQL": Large Language Models (LLMs) have demonstrated a remarkable ability to understand natural language prompts and generate coherent responses… (Aug 31, 2023)
Changsha Ma, "Testing ChatGPT Code Interpreter": By the time you come across this article, all ChatGPT Plus users will have been granted access to the Code Interpreter feature. This useful… (Jul 9, 2023)