Changsha Ma

To Retrieve or Extend? Key Considerations and Research Insights on Using RAG and Long-Context LLMs
A significant development in large language models (LLMs) is the expansion of context windows — the span of text a model can consider at…
7 min read · Apr 21, 2024

How to prepare an instruction dataset to fine-tune LLM?
Fine-tuning large language models (LLMs) on custom datasets is a popular technique to adapt these powerful models for specific downstream…
5 min read · Mar 4, 2024

Create meaningful representations of data for RAG
In the evolving landscape of generative AI, retrieval augmented generation (RAG) has demonstrated immense promise for open-domain question…
5 min read · Feb 4, 2024

Custom metrics for instruction fine-tuning of LLMs
When fine-tuning large language models (LLMs) on downstream tasks, we often rely too much on generic metrics like loss/perplexity. While…
5 min read · Jan 1, 2024

Your RAG Needs Some Scaffolding
Retrieval Augmented Generation (RAG) has been a key method to infuse new knowledge into Large Language Models (LLMs). However, there is…
5 min read · Oct 15, 2023

LLM as Knowledge Base v.s. LLM with Knowledge Retrieval
Companies using large language models (LLMs) with their proprietary data face a choice: fine-tune the model with their private data and use…
6 min read · Sep 17, 2023

A Comprehensive Guidance of Prompt Engineering for Natural Language to SQL
Large Language Models (LLMs) have demonstrated a remarkable ability to understand natural language prompts and generate coherent responses…
4 min read · Aug 31, 2023

Testing ChatGPT Code Interpreter
By the time you come across this article, all ChatGPT Plus users will have been granted access to the Code Interpreter feature. This useful…
4 min read · Jul 9, 2023