Build LLM Apps With Open Source Models Using OctoAI + LangChain

Team Octo · OctoAI · Jul 13, 2023

OctoAI is now one of the most popular LangChain integrations!

The OctoAI Endpoint integration in LangChain allows you to easily create LangChain applications using large language models (LLMs) on OctoAI. You can use it to build against any model running on OctoAI, including the latest LLMs like Falcon, MPT, and Vicuna, as well as your own custom or fine-tuned models.
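As a minimal sketch, here is what calling an OctoAI-hosted LLM through the LangChain integration can look like. The endpoint URL and generation parameters below are placeholders; substitute the inference URL and API token from your own OctoAI account.

```python
import os
from langchain.llms.octoai_endpoint import OctoAIEndpoint

# Placeholder endpoint URL; use the inference URL of the LLM you provisioned on OctoAI.
llm = OctoAIEndpoint(
    octoai_api_token=os.environ["OCTOAI_API_TOKEN"],
    endpoint_url="https://your-endpoint.octoai.run/generate",
    model_kwargs={
        "max_new_tokens": 200,   # illustrative generation settings, not tuned values
        "temperature": 0.75,
        "top_p": 0.95,
    },
)

print(llm("Explain what a vector embedding is in one sentence."))
```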

LangChain connects LLMs to data and logic specific to an application. LLMs are trained on a broad but generic set of historical data, and through LangChain, applications can get rich, use-case-specific responses to queries. Early momentum around LLMs centered on GPT-3 and GPT-4, with LangChain used to add context to responses from different types of data sources and vector embeddings. However, powerful open source models are proving highly effective, and for certain use cases they can be the better choice.
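As a simple illustration of this pattern, the sketch below injects application-specific context into a prompt before calling the model. It reuses the llm object from the previous snippet, and the context string is a made-up example standing in for data the base model was never trained on.

```python
from langchain import LLMChain, PromptTemplate

# Prompt that grounds the model's answer in application-specific context.
template = """Answer the question using only the context below.

Context: {context}

Question: {question}
Answer:"""

prompt = PromptTemplate(template=template, input_variables=["context", "question"])
chain = LLMChain(llm=llm, prompt=prompt)

# In a real application the context would come from your own data sources
# or a vector store lookup rather than a hard-coded string.
answer = chain.run(
    context="Acme Corp's return window is 45 days for all hardware products.",
    question="How long do customers have to return a hardware product?",
)
print(answer)
```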

With the OctoAI Endpoint integration, you can easily provision your desired LLM on OctoAI, connect it to your LangChain data sources, and build your natural language application. Check out the MovieBot sample application we built with LangChain and LlamaIndex.
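The MovieBot sample pairs OctoAI with LlamaIndex for retrieval; the sketch below shows the same retrieval-augmented idea using only LangChain components (an in-memory FAISS vector store with Hugging Face embeddings), not the exact MovieBot stack. The two-document corpus is hypothetical, and llm is again the OctoAIEndpoint instance from the first snippet.

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Hypothetical document set; a real app would load movie metadata, reviews, etc.
docs = [
    "The Matrix (1999) is a science-fiction film about a simulated reality.",
    "Spirited Away (2001) is an animated fantasy film by Hayao Miyazaki.",
]

# Embed the documents and index them in an in-memory FAISS vector store.
vectorstore = FAISS.from_texts(docs, HuggingFaceEmbeddings())

# Answer questions by retrieving relevant documents and passing them to the OctoAI LLM.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.run("Which of these movies is animated?"))
```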

OctoAI’s model acceleration brings you a library of the fastest and most affordable foundation AI models, including the latest OSS LLMs like LLaMA-65B, MPT, Falcon, and Vicuna. As OctoAI continues to expand this library with the newest OSS LLMs as they are released, the OctoAI Endpoint integration in LangChain makes it easy for developers to adopt and apply these models in their own use cases.

You can install the latest LangChain release by following the steps in the LangChain installation guide. Get started and create API endpoints for your favorite new OSS LLMs with a free trial on OctoAI today.
