Announcing Langchain’s integration with Javelin

sharathr
Published in Javelin Blog
Sep 21, 2023

Today, we’re thrilled to introduce an update: LangChain now ships with Javelin support, including LLM classes. Upgrade to LangChain 0.0.298 or later to use the Javelin AI Gateway for your LLM interactions.

With the power of Javelin’s gateway and LangChain, applications can now deliver faster, more secure, and more accurate language processing. Whether you want to complete a sentence, extract embeddings, or engage in an AI-assisted chat, this integration streamlines the process. Both synchronous and asynchronous methods are fully supported.

Seamless Setup in Seconds!

Setting up is straightforward. Begin by installing the javelin_sdk to establish the necessary connections with the Javelin AI Gateway:

pip install javelin_sdk

Langchain Completions Example

LangChain’s completion capabilities enable swift, relevant text suggestions, and routing them through the Javelin AI Gateway makes them even more tailored and efficient. Here’s how you can set it up:

from langchain.chains import LLMChain
from langchain.llms import JavelinAIGateway
from langchain.prompts import PromptTemplate

route_completions = "eng_dept03"

# Initialize the gateway client
gateway = JavelinAIGateway(
    gateway_uri="http://localhost:8000",  # replace with your Javelin endpoint URI
    route=route_completions,
    model_name="text-davinci-003",
)

# Example prompt for the chain
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Call your chains as usual...
llmchain = LLMChain(llm=gateway, prompt=prompt)
result = llmchain.run("podcast player")

print(result)

Langchain Embeddings Example

Extracting embeddings is just as simple. With the Javelin AI Gateway’s support, you can quickly derive embeddings from your documents or text:

from langchain.embeddings import JavelinAIGatewayEmbeddings

embeddings = JavelinAIGatewayEmbeddings(
    gateway_uri="http://localhost:8000",  # replace with your Javelin endpoint URI
    route="embeddings",
)

print(embeddings.embed_query("hello"))

Langchain Chat Example

Engage in real-time interactions using the combined capabilities of LangChain and Javelin:

from langchain.chat_models import ChatJavelinAIGateway
from langchain.schema import HumanMessage, SystemMessage

# System & human chat messages
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Artificial Intelligence has the power to transform humanity and make the world a better place"
    ),
]

# Set up the Javelin gateway chat client
chat = ChatJavelinAIGateway(
    gateway_uri="http://localhost:8000",  # replace with your Javelin endpoint URI
    route="mychatbot_route",
    model_name="gpt-3.5-turbo",
    params={"temperature": 0.1},
)

print(chat(messages))

The integration of LangChain with the Javelin AI Gateway marks a step forward. By combining the strengths of both platforms, we’re confident that users will experience significant benefits as applications move from prototypes to production. We invite everyone to explore, experiment, and enjoy the fruits of this exciting integration.

Javelin helps enterprises transition their LLM applications from prototype to production with operational monitoring, automatic retries, caching, and model routing, combined with robust policy and cost guardrails around model use. Javelin is built on a zero-trust security architecture to enable production usage, with detailed archiving for fine-tuning and compliance auditing.

Get your LLM applications ready for production: dive in now and explore the possibilities of this integration.
