LangChain Framework — Shaping the Future of Digital Products through LLM-Dependent Software Development

Willian valle
AI Topics and discussions
3 min read · May 29, 2023

In the near future, we'll be flooded with products using generative language models. Educational platforms may gain fully automated teachers, Microsoft will continue to insert Copilots wherever it can, and we may finally see customer support chatbots that actually work. To reach this future, a lot of groundwork is being done, such as the development of libraries and frameworks that help us combine large language models (LLMs) with other computational resources or knowledge sources.

LangChain is a Python library designed to facilitate the development of software that depends on LLMs. It is a versatile tool that can help you build various types of applications, including question-answering systems over specific documents, chatbots, and agents.

To support building these applications, LangChain covers six main areas:

  1. LLMs and Prompts: This includes managing and optimizing prompts, providing a universal interface for all LLMs, and offering common utilities for working with LLMs.
  2. Chains: LangChain supports the creation of sequences of calls to LLMs or other utilities, extending beyond a single LLM call. The library offers a standard interface for chains, a variety of integrations with other tools, and end-to-end chains for common applications (items 1 and 2 are illustrated in the short sketch after this list).
  3. Data Augmented Generation: This involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Applications in this area include summarization of long texts and question-answering over specific data sources.
  4. Agents: In the context of LangChain, agents are LLMs that make decisions about which actions to take, execute those actions, observe the outcome, and repeat this process until completion. LangChain provides a standard interface for agents, a selection of ready-to-use agents, and examples of end-to-end agents.
  5. Memory: This refers to the ability to persist state between calls of a chain or agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains or agents that utilize memory.
  6. Evaluation: Evaluating generative models can be challenging using traditional metrics. LangChain introduces a novel way of evaluation using language models themselves and provides prompts and chains to assist in this process.
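
To make the first two areas concrete, here is a minimal sketch of a prompt template wired into a single-step chain. The prompt text and model settings are just placeholders, and the import paths follow the 0.0.x releases current at the time of writing:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A reusable prompt with a single input variable
prompt = PromptTemplate(input_variables=['topic'], template='Explain {topic} in one short paragraph.')
# The OpenAI wrapper reads OPENAI_API_KEY from the environment
llm = OpenAI(temperature=0)
# A single-step chain: fill in the prompt, then call the model
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run('vector embeddings'))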

Getting Started with LangChain

To start with LangChain, let's first set up the environment and install the required dependencies. LangChain is a Python library, so make sure you have Python 3.8 or higher installed.

To install LangChain, you can simply use pip:

pip install langchain
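
Since the example below uses OpenAI models, you will also need the openai client package, plus an OPENAI_API_KEY set in your environment:

pip install openai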

Now, let's dive into writing a basic program using LangChain. For this example, we will create a simple question-answering system that interacts with an external data source.

First, import the core classes. The module paths below match the 0.0.x releases current at the time of writing; LangChain's API moves quickly, so check the documentation if an import has been relocated:

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

Next, initialize your language model. Here we use the OpenAI wrapper, which defaults to a GPT-3-family completion model (text-davinci-003 at the time of writing) and reads your OPENAI_API_KEY from the environment:

llm = OpenAI(temperature=0)
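
Before going further, you can call the model directly to confirm that your key is picked up (the prompt itself is arbitrary):

print(llm('Say hello in one sentence.'))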

Let's assume we have a PostgreSQL database with a table named 'books', and we want to answer questions about it. This is the data augmented generation pattern from the list above: fetch data from an external source first, then use it in the generation step. LangChain's SQLDatabase utility wraps the connection; the connection string below is a placeholder for your own database:

db = SQLDatabase.from_uri('postgresql://localhost:5432/my_database', include_tables=['books'])

Now we can put the chain together. SQLDatabaseChain is exactly the kind of sequence of calls described earlier: given a natural-language question, it asks the LLM to write a SQL query, runs that query against the database, and then asks the LLM again to turn the result into an answer. The verbose flag prints each intermediate step:

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
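
If you want to sanity-check the chain before wiring it into an agent, you can already run it on its own (the question is just an example):

print(db_chain.run('How many books are in the database?'))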

LangChain also provides memory classes that persist state between calls of a chain or agent. This is particularly useful when the chain is used by an agent that maintains a conversation over multiple turns. A simple buffer that keeps the running chat history looks like this; we will attach it to the agent in the next step (the memory_key matches what the conversational agent's prompt expects):

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key='chat_history')

Finally, we can create an agent that uses the chain to answer questions. The chain is exposed to the agent as a Tool, and initialize_agent ties the tool, the LLM, and the memory together into a conversational agent. The tool name and description are up to you; they guide the agent's decision about when to use the tool:

from langchain.agents import initialize_agent, Tool, AgentType

tools = [Tool(name='books-db', func=db_chain.run, description='useful for answering questions about the books table')]
agent = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory, verbose=True)

Now you have an agent that can answer questions about the books in your database! You can ask it a question as follows:

response = agent.run('Who is the author of "1984"?')
print(response)
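
Because the agent holds a ConversationBufferMemory, a follow-up question can refer back to the previous turn; whether the answer is useful depends, of course, on what your books table actually contains:

followup = agent.run('And in what year was that book published?')
print(followup)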

That's it! This is a simple example, but LangChain is a powerful and flexible tool that can help you build complex applications involving LLMs.
