LangChain: A Framework for Harnessing LLM Potential

Kushal V
Published in TechHappily
12 min read · Jun 21, 2023

In this article, we will explore the LangChain framework, a powerful tool that allows users to unlock the full potential of LLMs (Large Language Models). LangChain not only provides the essential building blocks but also offers a high level of abstraction, making it easier to develop applications on this foundation.

Image Credit: LangChain

The LangChain framework offers modular abstractions for the essential components needed to work with language models. These components are designed to be user-friendly and can be used independently of the rest of the LangChain framework. The framework also offers use-case-specific chains, which assemble these components in specific ways to effectively address particular use cases. These chains serve as a higher-level interface, making it easy for users to start working on a specific use case.

Prompt Engineering

LangChain simplifies the foundational tasks required for prompt engineering, including template creation, LLM model invocation, and output data parsing. It achieves this through its modules of abstraction, which provide a streamlined approach to these essential components.

By leveraging LangChain’s modules, users can easily create templates that define the structure and format of their inputs. These templates serve as a foundation for interacting with LLM models.

Furthermore, LangChain simplifies the parsing of output data, allowing users to efficiently extract the desired information or insights. This abstraction layer eliminates the complexities associated with handling and interpreting LLM outputs, enabling users to focus on their prompt engineering tasks with ease.

Image Credit: LangChain

We will be implementing the same use case that we covered in the previous article. LangChain offers ChatPromptTemplate, which can be used to write prompts for ChatGPT. It also lets us serialize a prompt so that it can be reused in any application.
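As an aside, here is a minimal sketch of what prompt serialization can look like, using the plain PromptTemplate class and the load_prompt helper (the file name below is only illustrative):

from langchain.prompts import PromptTemplate, load_prompt

# Build a reusable template and save it to disk as JSON
review_prompt = PromptTemplate.from_template("Summarize the product review: {text}")
review_prompt.save("review_prompt.json")

# Later, in any application, load the template back and format it
reloaded_prompt = load_prompt("review_prompt.json")
print(reloaded_prompt.format(text="Great water flosser, the battery easily lasts a week."))

Back to our review use case: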

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

# Initialising the ChatGPT LLM
chat = ChatOpenAI(temperature=0.0)

# Product Review - Input Data

review = """ Writing this review after using it for a couple of months now. It can take some time to get used to since the water jet is quite powerful. It might take you a couple of tries to get comfortable with some modes. Start with the teeth, get comfortable and then move on to the gums. Some folks may experience sensitivity. I experienced it for a day or so and then went away.
It effectively knocks off debris from between the teeth especially the hard to get like the fibrous ones. I haven't seen much difference in the tartar though. Hopefully, with time, it gets rid of it too.
There are 3 modes of usage: normal, soft and pulse. I started with soft then graduated to pulse and now use normal mode. For the ones who are not sure, soft mode is safe as it doesn't hit hard. Once you get used to the technique of holding and using the product, you could start experimenting with the other modes and choose the one that best suits you.
One time usage of the water full of tank should usually be sufficient if your teeth are relatively clean. If, however, you have hard to reach spaces with buildup etc. it might require a refill for a usage.
If you don't refill at all, one time full recharge of the battery in normal mode will last you 4 days with maximum strength of the water jet. If you refill it once, it'll last you 2 days after which the strength of the water jet reduces.
As for folks who are worried about the charging point getting wet, I accidentally used it once without the plug for the charging point and yet it worked fine and had no issues. Ideally keep the charging point covered with the plug provided with the product.
It has 2 jet heads (pink and blue) and hence the product can be used by 2 people as long as it's used hygienically. For charging, it comes with a USB cable without the adapter which shouldn't be an issue as your phone adapter should do the job.
I typically wash the product after every usage as the used water tends to run on the product during usage.
One issue I see is that the clasp for the water tank could break accidentally if not handled properly which will render the tank useless. So ensure to not keep it open unless you are filling the tank.
"""

# Prompt

prompt = """
Your task is to provide insights for the product review \
on an e-commerce website, which is delimited by \
triple colons.

Perform the following tasks:
1. Identify the product.
2. Summarize the product review, in up to 50 words.
3. Analyze the sentiment of the review - positive/negative/neutral.
4. Extract topics that the user didn't like about the product.
5. Identify the name of the company; if not mentioned, return "not mentioned".

Format the output using JSON keys.

Product review: :::{text}:::
"""

prompt_template = ChatPromptTemplate.from_template(prompt)

review_message = prompt_template.format_messages(text = review)

output = chat(review_message)
print(output.content)

print(type(output.content)) # gives string output

The output of the above prompt:

'{
"Product": "Water Flosser",
"Review": "Effective in removing debris from between teeth, but may take time to get used to. Has 3 modes of usage and lasts for 2-4 days on a single charge. Charging point can get wet but works fine. Clasp for water tank could break if not handled properly.",
"Sentiment": "Neutral",
"Negative Feedback": "Clasp for water tank could break if not handled properly.",
"Company": "Not mentioned"
}'

The ChatOpenAI model returns its output as a string. In order to work with the JSON in that output, we need to initialize output parsers for the given prompt.

The ResponseSchema component enables you to define the specific structure and fields you want in the output. Once the output is generated by the application, the StructuredOutputParser comes into play: it is responsible for processing the output data according to the defined schema. There are different types of parsers available in LangChain (list, datetime, enum, etc.) that can be utilized as the use case demands.
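As a quick illustration of one of these simpler parsers, a comma-separated list parser can be used on its own (a minimal sketch; the sample string is made up):

from langchain.output_parsers import CommaSeparatedListOutputParser

list_parser = CommaSeparatedListOutputParser()

# Instructions you would append to a prompt so the model replies as a list
print(list_parser.get_format_instructions())

# Parsing a model reply into a Python list
print(list_parser.parse("toothbrush, water flosser, mouthwash"))
# ['toothbrush', 'water flosser', 'mouthwash']

For our review use case, we stick with ResponseSchema and StructuredOutputParser: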

from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser


# Initialising the schema for each output
product_schema = ResponseSchema(name="Product",
                                description="Identify the product name")
review_schema = ResponseSchema(name="Review",
                               description="Summarize the review")
sentiment_schema = ResponseSchema(name="Sentiment",
                                  description="Identify the sentiment of the review - positive/negative/neutral")
topics_schema = ResponseSchema(name="Topics",
                               description="Extract topics that the user didn't like about the product.")
company_schema = ResponseSchema(name="Company",
                                description='Identify the name of the company, if not then "not mentioned"')


# Combining the defined schemas into a list
response_schemas = [product_schema, review_schema, sentiment_schema, topics_schema, company_schema]

output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

# Generating the format instructions - that will be added to prompt
format_instructions = output_parser.get_format_instructions()

# Newly defined prompt that includes format instructions

prompt = """
Your task is to provide insights for the product review \
on an e-commerce website, which is delimited by \
triple colons.

Perform the following tasks:
1. Identify the product.
2. Summarize the product review, in up to 50 words.
3. Analyze the sentiment of the review - positive/negative/neutral.
4. Extract topics that the user didn't like about the product.
5. Identify the name of the company; if not mentioned, return "not mentioned".

Format the output using JSON keys.

Product review: :::{text}:::

{format_instructions}
"""

prompt_template = ChatPromptTemplate.from_template(prompt)
review_message = prompt_template.format_messages(text = review, format_instructions= format_instructions)
output = chat(review_message)

# Passing the output through the parser

output_dict = output_parser.parse(output.content)

print(output_dict['Product'])

The only difference, apart from initializing the individual schemas and the output parser, is the addition of the format instructions to the prompt.
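A quick sanity check confirms that the parsed output is now a regular Python dictionary rather than a string:

print(type(output_dict))  # <class 'dict'>

# Iterate over all parsed fields
for key, value in output_dict.items():
    print(f"{key}: {value}")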

We have successfully used LangChain modules for the product-review use case.

Different Types of Chains

In more complex applications, relying solely on a single LLM may not be sufficient. Instead, it becomes necessary to chain multiple LLMs together, either with each other or with additional components.

By chaining LLMs, various components can be combined to create a cohesive and comprehensive application. For instance, a chain can be created to receive user input, format it using a PromptTemplate, and then pass the formatted prompt to an LLM to generate the desired output. This chaining approach empowers the development of sophisticated applications capable of handling intricate tasks.

We will be going through the following chains, which will give us a good foundation for solving complex use cases:

  1. LLMChain
  2. SequentialChain

The LLMChain serves as a fundamental building block in the LangChain framework, offering enhanced functionality around language models. It is extensively utilized throughout LangChain, including in other chains and agents.

An LLMChain comprises two key components: a PromptTemplate and a language model, which can be either an LLM (Large Language Model) or a chat model (like the ChatOpenAI model we used in the section above).
The chain operates by formatting the prompt template using the provided input key values, along with any available memory key values. The resulting formatted string is then passed to the LLM, which generates the output based on the provided prompt.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0.9)

prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe a company that makes {product}?"
)

chain = LLMChain(llm=llm, prompt=prompt)

product = "Ice-cream"
chain.run(product)

# Output : 'Scoops Ice Cream Co.'

input_list = [
    {"product": "shoes"},
    {"product": "computer"},
    {"product": "window"}
]

chain.apply(input_list)

# Output : [{'text': '"Stride Footwear" would be a good name to describe a company that makes shoes.'},
# {'text': 'Techtronics.'},
# {'text': 'WindowWorks.'}]

In the above code, we can see how an LLMChain can be utilized to perform basic prompting as well as take in multiple inputs.
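As mentioned earlier, the chain can also pull values from memory. Here is a minimal sketch of that, assuming a ConversationBufferMemory and a hypothetical naming prompt that exposes a {chat_history} slot:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory

# Hypothetical prompt with a memory slot ({chat_history}) alongside the input key
template = """You are a naming assistant.
{chat_history}
Human: Suggest a name for a company that makes {product}.
Assistant:"""
prompt = PromptTemplate.from_template(template)

memory = ConversationBufferMemory(memory_key="chat_history")
chain = LLMChain(llm=ChatOpenAI(temperature=0.0), prompt=prompt, memory=memory)

chain.run(product="ice-cream")  # first call runs with an empty history
chain.run(product="shoes")      # second call sees the earlier exchange via memory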

Taking it one step further, chains are often used in a sequential fashion in many applications: the output of the first chain becomes an input to the second chain, and so on.

To implement this functionality, LangChain provides:

  1. SimpleSequentialChain: The simplest form of sequential chain, where each step has a single input/output, and the output of one step is the input to the next.
  2. SequentialChain: A more general form of sequential chain, allowing for multiple inputs/outputs.

SimpleSequentialChain

In the below use case, we will perform two tasks in sequence: summarize the product review, and then use the summary to identify the sentiment of the review. Since it is a SimpleSequentialChain, we will get only one output (the sentiment, in this case).

from langchain.chains import SimpleSequentialChain

llm = ChatOpenAI(temperature=0.0)

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "Summarize the product review: {review}"
)

# chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)


# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Identify the sentiment of the summary: {summary}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)

# Initializing a simple sequential chain
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)

# Single Input -> Single Output

review = """3 MODES & ROTATABLE NOZZLE DESIGN- This portable oral irrigator comes with Normal, Soft and Pulse modes which are best for professional use. The 360° rotatable jet tip design allows easy cleaning helping prevent tooth decay, dental plaque, dental calculus, gingival bleeding and dental hypersensitivity.
DUAL WATERPROOF DESIGN- The IPX7 waterproof design is adopted both internally and externally to provide dual protection. The intelligent ANTI-LEAK design prevents leakage and allows the dental flosser to be used safely under the running water.
UPGRADED 300 ML LARGE CAPACITY WATER TANK- The new water tank is the largest capacity tank available and provides continuous flossing for an entire session. The removable full-opening design allows thorough cleaning thus preventing formation of bacteria and limescale deposits.
CORDLESS & QUALITY ASSURANCE- Cordless and lightweight power irrigator comes with a powerful battery that lasts upto 14 days on a single charge
RECHARGEABLE & QUALITY ASSURANCE- Cordless and lightweight power irrigator comes with a powerful battery that lasts upto 14 days on a single charge
"""

overall_simple_chain.run(review)

# Output : ['Positive sentiment.']

SequentialChain

To extend the capability of SimpleSequentialChain, we will implement its general version, which can take multiple inputs and produce multiple outputs. Such chains are easy to use and compartmentalize tasks, making each step easy to debug and modify without affecting the rest of the pipeline.

In addition to identifying the sentiment and the topics that caused the customer issues with the product, we will also draft a follow-up response to the customer.

from langchain.chains import SequentialChain

llm = ChatOpenAI(temperature=0.0)

# prompt template 1: summarize the product review - output (summary)
first_prompt = ChatPromptTemplate.from_template(
    "Summarize the product review:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= summary
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="summary")

# prompt template 2: identify the sentiment from the summary
second_prompt = ChatPromptTemplate.from_template(
    "Identify the sentiment of the review"
    "\n\n{summary}"
)
# chain 2: input= summary and output= sentiment
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="sentiment")

# prompt template 3: identify the topics that are causing issues
third_prompt = ChatPromptTemplate.from_template(
    "Identify the topics which are causing issues for the customer:\n\n{Review}"
)
# chain 3: input= Review and output= topics
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="topics")


# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the customer based on the sentiment"
    " and topics:\n\nSentiment: {sentiment}\nTopics: {topics}"
)
# chain 4: input= sentiment, topics and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message")

# overall_chain: input= Review
# and output= summary, sentiment, topics, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["summary", "sentiment", "topics", "followup_message"],
    verbose=False
)

review = """
I never thought that I would receive such a bad iPhone from Amazon when I ordered it online. My iPhone is taking extremely blurry photos and it heats up quickly, within 10 minutes of using Instagram or snap. when I called customer service for a replacement,they refused to exchange it directly. This is the first time I bought an iPhone and I had a hard time saving up for it. I thought the camera would be great, but it is taking very blurry photos and overheating. I bought it for Rs. 60,499 because there was a bigger discount than on Flinkart but now I regret buying it from here because I think the seller sold it to me for a lower price due to their own faulty iPhone 13." My only request is that you replace my iPhone 13 as soon as possible
"""
overall_chain(review)

The output of the above chain is:

{
'Review': '\nI never thought that I would receive such a bad iPhone from Amazon when I ordered it online. My iPhone is taking extremely blurry photos and it heats up quickly, within 10 minutes of using Instagram or snap. when I called customer service for a replacement,they refused to exchange it directly. This is the first time I bought an iPhone and I had a hard time saving up for it. I thought the camera would be great, but it is taking very blurry photos and overheating. I bought it for Rs. 60,499 because there was a bigger discount than on Flinkart but now I regret buying it from here because I think the seller sold it to me for a lower price due to their own faulty iPhone 13." My only request is that you replace my iPhone 13 as soon as possible\n',
'summary': 'The reviewer received a faulty iPhone 13 from Amazon that takes blurry photos and overheats quickly. Customer service refused to exchange it directly. The reviewer regrets buying it from Amazon and believes the seller sold it for a lower price due to its faults. They request a replacement as soon as possible.',
'sentiment': 'Negative sentiment.',
'topics': '1. Defective iPhone camera\n2. Overheating iPhone\n3. Refusal of customer service to exchange the product\n4. Disappointment with the product after saving up for it\n5. Regret for buying from Amazon instead of Flipkart\n6. Suspicion of seller selling a faulty product at a lower price.',
'followup_message': "Dear valued customer,\n\nWe are sorry to hear about your negative experience with our product. We understand how frustrating it can be to encounter issues with your iPhone camera and overheating. We apologize for any inconvenience caused by our customer service team's refusal to exchange the product.\n\nWe want to assure you that we take all customer complaints seriously and are committed to resolving any issues you may have. We encourage you to reach out to our customer service team again to discuss your concerns and explore possible solutions.\n\nWe also understand your disappointment after saving up for the product and regret for buying from Amazon instead of Flipkart. We appreciate your feedback and will take it into consideration as we continue to improve our products and services.\n\nRegarding your suspicion of the seller selling a faulty product at a lower price, we assure you that we have strict quality control measures in place to ensure that all our products meet our high standards. However, we will investigate this matter further to ensure that our customers receive only the best products.\n\nThank you for bringing these issues to our attention. We hope to have the opportunity to make things right and regain your trust in our brand.\n\nSincerely,\n\n[Your Company]"}
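Since the chain returns a dictionary, individual outputs can also be accessed by key:

result = overall_chain(review)        # same call as above, stored this time
print(result["sentiment"])            # e.g. 'Negative sentiment.' as in the run above
print(result["followup_message"])     # the drafted follow-up reply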

Another prominent type of chain used in various applications is the RouterChain. We will not go into detail about it in this article, but readers are encouraged to go through the documentation.

At its core, LangChain offers a wide array of abstractions that combine prompts and chains into building blocks for real applications. We will cover one of the major use cases of LangChain in the next article: VectorStores, OpenAI Embeddings, and Semantic-Search Engines.
