[Course Notes] LangChain for LLM Application Development: Part 7

Chanan
4 min read · Mar 9, 2024


LangChain for LLM Application Development: Agents

LangChain for LLM Application Development by DeepLearning.AI

Table of Contents

  1. Introduction
  2. Models, Prompts, and Parsers
  3. Memory
  4. Chains
  5. Question and Answer
  6. Evaluation
  7. Agents (This part)

Agents

Sometimes people think of a large language model as a knowledge store: as if it has learned to memorize a lot of information, perhaps from the internet, so that when you ask it a question, it can answer.

But an even more useful way to think of a large language model is as a reasoning engine: you give it new information from other sources, and the LLM may draw on its background knowledge from the internet, but it primarily uses the information you provide to answer questions, reason through content, or even decide what to do next. That is what LangChain Agents help you do.

import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file

import warnings
warnings.filterwarnings("ignore")

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"

#!pip install -U wikipedia
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType  # used to specify which agent type we want
from langchain.tools.python.tool import PythonREPLTool
from langchain.python import PythonREPL
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0, model=llm_model)
tools = load_tools(["llm-math","wikipedia"], llm=llm)
  • llm-math is actually a chain itself: a language model used in conjunction with a calculator to solve math problems.
  • wikipedia is a tool that connects to the Wikipedia API, letting the agent run search queries against Wikipedia and get back results.
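Conceptually, the llm-math chain first asks the LLM to rewrite the question as a numeric expression, then evaluates that expression with a calculator (the real chain uses numexpr). A minimal stand-in, with a hypothetical expression the model might emit:

```python
# Sketch of the llm-math idea: the LLM turns "What is 25% of 300?" into
# an arithmetic expression, and plain Python evaluates it.
# (The real chain uses numexpr; restricted eval here is only a stand-in.)
expression = "300 * 0.25"  # hypothetical LLM output
result = eval(expression, {"__builtins__": {}})
print(result)  # 75.0
```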

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)

# Example 1
agent("What is the 25% of 300?")

# Wikipedia example
question = "Tom M. Mitchell is an American computer scientist \
and the Founders University Professor at Carnegie Mellon University (CMU). \
What book did he write?"
result = agent(question)

Here, we use CHAT_ZERO_SHOT_REACT_DESCRIPTION as the AgentType. The important things to know are:

  1. CHAT: this agent has been optimized to work with chat models.
  2. REACT: ReAct is a prompting technique designed to get the best reasoning performance out of the language model.

We also set handle_parsing_errors=True. This is really useful when the language model outputs something that cannot be parsed into the desired action and action-input format; instead of failing, the malformed text is handed back to the model so it can try again.
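To see why parsing can fail, here is a simplified sketch (hypothetical, not LangChain's actual parser) of what a ReAct-style agent extracts from the model's text; when the expected format is missing, handle_parsing_errors=True lets the agent recover instead of crashing:

```python
import re

# Simplified sketch of ReAct output parsing: the agent expects the LLM to
# emit "Action: <tool>" followed by "Action Input: <input>".
def parse_action(llm_output: str) -> tuple[str, str]:
    match = re.search(r"Action:\s*(.+?)\s*\nAction Input:\s*(.+)", llm_output)
    if match is None:
        # This is the situation handle_parsing_errors=True deals with:
        # the error is sent back to the model rather than raised to the user.
        raise ValueError(f"Could not parse LLM output: {llm_output!r}")
    return match.group(1), match.group(2).strip()

tool, tool_input = parse_action("Action: Calculator\nAction Input: 300 * 0.25")
print(tool, tool_input)  # Calculator 300 * 0.25
```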

# Python Agent

agent = create_python_agent(
    llm,
    tool=PythonREPLTool(),
    verbose=True
)

customer_list = [["Harrison", "Chase"],
                 ["Lang", "Chain"],
                 ["Dolly", "Too"],
                 ["Elle", "Elem"],
                 ["Geoff", "Fusion"],
                 ["Trance", "Former"],
                 ["Jen", "Ayai"]]

agent.run(f"""Sort these customers by \
last name and then first name \
and print the output: {customer_list}""")


'''
Output:
> Entering new AgentExecutor chain...
I can use the sorted() function to sort the list of customers by last name and then first name. I will need to provide a key function to sorted() that returns a tuple of the last name and first name in that order.
Action: Python REPL
Action Input:
```
customers = [['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]
sorted_customers = sorted(customers, key=lambda x: (x[1], x[0]))
for customer in sorted_customers:
    print(customer)
```
Observation: ['Jen', 'Ayai']
['Lang', 'Chain']
['Harrison', 'Chase']
['Elle', 'Elem']
['Trance', 'Former']
['Geoff', 'Fusion']
['Dolly', 'Too']

Thought:The customers have been sorted by last name and then first name, and the output has been printed.
Final Answer: [['Jen', 'Ayai'], ['Lang', 'Chain'], ['Harrison', 'Chase'], ['Elle', 'Elem'], ['Trance', 'Former'], ['Geoff', 'Fusion'], ['Dolly', 'Too']]

> Finished chain.
"[['Jen', 'Ayai'], ['Lang', 'Chain'], ['Harrison', 'Chase'], ['Elle', 'Elem'], ['Trance', 'Former'], ['Geoff', 'Fusion'], ['Dolly', 'Too']]"
'''

Python REPL tool

Sometimes, for complex calculations, rather than have an LLM generate the answer directly, it can be better to have the LLM generate code to calculate the answer, and then run that code to get the answer.
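A minimal sketch of that idea (not PythonREPLTool's actual implementation): execute the code the LLM writes and capture whatever it prints, returning the text as the tool's observation.

```python
import io
import contextlib

# Sketch of a "run generated code, return its stdout" tool.
# The real PythonREPLTool builds more safety around this core idea.
def run_python(code: str) -> str:
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue()

# Hypothetical code an LLM might generate for "what is 2 to the 10th power?"
print(run_python("print(2 ** 10)"))  # 1024
```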

# Define your own tool
from langchain.agents import tool
from datetime import date

@tool
def time(text: str) -> str:
    """Returns today's date; use this for any questions
    related to knowing today's date. The input should always
    be an empty string, and this function will always return
    today's date - any date mathematics should occur outside
    this function."""
    return str(date.today())

agent = initialize_agent(
    tools + [time],
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)
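What the @tool decorator records can be approximated like this (a rough sketch, not LangChain's implementation): the function's name and docstring become the tool's name and description, which is what the agent reads when deciding whether to call it.

```python
from datetime import date

# Rough sketch of a @tool-style decorator: it attaches the function's name
# and docstring as metadata the agent can inspect when choosing a tool.
def simple_tool(func):
    func.name = func.__name__
    func.description = func.__doc__
    return func

@simple_tool
def time(text: str) -> str:
    """Returns today's date; the input string is ignored."""
    return str(date.today())

print(time.name)  # time
print(time(""))   # today's date, e.g. 2024-03-09
```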

That’s the end of our LangChain journey for LLM Application Development!

Thanks for sticking with me until the end. Have a great day 😄
