Multi-Agent Systems / LangGraph

Mine Kaya
8 min read · Jun 19, 2024


Yeah, you heard it right, Smiths are here! It’s been a while since my previous post. If you’re new, hi. I would like to talk about LangGraph in this post, and a bit about LangSmith too. Recently, we started to implement an Agent Supervisor, which is one way to build a multi-agent system.

But before we get too technical, can we just pause for a moment? This is ducking Agent Smith from The Matrix. Remember that the Smiths were pieces of code created by the Matrix to keep ‘order’ in the system, the system that keeps humans in the simulation (even if the system collapsed; shout out to Neo). The Smiths had a hierarchy: there was ‘the Smith’ who ordered the other Smiths around. (He could multiply his existence in the simulation later on. Actually, that’s a really cool way to handle a system with many requests; maybe LangChain’s next move ;))

Joke session over: LangGraph is a really cool library, and it helps in cases where LangChain alone is not capable. It provides a way to divide complex problems, use cases, or flows when a single-agent system is not good enough. As I said, I will talk about Multi-Agent Systems, mainly the Supervisor implementation, because there are different methods for combining agents. And because we are building a chatbot, we got a lot of help from the Customer Support Bot tutorial. It gives a really clear idea of what you can do in a chat and how to steer your agents to do the right thing.

Introduction to LangGraph

LangGraph is built on top of LangChain and is completely compatible with the LangChain ecosystem. It is basically a Python library for building complex, scalable AI agents using graph-based state machines. If you have ever tried LangChain, you have probably run into its limits when you want your agent to run in production, where more control is often needed. You may want to always force an agent to call a particular tool first. You may want more control over how tools are called. You may want different prompts for the agent, depending on the state it is in.

So, what are these “state machines”? They give you the power to loop between human input and LLM calls until the work is done. The graph keeps track of which agent runs, which tool it used, and, if you want, memory too. I don’t want to deep-dive into memory for now, but it is a bit different from what we see in LangChain: a Checkpointer gives your agent “memory” by persisting its state.
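To make that concrete, here is a minimal, hedged sketch of a checkpointed graph. (The checkpointer API has moved around between LangGraph versions; MemorySaver is the in-memory variant, and StateGraph itself is introduced just below.)

from typing import Annotated, TypedDict
import operator

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class ChatState(TypedDict):
    messages: Annotated[list, operator.add]

def respond(state: ChatState):
    # stand-in for a real LLM call
    return {"messages": ["(model reply)"]}

builder = StateGraph(ChatState)
builder.add_node("respond", respond)
builder.set_entry_point("respond")
builder.add_edge("respond", END)

# the checkpointer persists state per thread_id, so a second call with
# the same thread_id sees the earlier messages
app = builder.compile(checkpointer=MemorySaver())
app.invoke({"messages": ["hi"]}, config={"configurable": {"thread_id": "1"}})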

StateGraph

StateGraph is a class that represents the graph. You initialize this class by passing in a state definition. This state is updated by nodes in the graph, which return partial updates in the form of key-value pairs.

from typing import Annotated, Sequence, TypedDict
import operator

from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph


class State(TypedDict):
    input: str
    messages: Annotated[Sequence[BaseMessage], operator.add]


graph = StateGraph(State)
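To see how the update semantics work, here is a hypothetical node (not part of the tutorial code, just an illustration):

from langchain_core.messages import AIMessage

# a node receives the current state and returns only the keys it wants
# to update; because `messages` is annotated with operator.add, the
# returned list is appended to the existing messages, while a plain key
# like `input` would simply be overwritten
def model_node(state: State):
    return {"messages": [AIMessage(content="hello")]}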

Nodes

After creating a StateGraph, you then add nodes with the graph.add_node(name, value) syntax. The value parameter should be either a function or an LCEL runnable that will be called (i.e., an executable tool or LLM).

graph.add_node("model", model)
graph.add_node("tools", tool_executor)

Remember that we will loop through this graph, so it is important to exit somewhere in the process. The special END node is used to represent the end of the graph.

from langgraph.graph import END

# END is a special node marking where the graph stops; you route to it
# with an edge (see the conditional edge example below) rather than
# adding it yourself with add_node.

Edges

After adding nodes, you can then add edges to create the graph. There are three types of edges for now:

1 - Starting Edge: This is the edge that connects the start of the graph to a particular node. The code below means that our graph will start at the ‘model’ node, as we named it before.

graph.set_entry_point("model")

2 - Normal Edge: These edges ensure that one node is always called after another. The code below means that when we call the ‘tools’ node, the ‘model’ node will always be called after it.

graph.add_edge("tools", "model")

3 - Conditional Edges: These are edges where a function (often backed by the LLM) decides which node to go to next. You do not hard-code the destination; the decision is made at runtime by checking the state.

A conditional edge takes three parameters. The first is the node whose output determines what to do next. The second is a function that returns the name of the next step. The third is a mapping whose keys are the possible return values of that function and whose values are the names of the nodes to go to.

graph.add_conditional_edges(
    "model",
    should_continue,
    {
        "end": END,
        "continue": "tools",
    },
)
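Note that should_continue is not defined anywhere in this snippet. Here is a minimal sketch of what it could look like, assuming the model node appends a message whose tool_calls field is non-empty while tools are still needed:

def should_continue(state: State) -> str:
    # route back to the tools node while the model keeps requesting tools,
    # otherwise signal that we are done
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "continue"
    return "end"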

Compile

After we define our graph, we can compile it into a runnable. This runnable has all the same methods as other LangChain runnables (.invoke, .stream, .astream_log, etc.).

app = graph.compile()
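For example (a hedged usage sketch, assuming the State defined earlier):

# the compiled graph behaves like any LangChain runnable
result = app.invoke({"input": "hi", "messages": []})

# or stream the intermediate steps as they happen
for chunk in app.stream({"input": "hi", "messages": []}):
    print(chunk)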

Multi-Agent Systems

A single agent may fail when it has too many tools to execute in sequence. In a multi-agent system, we divide the problem and conquer each step with a different agent, routing tasks to the appropriate expert.

There are three ways to control the flow: collaboration, supervision, and hierarchical teams. You can check the official docs for your project’s needs; in this post, we will follow the supervision pattern.

Agent Supervisor

We will create an agent group in which each agent has specific tools to complete its task, and an agent supervisor to delegate the tasks.

For this example, we will have two agents and one supervisor. The first agent will generate random numbers, and the other agent will plot a diagram from those numbers. As we expect, the supervisor delegates the tasks: when the random number generator agent is done, it hands the wheel to the other one.

Let’s start with defining the basics.

from langchain_openai import ChatOpenAI
from langchain_experimental.tools import PythonREPLTool
from langchain_core.tools import tool
import random


# Model
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Tools

# for plotting the diagram
python_repl_tool = PythonREPLTool()

# for generating random numbers
@tool("random_number", return_direct=False)
def random_number(input: str) -> str:
    """Returns a random number between 0-100. Input the word 'random'."""
    return str(random.randint(0, 100))

tools = [random_number, python_repl_tool]
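As a quick sanity check (assuming a recent langchain-core, where @tool-decorated functions are runnables), you can call the tool directly before wiring it into any agent:

# try the tool on its own, independent of any agent
print(random_number.invoke("random"))  # e.g. '42'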

Let’s continue with the helper functions:

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

# function that returns an AgentExecutor built from the given tools and prompt
def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str):
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                system_prompt,
            ),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_tools_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    return executor

# agent node: the function we will use to call agents in our graph
def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}

Now, let’s start creating our graph and add these two agents as nodes:

import operator
from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict
import functools
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langgraph.graph import StateGraph, END

# Random_Number_Generator as a node
random_agent = create_agent(llm, [random_number], "You get random numbers")
random_node = functools.partial(agent_node, agent=random_agent, name="Random_Number_Generator")

# Coder as a node
code_agent = create_agent(llm, [python_repl_tool], "You generate charts using matplotlib.")
code_node = functools.partial(agent_node, agent=code_agent, name="Coder")

Time to create our supervisor!

from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

members = ["Random_Number_Generator", "Coder"]
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
# It will use function calling to choose the next worker node OR finish processing.
options = ["FINISH"] + members
# openai function calling
function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            }
        },
        "required": ["next"],
    },
}
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))


# we create the chain: the llm bound to the routing function + system_prompt
supervisor_chain = (
    prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)
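If you are curious what the supervisor chain actually returns, here is a hedged sketch: the JsonOutputFunctionsParser yields the arguments of the forced function call as a dict.

from langchain_core.messages import HumanMessage

# e.g. {"next": "Random_Number_Generator"} or {"next": "FINISH"}
decision = supervisor_chain.invoke(
    {"messages": [HumanMessage(content="Get 10 random numbers")]}
)
print(decision["next"])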

Let’s create our graph! (Please read the comments.)

First we define the State and add our agent nodes, plus the Supervisor node.

# defining the AgentState that holds messages and where to go next
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    # The 'next' field indicates where to route to next
    next: str

# defining the StateGraph
workflow = StateGraph(AgentState)

# agents as nodes, supervisor_chain as a node
workflow.add_node("Random_Number_Generator", random_node)
workflow.add_node("Coder", code_node)
workflow.add_node("Supervisor", supervisor_chain)

# when agents are done with the task, the next node should ALWAYS be the supervisor
workflow.add_edge("Random_Number_Generator", "Supervisor")
workflow.add_edge("Coder", "Supervisor")

# Supervisor decides the "next" field in the graph state,
# which routes to a node or finishes. (Remember the special END node above)
workflow.add_conditional_edges(
    "Supervisor",
    lambda x: x["next"],
    {
        "Random_Number_Generator": "Random_Number_Generator",
        "Coder": "Coder",
        "FINISH": END,
    },
)

# starting point should be the supervisor
workflow.set_entry_point("Supervisor")


graph = workflow.compile()

Let’s try it! You can stream from, or directly invoke, the graph.

for s in graph.stream(
    {
        "messages": [
            HumanMessage(content="Get 10 random numbers and generate a histogram")
        ]
    },
    config={"recursion_limit": 20},
):
    if "__end__" not in s:
        print(s)
        print("----")
[Image: streamed output of the run]

As we see from the output, we start with the Supervisor as declared. The Supervisor routes us to the Random_Number_Generator. After the Random_Number_Generator finishes the task, it returns to the Supervisor because we added an edge. The Supervisor then routes to the Coder, which finishes and returns to the Supervisor. When the task is done, the Supervisor finishes processing.
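Since the compiled graph is a runnable, you can also skip streaming and invoke it directly; a small sketch:

# run the whole graph in one call; the final state holds the accumulated
# messages once the Supervisor routes to FINISH
final_state = graph.invoke(
    {"messages": [HumanMessage(content="Get 10 random numbers and generate a histogram")]},
    config={"recursion_limit": 20},
)
print(final_state["messages"][-1].content)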

🥳

LangSmith

LangSmith is a platform for LLM application development, monitoring, and testing. Personally, I use it for monitoring, so I will only mention that aspect. If you enable LangSmith tracing, you can trace and debug your LLM calls.

For docs, see the LangSmith documentation.

For example, the run we did above looks like this:

[Image: LangSmith trace of the run]

If you want more detail:

[Image: expanded view of a single step in the trace]

It is very useful when you have agents with many tools. In our case, they only had one tool, but when the problem becomes more complex and you want to understand what is happening inside, LangSmith will really help you follow the steps. You can do this with your debug console as well, but why bother when you have this tool?

You enable tracing by adding an environment variable:

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
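Two more variables are worth knowing about. The API key is required for traces to actually reach LangSmith; the project name below is my own hypothetical label, pick whatever you like:

# required: your LangSmith API key, so traces can be recorded
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
# optional: group runs under a named project in the LangSmith UI
os.environ["LANGCHAIN_PROJECT"] = "multi-agent-demo"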


Hope you enjoyed! See you in the next one :)
