Multi-Agent Conversation & Debates using LangGraph and LangChain

Conducting a debate and deciding a winner using multi-agent orchestration, with code and examples

Mehul Gupta
Data Science in your pocket



In my last post on LangGraph, we discussed a simple example of improving RAG by introducing cycles. Taking things a step further, this time we will build a multi-agent debate application where

The user gives a debate topic

Two agents (for-the-motion & against-the-motion) are created internally

They debate the topic, each countering the opponent's previous response.

Once a certain conversation length threshold is hit, the jury is called

The jury summarizes the debate and decides a winner.
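Before we wire this up with LangGraph, the turn-taking loop above can be sketched in plain Python. This is just an illustrative stand-in: the `debate` and `speak` names are made up for this sketch, and the lambda stub replaces the real LLM call.

```python
def debate(topic, speak, max_turns=10):
    """Alternate the two sides until max_turns is hit, then return the transcript.
    `speak(side, history)` stands in for the real LLM call."""
    sides = ["for-the-motion", "against-the-motion"]
    history = []
    for turn in range(max_turns):
        side = sides[turn % 2]  # alternate speakers each turn
        history.append("{}: {}".format(side, speak(side, history)))
    return history

# A stub speaker instead of a real LLM
transcript = debate("Should Data Scientists write backend and API codes as well?",
                    lambda side, history: "argument #{}".format(len(history) + 1))
print(len(transcript))  # 10
```

LangGraph gives us the same loop, but with the state, routing, and termination logic made explicit as graph nodes and edges.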

My debut book, LangChain in your Pocket, is out!

As I’ve already covered the basics of LangGraph in my previous post, we will jump straight into the code.

Note: If you’ve missed my last post, you can check LangGraph basics here

  1. Import packages and load LLM.
from typing import Dict, TypedDict, Optional
from langgraph.graph import StateGraph, END
from langchain.llms import OpenAI
from langchain.output_parsers import CommaSeparatedListOutputParser

llm = OpenAI(openai_api_key='your API')

2. Debate topic (this can be anything)

debate_topic = "Should Data Scientists write backend and API codes as well?"

3. Deciding the two agents.

output_parser = CommaSeparatedListOutputParser()
output = llm("I wish to have a debate on {}. What would be the fighting sides called? Output just the names and nothing else as comma separated list".format(debate_topic))
classes = output_parser.parse(output)

#print(classes)
#['Data Scientists', 'Full-Stack Developers']

For the given topic, the two agents created are

‘Data Scientists’ & ‘Full-Stack Developers’
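Under the hood, CommaSeparatedListOutputParser does little more than split the LLM's text on commas. A rough stdlib stand-in (the `parse_comma_list` name is made up for this sketch) makes the behaviour clear:

```python
def parse_comma_list(text):
    """Rough stand-in for LangChain's CommaSeparatedListOutputParser:
    split on commas and strip surrounding whitespace."""
    return [part.strip() for part in text.strip().split(",")]

print(parse_comma_list("Data Scientists, Full-Stack Developers"))
# ['Data Scientists', 'Full-Stack Developers']
```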

4. Defining the graph's state variables and the workflow object

classification: Who should speak next, based on who spoke last

history: The conversation so far

current_response: The last line spoken by either agent

count: The conversation length

results: The verdict by the jury

greeting: The welcome message

class GraphState(TypedDict):
    classification: Optional[str]
    history: Optional[str]
    current_response: Optional[str]
    count: Optional[int]
    results: Optional[str]
    greeting: Optional[str]

workflow = StateGraph(GraphState)
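Each node in the graph will return only the keys it changed, and LangGraph merges that partial dict into the current state. The merge semantics can be sketched in plain Python; the `apply_node` helper below is made up purely to illustrate the idea, not part of LangGraph's API.

```python
from typing import Optional, TypedDict

class State(TypedDict, total=False):
    classification: Optional[str]
    count: Optional[int]

def apply_node(state, node_output):
    # LangGraph-style update: a node returns only the keys it changed,
    # and the graph merges them into the current state
    return {**state, **node_output}

state = {"count": 0}
state = apply_node(state, {"classification": "Data_Scientists", "count": 1})
print(state)  # {'count': 1, 'classification': 'Data_Scientists'}
```

This is why the nodes below can return dicts like `{"classification": ...}` without touching the rest of the state.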

5. We will now define the graph nodes.

prefix_start = ('You are in support of {}. You are in a debate with {} over the '
                'topic: {}. This is the conversation so far \n{}\n. '
                'Put forth your next argument to support {} countering {}. '
                "Don't repeat your previous arguments. Give a short, one line answer.")

def classify(question):
    # Ask the LLM which side the given line belongs to
    return llm("classify the sentiment of input as {} or {}. Output just the class. Input:{}".format(
        '_'.join(classes[0].split(' ')), '_'.join(classes[1].split(' ')), question)).strip()

def classify_input_node(state):
    question = state.get('current_response')
    classification = classify(question)  # classify who spoke the last line
    return {"classification": classification}

def handle_greeting_node(state):
    return {"greeting": "Hello! Today we will witness the fight between {} vs {}".format(classes[0], classes[1])}

def handle_pro(state):
    summary = state.get('history', '').strip()
    current_response = state.get('current_response', '').strip()
    if summary == 'Nothing':
        # Opening argument: there is no history to counter yet
        prompt = prefix_start.format(classes[0], classes[1], debate_topic, 'Nothing', classes[0], "Nothing")
        argument = classes[0] + ":" + llm(prompt)
        summary = 'START\n'
    else:
        prompt = prefix_start.format(classes[0], classes[1], debate_topic, summary, classes[0], current_response)
        argument = classes[0] + ":" + llm(prompt)
    return {"history": summary + '\n' + argument, "current_response": argument, "count": state.get('count') + 1}

def handle_opp(state):
    summary = state.get('history', '').strip()
    current_response = state.get('current_response', '').strip()
    prompt = prefix_start.format(classes[1], classes[0], debate_topic, summary, classes[1], current_response)
    argument = classes[1] + ":" + llm(prompt)
    return {"history": summary + '\n' + argument, "current_response": argument, "count": state.get('count') + 1}

def result(state):
    summary = state.get('history').strip()
    prompt = "Summarize the conversation and judge who won the debate. No ties are allowed. Conversation:{}".format(summary)
    return {"results": llm(prompt)}

workflow.add_node("classify_input", classify_input_node)
workflow.add_node("handle_greeting", handle_greeting_node)
workflow.add_node("handle_pro", handle_pro)
workflow.add_node("handle_opp", handle_opp)
workflow.add_node("result", result)

Let’s briefly discuss each function/node:

classify_input_node: Classifies who spoke the last line, using classify() internally.

handle_pro & handle_opp: Use the conversation history so far and the rival's last response to produce a counter-argument.

result: Once the conversation limit is hit, summarizes the conversation and judges a winner.

6. Adding conditional edges.

def decide_next_node(state):
    # If the pro side spoke last, hand the mic to the opposition, and vice versa
    return "handle_opp" if state.get('classification') == '_'.join(classes[0].split(' ')) else "handle_pro"

def check_conv_length(state):
    # After 10 turns, route to the jury; otherwise keep the cycle going
    return "result" if state.get("count") == 10 else "classify_input"

workflow.add_conditional_edges(
    "classify_input",
    decide_next_node,
    {
        "handle_pro": "handle_pro",
        "handle_opp": "handle_opp"
    }
)

workflow.add_conditional_edges(
    "handle_pro",
    check_conv_length,
    {
        "result": "result",
        "classify_input": "classify_input"
    }
)

workflow.add_conditional_edges(
    "handle_opp",
    check_conv_length,
    {
        "result": "result",
        "classify_input": "classify_input"
    }
)

Conditional edges are added where multiple directions are possible from a node; a routing function then decides which direction to take. We have added three conditional edges above.

1st edge: Once the speaker of the last response is identified by classify_input_node, hand over to the other speaker.

2nd & 3rd edges: These introduce a cycle; if the conversation limit has not been reached, go back to classify_input (and hence the other speaker), else go to the jury.
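Since both routing functions are pure functions of the state, you can sanity-check them with hand-made state dicts before running the graph. This standalone sketch redefines classes with the values produced earlier so it runs on its own:

```python
classes = ['Data Scientists', 'Full-Stack Developers']

def decide_next_node(state):
    # If the pro side spoke last, hand the mic to the opposition, and vice versa
    return "handle_opp" if state.get('classification') == '_'.join(classes[0].split(' ')) else "handle_pro"

def check_conv_length(state):
    # After 10 turns, route to the jury; otherwise keep the cycle going
    return "result" if state.get("count") == 10 else "classify_input"

print(decide_next_node({'classification': 'Data_Scientists'}))  # handle_opp
print(check_conv_length({'count': 10}))  # result
print(check_conv_length({'count': 4}))   # classify_input
```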

7. Adding the graph entry point and the remaining fixed edges

workflow.set_entry_point("handle_greeting")
workflow.add_edge('handle_greeting', "handle_pro")
workflow.add_edge('result', END)

8. Compiling the graph and starting the debate

app = workflow.compile()
conversation = app.invoke({'count':0,'history':'Nothing','current_response':''})

9. The conversation

print(conversation['history'])

START

Data Scientists: Data Scientists should write backend and API codes to ensure seamless integration of models and efficient data processing.
Full-Stack Developers: Full-Stack Developers possess expertise in both frontend and backend development, ensuring comprehensive understanding and efficient implementation of data models and APIs.
Data Scientists: Data Scientists deep understanding of data and modeling techniques enables them to optimize backend processes and API design for efficient data handling and model integration.
Full-Stack Developers: Full-Stack Developers holistic knowledge of software development ensures efficient integration of data models, APIs, and frontend components for a cohesive user experience.
Data Scientists: Data Scientists specialized knowledge allows them to optimize backend processes and API design for efficient data handling and model integration, ensuring optimal performance and accuracy in data-driven applications.
Full-Stack Developers: Full-Stack Developers expertise in both frontend and backend development ensures efficient integration and optimization of data models, APIs, and user interfaces for seamless user experiences.
.......

10. The verdict by the jury

print(conversation['results'])

The conversation is a debate between Data Scientists and Full-Stack Developers on who is better suited to write backend and API codes for data-driven applications. Both sides present valid arguments based on their respective expertise.

Data Scientists emphasize their deep understanding of data and modeling techniques, which enables them to optimize backend processes and API design for efficient data handling and model integration. They argue that their specialized knowledge allows for optimal performance and accuracy in data-driven applications.

On the other hand, Full-Stack Developers highlight their holistic knowledge of software development, encompassing both frontend and backend expertise. They contend that this comprehensive understanding ensures efficient integration of data models, APIs, and user interfaces, resulting in seamless user experiences.

While both roles have their merits, the debate can be judged in favor of Full-Stack Developers. Their broader skillset allows them to not only optimize backend processes and APIs but also integrate them effectively with frontend components. This holistic approach ensures a cohesive user experience and maximizes the overall performance of data-driven applications.

Therefore, Full-Stack Developers emerge as the winners of this debate due to their comprehensive expertise in software development and their ability to deliver seamless user experiences.

So this is how the entire graph we formed looks: handle_greeting kicks things off and hands over to handle_pro, then handle_pro and handle_opp alternate through classify_input until the count reaches 10, after which result delivers the verdict and the graph hits END.

So, it's a wrap. See you soon!
