How to implement a chatbot with ReAct prompting and function calling

Joan Boronat Ruiz
Sep 27, 2023 · 4 min read


Last week, I wrote an article sharing my experience building a chatbot with OpenAI that has external tools available, and how to force the model to always choose one of them.

The proposed solution does solve that main issue, but it still falls short in the following cases:

  • The output has to be a text response to the initial user request that takes the result of the function call into account, as if it were a conversation.
  • The initial user task or question takes more than one function call to solve.

For the first case, one might think that setting the function_call parameter to auto would do the trick. Isn’t that the whole point of that setting?

…auto means the model can pick between an end-user or calling a function.

Except this does not fit well with some of the most common prompting techniques, where you need the model to reason before taking an action.

Let’s use the example from the original ReAct paper:

[Figure: ReAct prompting example from Yao et al. (2023)]

Most implementations of the ReAct technique predate the functions parameter in the API, so the usual solution was to ask the model to return a specific format and then parse the response to separate the different stages (Thought, Action, Observation).
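For reference, the classic text-based approach looks roughly like this. The labels and regex below are my own illustration of the pattern, not taken from any particular library:

```python
import re

# A typical raw ReAct-style completion (format assumed for illustration)
completion = """Thought: I need the current weather in Barcelona.
Action: get_weather
Action Input: {"city": "Barcelona"}"""

def parse_react(text):
    """Split a ReAct-formatted completion into its labeled stages."""
    pattern = r"^(Thought|Action|Action Input|Observation):\s*(.*)$"
    return {label: value for label, value in re.findall(pattern, text, re.MULTILINE)}

stages = parse_react(completion)
print(stages["Action"])  # get_weather
```

This works, but it is brittle: a single deviation from the expected labels breaks the parser, which is exactly what structured function calling avoids.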

Solution

To achieve such a flow, there are a few requirements we can outline before diving into the code:

  • The process must be iterative. Sometimes reaching an answer takes more than one iteration, as in the example.
  • During the flow, the model has to be able to finish when a final answer has been reached.
  • In each iteration the model has to be able to use external tools.
  • In each iteration the model has to reason the next step.

I will start from where we left off in the previous article, but you can probably follow along without reading it. The only difference this time is that we have a loop and a new function that I called final_answer.

These are the functions we will be using; you can find the actual implementations in this gist. I will only show their signatures here to keep the article concise.

def select_action(messages):
    """
    Uses the OpenAI API to determine the best next action based on the
    given messages.

    Parameters:
        messages (list): A list of dictionaries with 'role' and 'content'
            representing the conversation.

    Returns:
        dict: A dictionary containing 'thought' (the reasoning behind the
            action) and 'action' (one of ["final_answer", "calculator",
            "get_weather", "google_search"]).
    """


def get_action_arguments(action_name, user_input):
    """
    Derives and returns the arguments for a specified action by interacting
    with the OpenAI API, based on the user input provided.

    Parameters:
        action_name (str): The name of the action for which arguments are
            being derived.
        user_input (str): The user input from which the arguments are derived.

    Returns:
        dict: A dictionary containing the arguments derived for the
            specified action.
    """


def run_action(action_name, arguments):
    if action_name == 'final_answer':
        return final_answer(**arguments)
    elif action_name == 'get_weather':
        return get_weather(**arguments)
    else:
        raise NotImplementedError(action_name)
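As more tools are added, the if/elif chain can be replaced with a dispatch table. A small sketch, where the tool functions are stand-ins for the real implementations:

```python
def final_answer(answer):
    """Stand-in: in the real code this simply returns the answer text."""
    return answer

def get_weather(city):
    """Stand-in for the real weather lookup."""
    return "Sunny, 25 degrees Celsius"

# Map action names to callables; adding a tool is now a one-line change.
ACTIONS = {
    "final_answer": final_answer,
    "get_weather": get_weather,
}

def run_action(action_name, arguments):
    action = ACTIONS.get(action_name)
    if action is None:
        raise NotImplementedError(action_name)
    return action(**arguments)

print(run_action("get_weather", {"city": "Barcelona"}))  # Sunny, 25 degrees Celsius
```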

The main logic of the agent fits in a handful of lines of code:

user_input = "What is the weather like in Barcelona?"

messages = [
{'role': 'system', 'content': system_message},
{'role': 'user', 'content': user_input}
]

while True:
# Step #1. Ask the model to reason and select an action to take
response = select_action(messages)
function_name = response['action']

# Step #2. Ask the model to get the arguments for the function
arguments = get_action_arguments(function_name, user_input)

# Step #3. Run the function
result = run_action(function_name, arguments)

# Step #4. Add the result to the conversation for the next iteration
messages.append({'role': 'function', 'name': function_name, 'content': json.dumps(result)})

# Step #5. Let's continue if the final answer hasn't been reached
if (function_name == 'final_answer'):
break


# This should be the output for the user
print(result)

# Output
#
# The weather in Barcelona is sunny with a temperature of 25ºC.
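To sanity-check the loop without hitting the API, the two model-facing functions can be stubbed with scripted responses. Everything below is test scaffolding of my own, not part of the article's code:

```python
import json

# Scripted model behaviour: first pick the weather tool, then answer.
SCRIPT = iter([
    {'thought': 'I need the current weather.', 'action': 'get_weather'},
    {'thought': 'I can answer now.', 'action': 'final_answer'},
])

def select_action(messages):
    return next(SCRIPT)

def get_action_arguments(action_name, user_input):
    if action_name == 'get_weather':
        return {'city': 'Barcelona'}
    return {'answer': 'The weather in Barcelona is sunny with a temperature of 25ºC.'}

def get_weather(city):
    return 'Sunny, 25 degrees Celsius'

def final_answer(answer):
    return answer

def run_action(action_name, arguments):
    if action_name == 'final_answer':
        return final_answer(**arguments)
    elif action_name == 'get_weather':
        return get_weather(**arguments)
    raise NotImplementedError(action_name)

messages = [{'role': 'user', 'content': 'What is the weather like in Barcelona?'}]

# Same loop as above: iterate until the model picks final_answer.
while True:
    response = select_action(messages)
    function_name = response['action']
    arguments = get_action_arguments(function_name, messages[0]['content'])
    result = run_action(function_name, arguments)
    messages.append({'role': 'function', 'name': function_name, 'content': json.dumps(result)})
    if function_name == 'final_answer':
        break

print(result)  # The weather in Barcelona is sunny with a temperature of 25ºC.
```

Running this confirms the loop takes two iterations (one tool call, one final answer) and that the function messages accumulate in the conversation.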

And this is an example conversation:

print(messages)
[
    {
        "role": "system",
        "content": "You are a very helpful assistant. Your job is to choose the best possible action to solve the user question or task.

These are the available actions:
- final_answer: responds to the user with the answer to their question/task
- calculator: run a calculation on two numbers
- get_weather: get the current weather
- google_search: get the top 5 results on google"
    },
    {
        "role": "user",
        "content": "What is the weather like in Barcelona?"
    },
    {
        "role": "function",
        "name": "get_weather",
        "content": "Sunny, 28 degrees Celsius"
    },
    {
        "role": "function",
        "name": "final_answer",
        "content": "The weather in Barcelona is sunny with a temperature of 28ºC."
    }
]

You can find the full code here.

This is a very simple approach for implementing a chatbot using the ReAct prompting technique and the function calling feature of OpenAI.

I suspect it will become obsolete quite fast, as both OpenAI and prompting techniques are evolving at incredible speed, but it is the approach that has offered the most consistent results so far.

Here at Magnettü 🧲, we’re leveraging the power of the most recent OpenAI models, incorporating them using the approaches we’ve outlined above. We’re always open to learning and adapting, so please share your experiences and methods of integration in the comments 🙆🏼‍♂️.
