Code an AI Assistant with GPT Function Calls

Vincent Favilla
10 min read · Jun 15, 2023

OpenAI’s GPT series now supports function calling, a powerful feature that allows developers to get structured data from the model by describing functions to it. This is a game-changer in the world of AI development, as it gives the AI model the ability to work with tools, extending its capabilities beyond just generating text.

Why is Function Calling Useful?

Imagine you’re building a chatbot. Without function calling, the chatbot can answer questions based on the information it was trained on. But what if a user asks a question that requires real-time data, like the current weather or stock market prices? Or what if the user wants to perform an action, like booking a hotel room or sending an email?

This is where function calling comes in. By describing your custom functions to GPT, you’re giving it the ability to interact with your code. The model can then generate a JSON object containing arguments to call your functions.

This means your chatbot can now fetch real-time data or perform actions based on user requests. Here are some ways you can use function calling:

  • Create interactive chatbots: Your chatbot can now perform actions like booking a hotel room, sending an email, or fetching real-time data based on user requests.
  • Convert natural language into API calls: You can convert user requests like “Who are my top customers?” into API calls to your backend.
  • Extract structured data from text: You can define a function to extract all people mentioned in a Wikipedia article, for example.
  • Automated data analysis: Suppose you have a function that can analyze a dataset and provide insights, such as identifying trends, spotting outliers, or making predictions. With function calling, a user can ask the AI in natural language to analyze a specific dataset, and the AI can call your function to perform the analysis and return the results.
  • E-commerce: If you’re running an e-commerce platform, you can define functions for actions like adding items to a cart, applying discount codes, or checking out. The AI can then call these functions based on user requests, providing a conversational shopping experience.
  • Educational tools: For an educational application, you could define functions that generate quiz questions, provide hints, or evaluate answers. The AI can interact with these functions to provide an interactive learning experience.
  • Healthcare applications: In a healthcare setting, you could define functions that pull up patient records, schedule appointments, or provide medical advice based on symptoms. The AI can call these functions to assist healthcare professionals or patients. (Remember, any sensitive data should be handled within your local functions and not included in the conversation with the model, to ensure privacy and compliance with regulations like HIPAA.)
  • Interactive gaming: In a game, you could define functions that control the player’s actions, like moving in a certain direction, attacking, or using an item. The AI can call these functions based on the player’s natural language commands, creating a unique, voice-controlled gaming experience.

In essence, function calling allows you to create more interactive and useful AI applications by bridging the gap between the AI model and your custom code.
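
Concretely, instead of replying with text, the model returns a message whose function_call field names a function and supplies JSON-encoded arguments. Here’s an illustrative example of that shape (the function name here is hypothetical; we’ll see a real one later in this post):

{'role': 'assistant',
 'content': None,
 'function_call': {'name': 'get_current_weather',
                   'arguments': '{"location": "Boston, MA"}'}}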

Let’s demo this new functionality with a simple but useful example: giving GPT the ability to perform arithmetic, since we know that math is a weakness of large language models.

I’ll adapt the code from the OpenAI function calls cookbook, and add a bit more commentary so you can follow along.

Setup

The first step is to install and import the required libraries:

pip install openai tenacity termcolor requests

Then import the libraries and set your GPT model and API key:

import json
import openai
import requests

# For retrying an API call if it fails
from tenacity import retry, wait_random_exponential, stop_after_attempt

# For color coding a conversation
from termcolor import colored

GPT_MODEL = "gpt-3.5-turbo-0613"
openai.api_key = "YOUR_API_KEY"

Now let’s define some helper functions. You can keep these the same in any project you create.

First, a function to send API requests:

@retry(wait=wait_random_exponential(min=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, functions=None, function_call=None, model=GPT_MODEL):
    """
    Send a POST request to the OpenAI API to generate a chat completion.

    Parameters:
    - messages (list): A list of message objects. Each object should have a
      'role' (either 'system', 'user', 'function', or 'assistant') and
      'content' (the content of the message).
    - functions (list, optional): A list of function objects that describe
      the functions the model can call.
    - function_call (str or dict, optional): If it's a string, it can be
      either 'auto' (the model decides whether to call a function) or 'none'
      (the model will not call a function). If it's a dict, it should
      describe the function to call.
    - model (str): The ID of the model to use.

    Returns:
    - response (requests.Response): The response from the OpenAI API. If the
      request was successful, the response's JSON will contain the chat
      completion.
    """
    # Set up the headers for the API request
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + openai.api_key,
    }
    # Set up the data for the API request
    json_data = {"model": model, "messages": messages}
    # If functions were provided, add them to the data
    if functions is not None:
        json_data.update({"functions": functions})
    # If a function call was specified, add it to the data
    if function_call is not None:
        json_data.update({"function_call": function_call})
    # Send the API request
    try:
        response = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers=headers,
            json=json_data,
        )
        return response
    except Exception as e:
        print("Unable to generate ChatCompletion response")
        print(f"Exception: {e}")
        # Re-raise so the @retry decorator can actually retry the request
        raise

As of this writing, function calling does not appear to be supported in the openai Python library, so we have to use POST requests instead. The function above simplifies that process considerably.
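
Before moving on, here’s a quick sanity check you can run (an illustrative snippet, assuming your API key is set):

# Quick sanity check (illustrative): send a one-message conversation
# and print the assistant's reply
response = chat_completion_request(
    [{"role": "user", "content": "Say hello in five words."}]
)
print(response.json()["choices"][0]["message"]["content"])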

Next we can use a function to color-code a conversation, making it easier to read:

# This function supplied by OpenAI doesn't work in Colab; I have an
# alternative function for this in the next cell

def original_pretty_print_conversation(messages):
    """
    This function takes a list of messages as input.
    Each message is a dictionary with a `role` and `content`.
    The `role` can be "system", "user", "assistant", or "function".
    This function formats each message based on its role and appends it
    to the `formatted_messages` list. Finally, it prints each formatted
    message in a color corresponding to its role.
    """

    # Define a dictionary to map roles to colors
    role_to_color = {
        "system": "red",
        "user": "green",
        "assistant": "blue",
        "function": "magenta",
    }

    # Initialize an empty list to store the formatted messages
    formatted_messages = []

    # Iterate over each message in the messages list
    for message in messages:
        # Check the role of the message and format it accordingly
        if message["role"] == "system":
            formatted_messages.append(f"system: {message['content']}\n")
        elif message["role"] == "user":
            formatted_messages.append(f"user: {message['content']}\n")
        elif message["role"] == "assistant" and message.get("function_call"):
            formatted_messages.append(f"assistant: {message['function_call']}\n")
        elif message["role"] == "assistant" and not message.get("function_call"):
            formatted_messages.append(f"assistant: {message['content']}\n")
        elif message["role"] == "function":
            formatted_messages.append(f"function ({message['name']}): {message['content']}\n")

    # Print each formatted message in the color matching its role
    # (zip pairs each formatted string with its original message)
    for message, formatted_message in zip(messages, formatted_messages):
        print(colored(formatted_message, role_to_color[message["role"]]))

Note that this method of formatting text doesn’t work in Google Colab, but my accompanying notebook has another function you can use instead.
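
If you want something self-contained in the meantime, here’s a minimal Colab-friendly sketch of my own (not the notebook’s exact function); it renders each message as colored HTML, which displays reliably in Colab and Jupyter:

# Minimal Colab-friendly alternative (illustrative sketch, not the
# notebook's exact function): render each message as colored HTML
import html
from IPython.display import HTML, display

def pretty_print_conversation(messages):
    # Same role-to-color mapping as above
    role_to_color = {
        "system": "red",
        "user": "green",
        "assistant": "blue",
        "function": "magenta",
    }
    for message in messages:
        role = message["role"]
        # Function-call requests have no 'content', so show the call instead
        text = message.get("content") or str(message.get("function_call", ""))
        if role == "function":
            text = f"({message['name']}): {text}"
        safe_text = html.escape(text)
        display(HTML(f'<span style="color:{role_to_color[role]}">{role}: {safe_text}</span>'))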

Now let’s define a function for GPT to use:

def add_numbers(num1, num2):
    return num1 + num2

Simple enough, right? But now we need to write a JSON specification to explain the function to GPT:

functions = [
    {
        "name": "add_numbers",
        "description": "Add two numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "num1": {
                    "type": "number",
                    "description": "The first number",
                },
                "num2": {
                    "type": "number",
                    "description": "The second number",
                },
            },
            "required": ["num1", "num2"],
        },
    },
]

A few things to note here:

  • This data is what GPT sees — not your Python function. So it’s up to you to clearly describe your variables and what your function does.
  • You’ll probably use more than one function, but they all go in this single list, since the list is converted to JSON and sent with each request.
  • GPT is actually really good at writing these specifications itself! So give it an example, show it your functions, and it should be able to help you write them properly.
  • The “type” here is “number” (and not “float”) because JSON schema uses “number” to represent both integers and floating-point numbers.

Next we’ll need a way to call the functions that GPT requests to use. A simple way to do this is to have a dictionary that maps function names as strings back to the original functions:

functions_dict = {
    "add_numbers": add_numbers,
}
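
As a quick sanity check (illustrative), you can dispatch through the dictionary yourself:

# Look up by name and call with keyword arguments,
# just as we'll do with GPT's requests below
print(functions_dict["add_numbers"](num1=2, num2=3))  # 5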

Finally, let’s start a conversation with GPT:

messages = []
messages.append({"role": "system", "content": "You are a helpful assistant."})

Using Your New Functions

This functionality won’t be all that helpful unless it can work with ambiguous user requests. So rather than giving it a math quiz, let’s see if it can figure out for itself when to use the tools it has:

messages.append(
    {
        "role": "user",
        "content": "I have a bill for $323 and another for $295. How much do I owe?"
    }
)

Now we generate a response using our chat function:

# Generate a response
chat_response = chat_completion_request(
    messages, functions=functions
)

# Save the JSON to a variable
assistant_message = chat_response.json()["choices"][0]["message"]

# Append response to conversation
messages.append(assistant_message)

If we print the assistant_message we just got, we’ll see this:

{'role': 'assistant',
 'content': None,
 'function_call': {'name': 'add_numbers',
                   'arguments': '{\n "num1": 323,\n "num2": 295\n}'}}

This is GPT telling us that it’s ready to use a function rather than send a text-based response. So let’s run it!

if assistant_message["function_call"]:

    # Retrieve the name of the relevant function
    function_name = assistant_message["function_call"]["name"]  # `add_numbers`

    # Retrieve the arguments to send the function
    function_args = json.loads(assistant_message["function_call"]["arguments"])
    # 323 and 295

    # Look up the function in our dictionary and
    # call it with the provided arguments
    result = functions_dict[function_name](**function_args)

    print(result)

We get back 618, which is indeed correct!

The next thing we need to do is add a new message to the conversation with the function result:

messages.append({
    "role": "function",
    "name": function_name,
    "content": str(result),  # Convert the result to a string
})

Lastly, we need to tell the user what the total was:

# Call the model again to generate a user-facing message
# based on the function result
chat_response = chat_completion_request(
    messages, functions=functions
)
assistant_message = chat_response.json()["choices"][0]["message"]
messages.append(assistant_message)

# Print the final conversation
pretty_print_conversation(messages)

We get back this:

system: You are a helpful assistant.

user: I have a bill for $323 and another for $295. How much do I owe?

assistant: {'name': 'add_numbers', 'arguments': '{\n "num1": 323,\n "num2": 295\n}'}

function (add_numbers): 618

assistant: You owe a total of $618.

So the assistant is indeed able to report the result in natural language. (And, for an app in production, you’ll probably elect not to display messages without content, so that the user sees only the “user” and “assistant” messages.)
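
A simple filter like the following (an illustrative sketch) would hide the function-call plumbing from the user:

# Show only the messages an end user should see (illustrative)
for m in messages:
    if m["role"] in ("user", "assistant") and m.get("content"):
        print(f'{m["role"]}: {m["content"]}')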

Adding More Functionality

Of course, you’re likely to be giving GPT access to more than one function, so let’s repeat this process with another tool at its disposal: subtraction. This example will be helpful for seeing how GPT can successfully choose from an arbitrary selection of functions.

The code that follows will look familiar: we’re doing the same thing we did for our addition function to add a subtraction function.

# Define in Python
def subtract_numbers(num1, num2):
    return num1 - num2


# Define the function specification;
# notice we're appending this to our previous function list using +=
functions += [
    {
        "name": "subtract_numbers",
        "description": "Subtract two numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "num1": {
                    "type": "number",
                    "description": "The first number",
                },
                "num2": {
                    "type": "number",
                    "description": "The second number",
                },
            },
            "required": ["num1", "num2"],
        },
    },
]

# Add to our function dictionary
functions_dict["subtract_numbers"] = subtract_numbers

Now let’s try an even more ambiguous prompt to see how GPT does: “I have $8013 in my checking account. How much will I have after I pay my bills?”

messages.append({"role": "user", "content": "I have $8013 in my checking account. How much will I have after I pay my bills?"})

And then the same code we saw before:

  • Generate a response (which will be a request to use a function)
  • Append that response to the conversation
  • Retrieve the name of the relevant function from the JSON response
  • Retrieve the arguments and then look up the function and run it with those arguments
  • Add a new message to the conversation with the function result
  • Generate a user-facing message with that result

# Generate a response
chat_response = chat_completion_request(
    messages, functions=functions
)
assistant_message = chat_response.json()["choices"][0]["message"]

# Append response to conversation
messages.append(assistant_message)

# I like putting an if statement here, but it's up to you
if assistant_message["function_call"]:

    # Retrieve the name of the relevant function
    function_name = assistant_message["function_call"]["name"]

    # Retrieve the arguments to send the function
    function_args = json.loads(assistant_message["function_call"]["arguments"])

    # Look up the function and call it with the provided arguments
    result = functions_dict[function_name](**function_args)

    # Add a new message to the conversation with the function result
    messages.append({
        "role": "function",
        "name": function_name,
        "content": str(result),  # Convert the result to a string
    })

# Call the model again to generate a user-facing message
# based on the function result
chat_response = chat_completion_request(
    messages, functions=functions
)
assistant_message = chat_response.json()["choices"][0]["message"]
messages.append(assistant_message)

# Print the final conversation
pretty_print_conversation(messages)

This one is trickier because subtraction, unlike addition, is not commutative: the order of num1 and num2 actually matters. But when we print the final conversation, we see:

system: You are a helpful assistant.

user: I have a bill for $323 and another for $295. How much do I owe?

assistant: {'name': 'add_numbers', 'arguments':
'{\n "num1": 323,\n "num2": 295\n}'}

function (add_numbers): 618

assistant: You owe a total of $618.

user: I have $8013 in my checking account. How much will I have after
I pay my bills?

assistant: {'name': 'subtract_numbers', 'arguments':
'{\n "num1": 8013,\n "num2": 618\n}'}

function (subtract_numbers): 7395

assistant: After paying your bills, you will have $7395 remaining in your
checking account.

So, GPT was able to correctly infer that num1 should be 8013 and num2 should be the sum of the bills mentioned previously, 618.

Conclusion

This simple example does a great job of demonstrating the power and flexibility of function calling in GPT models. Even ambiguously phrased messages can be parsed correctly, and the appropriate function can be selected and executed. This opens up a world of possibilities for developers to create more interactive and useful AI applications.

Once you go through the code and try implementing your own example, you’ll find the process is fairly straightforward: define any function you want, describe it to GPT, and then use it in your conversations.

OpenAI CEO Sam Altman reportedly commented that ChatGPT plug-ins don’t seem to have product-market fit. Whereas developers originally thought they wanted their app inside ChatGPT, they actually want ChatGPT inside their app. This new feature allows developers to do just that. By integrating GPT into your applications, you can leverage the power of AI to enhance user experience, automate tasks, and provide real-time data and insights.

So dig into my Colab notebook with the code for implementing function calls. We’re about to see an explosion of interesting new services, and I hope yours is one of them.
