Everything you need to know about OpenAI function calling and the Assistants API
Learn about OpenAI function calling and the Assistants API with a real-world use case
Introduction
OpenAI has been the frontrunner in releasing state-of-the-art large language models to advance AI technologies that can be adopted by the masses. OpenAI has released many new large language models over the years, which developers use to build a wide range of applications in various fields such as e-commerce, finance, healthcare, etc.
In the rapidly evolving landscape of AI and chatbot technology, mastering function calling within OpenAI’s APIs has become a cornerstone for developers looking to push the boundaries of conversational AI. This article delves into OpenAI’s Chat Completions and Assistants APIs, showing how to develop customized chatbot applications for your next AI project.
Learning objectives:
Before we dive into the details, let’s outline the key learning objectives of this article:
- Understand OpenAI’s function calling tools
- Master the implementation of OpenAI’s Assistants API for customized chatbot applications
- Implement function calling and the Assistants API with a real-world example
Table of Contents:
- Overview of OpenAI function calling and assistant API
- Setting up an environment
- Function calling in chat completion API
- Integration of function calling with assistant API
- Conclusion
- FAQs
Overview of OpenAI function calling and assistant API
OpenAI has developed text-generation models like GPT-3.5 and GPT-4 that can be used for conversational tasks to answer various user queries. Conversational tools like ChatGPT can answer general questions based on publicly available data, but they fail to answer queries that depend on private databases or third-party APIs.
This is why OpenAI developed function calling tools and the Assistants API to make conversational assistants more flexible. These tools extend the power of ChatGPT by retrieving data from private sources or third-party APIs, making it more tailored to the task at hand, such as an e-commerce bot that summarises certain order details and sends notifications to the user.
Function calling
In an API call, you describe one or more functions to the model. Based on the user query, the model generates a JSON object containing the name of the function to call and its arguments. Your code then calls the function with those arguments, and the function's response is appended to the chat messages so the model can synthesize the final output. It is important to note that the model does not call the function directly; it only generates the JSON object that tells your code which function to call and with what arguments.
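Concretely, when the model decides a function should be called, its reply contains no text content; instead it carries a function_call object whose arguments field is a JSON-encoded string. A rough sketch of that shape (using the order-details function we build later in this article) looks like this:
# rough shape of the assistant message when the model opts to call a function
# note that "arguments" is a JSON string, not a parsed object
{
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_order_details",
        "arguments": '{"order_id": 4}'
    }
}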
Assistants API
Unlike the Chat Completions API, the Assistants API allows developers to leverage tools such as the code interpreter, retrieval, and function calls to third-party APIs to build AI applications. Such applications can respond to user queries and return answers using the integrated functions to complete tasks. Common use cases of the Assistants API are as follows:
- Create assistants that answer questions by calling external APIs
- Convert natural language into API calls to any third-party source
- Extract structured data from text
- Generate code to create data visualizations
Now, we will look at the practical implementation of function calling and assistant API to build custom AI applications.
Setting up an environment
First, we set up an environment with the required Python package to run the various OpenAI commands and functions.
1. Installing the OpenAI Python package:
To install the openai package, run the following command in your Jupyter notebook environment. You can use any notebook environment, such as Google Colab or Kaggle Notebooks.
pip install openai -q
2. Initializing the OpenAI Client:
Next, initialize the OpenAI client in your Python script. This client will facilitate your interactions with OpenAI’s services. Here’s a sample initialization:
from openai import OpenAI
client = OpenAI(
api_key="your_api_key", # Replace with your actual OpenAI API key
)
Here, don’t forget to replace “your_api_key” with the actual API key that you receive from your OpenAI account.
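Alternatively, a common practice (not required for this article) is to keep the key out of your code by setting the OPENAI_API_KEY environment variable, which the client reads automatically when no api_key argument is passed:
import os
from openai import OpenAI

# assumes OPENAI_API_KEY is already set in your shell or notebook environment,
# e.g. os.environ["OPENAI_API_KEY"] = "your_api_key" (better: set it outside the code)
client = OpenAI()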
Function calling in chat completion API
The Chat Completions API offers the capability to call intermediate functions for enhanced conversations with users. In this section, we’ll set up a function that our chatbot can call to retrieve order details, and then integrate it with the Chat Completions API.
Defining get_order_details function
To fetch custom data, we need to define a function that retrieves data from an API, whose output can then be used to synthesize the response to the user’s query. The function below makes an API request to retrieve order details (replace the endpoint URL with your own):
import requests

def get_order_details(order_id):
    # Replace with your actual endpoint
    url = "http://your_api_endpoint/order_info"
    params = {'order_id': order_id}
    response = requests.post(url, params=params)
    if response.status_code == 200:
        return response.json()['Result'][0]
    else:
        return f"Error: Unable to fetch order details. Status code: {response.status_code}"
# print the example order details for order_id=4
order_details = get_order_details(order_id=4)
print(order_details)
{
'order_id': 4,
'total_amount': 299.99,
'delivery_status': 'Shipped',
'current_location': 'Warehouse B',
'expected_delivery_date': '2023-01-18',
'customer_name': 'Alice Brown',
'product_id': 4,
'product_name': 'Tablet'
}
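If you don’t have an order API endpoint to test against, a hypothetical stub that simply returns the sample record above can stand in for get_order_details while you follow along (the data is the example output shown earlier, not a real service):
# hypothetical stand-in for get_order_details when no real endpoint is available
def get_order_details(order_id):
    sample_orders = {
        4: {
            'order_id': 4,
            'total_amount': 299.99,
            'delivery_status': 'Shipped',
            'current_location': 'Warehouse B',
            'expected_delivery_date': '2023-01-18',
            'customer_name': 'Alice Brown',
            'product_id': 4,
            'product_name': 'Tablet',
        }
    }
    return sample_orders.get(order_id, f"Error: no order found with id {order_id}")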
Describe functions
To integrate this function with the Chat Completions API, we need to provide a description of the function that the model can understand, so it can generate suitable output for retrieving information from the API. We create two dictionaries: `available_functions` and `functions`. The former maps function names to their corresponding Python functions, and the latter describes the functions for the API.
available_functions = {
    "get_order_details": get_order_details,
}

functions = [
    {
        "name": "get_order_details",
        "description": "Retrieves the details of an order given its order ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "integer", "description": "The order ID."}
            },
            "required": ["order_id"],
        },
    }
]
Implementing the get_gpt_response function
We implement the `get_gpt_response` function, which calls the Chat Completions API and returns the AI-generated response. This function also specifies that the model may call our `get_order_details` function if needed.
def get_gpt_response(messages):
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        functions=functions,
        function_call="auto",
    )
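To try it, build a messages list containing the user’s query and pass it to the function; the model then decides whether get_order_details should be called (the query below mirrors the example used later in this article):
messages = [
    {"role": "user", "content": "what is my order status of order id 4"}
]
response = get_gpt_response(messages)
print(response.choices[0].message)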
# response from get_gpt_response
ChatCompletionMessage(content=None,
                      role='assistant',
                      function_call=FunctionCall(arguments='{"order_id":4}',
                                                 name='get_order_details'),
                      tool_calls=None)
With the above response, we get the function name and the order ID extracted from the input query, which can then be used to retrieve the order details. Next, we define a function that executes the function call and returns the response based on the input function name and arguments.
Executing the Function Call
At this stage, we extract the function, execute it, and generate the response that will be fed back to the model for the final GPT response.
import json

def execute_function_call(function_name, arguments):
    # look up the function by name and call it with the parsed arguments
    function = available_functions.get(function_name, None)
    if function:
        arguments = json.loads(arguments)
        results = function(**arguments)
    else:
        results = f"Error: function {function_name} does not exist"
    return results
- Extract Function Details: The chatbot first extracts the function name and arguments from the `function_call` attribute in the API’s response.
- Execute Function: Using the extracted details, the chatbot calls the appropriate function. In our case, it’s the `get_order_details` function with the provided `order_id`.
- Function Response: The `execute_function_call` function executes the specified function with the given arguments and returns the result. If there’s an error, it returns an error message.
function_name = response.choices[0].message.function_call.name
arguments = response.choices[0].message.function_call.arguments
# arguments is a JSON string; execute_function_call parses it with json.loads internally
function_response = execute_function_call(function_name, arguments)
Submitting function response and generating final GPT response
Once the function response is generated, we need to submit it back to the Chat Completions API to continue the conversation, so that the function output is incorporated into the next message.
# extend conversation with assistant's reply
messages.append(response.choices[0].message)
messages.append(
{
"role": "function",
"name": function_name,
"content": str(function_response),
}
)
messages
# here's the result of the messages after appending the function response
[
{'role': 'user', 'content': 'what is my order status of order id 4'},
ChatCompletionMessage(content=None, role='assistant', function_call=FunctionCall(arguments='{"order_id":4}', name='get_order_details'), tool_calls=None),
{'role': 'function',
'name': 'get_order_details',
'content': "{'order_id': 4, 'total_amount': 299.99, 'delivery_status': 'Shipped', 'current_location': 'Warehouse B', 'expected_delivery_date': '2023-01-18', 'customer_name': 'Alice Brown', 'product_id': 4, 'product_name': 'Tablet'}"}
]
With the updated messages array, the chatbot then calls the `get_gpt_response` function again. This time, the API uses the entire conversation context, including the function response, to generate its next message. The final GPT output provides a synthesized, human-readable answer to the user:
# get the gpt response
response = get_gpt_response(messages)
print(response.choices[0].message.content)
"The order with ID 4 is for a Tablet with a total amount of $299.99. The order has been shipped and is currently at Warehouse B. The expected delivery date is 2023-01-18. The customer's name is Alice Brown."
This is how the chatbot generates user-friendly text from the function output using GPT-3.5 when you ask the AI bot a query. By effectively managing the chatbot conversation, one can build a custom AI chatbot using OpenAI’s function calling tools.
Integration of function calling with assistant API
After learning about function calling, we will now see how the Assistants API can help build customized applications that leverage various tools to accomplish specific tasks.
Defining tools for assistant API
To build an assistant, we need to define tools, which is similar to describing functions for function calling. The assistant uses these tools to retrieve relevant context and appends it to the conversation for further interaction with the user.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_details",
            "description": "Retrieves the details of an order given its order ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "integer",
                        "description": "The unique identifier of the order."
                    }
                },
                "required": ["order_id"]
            }
        }
    }
]
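The beta Assistants API also supports built-in tools such as the code interpreter and retrieval. If you wanted to combine one of them with the custom function above, a hypothetical tools list might look like this (we stick to the single function tool for the rest of this article):
# hypothetical variant: a built-in tool alongside our custom function tool
tools_with_code_interpreter = [
    {"type": "code_interpreter"},  # built-in tool executed by OpenAI
    tools[0],                      # the get_order_details function tool defined above
]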
Creating an assistant
You can create an assistant by providing its name, instructions, model, and the tools it has access to. The instructions are crucial as they guide the assistant on how to use the provided tools and generate answers for the user’s query.
assistant = client.beta.assistants.create(
    name="Ecommerce bot",
    instructions="You are an ecommerce bot. Use the provided functions to answer questions. Synthesise answers based on the provided function output and be concise.",
    model="gpt-4-1106-preview",
    tools=tools
)
Create a message and run the assistant thread
To build applications, we need to set up a thread so that the Assistants API can manage messages and runs. A run executes the assistant on a thread and determines which tools to use for the user query.
# create a thread, add the user message, and start a run on it
def create_message_and_run(assistant, query, thread=None):
    if not thread:
        thread = client.beta.threads.create()
    message = client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content=query
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id
    )
    return run, thread
query = "I want to know my order status"
run,thread = create_message_and_run(assistant=assistant,query=query)
Process function call requirements
When a user’s message triggers a function, the run enters a `requires_action` state, indicating that a function needs to be called:
def get_function_details(run):
    print("\nrun.required_action\n", run.required_action)
    function_name = run.required_action.submit_tool_outputs.tool_calls[0].function.name
    arguments = run.required_action.submit_tool_outputs.tool_calls[0].function.arguments
    function_id = run.required_action.submit_tool_outputs.tool_calls[0].id
    print(f"function_name: {function_name} and arguments: {arguments}")
    return function_name, arguments, function_id
- Check Run Status: Regularly check the run’s status (see the polling sketch below). When it is `requires_action`, the Assistant needs a function to be executed.
- Retrieve Required Action: Fetch the required action details to identify which function to call and its arguments.
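A minimal polling sketch for this check might look like the following (the helper name wait_for_run and the one-second polling interval are our own choices, not part of the API):
import time

def wait_for_run(run, thread):
    # poll the run until it needs a tool call or reaches a terminal state
    while run.status not in ("requires_action", "completed", "failed", "cancelled", "expired"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    return run

Once the run requires action, we extract the function call with get_function_details, execute it, and submit the result back to the run using the helper below: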
def submit_tool_outputs(run, thread, function_id, function_response):
    run = client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread.id,
        run_id=run.id,
        tool_outputs=[
            {
                "tool_call_id": function_id,
                "output": str(function_response),
            }
        ]
    )
    return run
Finally, once you generate the function response, submit its output back to the assistant thread so that it can produce the final answer to the user’s query.
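Putting the pieces together, here is a rough end-to-end sketch that reuses wait_for_run, get_function_details, execute_function_call, and submit_tool_outputs from above (the exact flow may vary with your application, and the final-message retrieval assumes the standard beta threads/messages endpoints):
# run the assistant, execute any requested function, and print the final answer
run, thread = create_message_and_run(assistant=assistant, query="what is my order status of order id 4")

run = wait_for_run(run, thread)
if run.status == "requires_action":
    function_name, arguments, function_id = get_function_details(run)
    function_response = execute_function_call(function_name, arguments)
    run = submit_tool_outputs(run, thread, function_id, function_response)
    run = wait_for_run(run, thread)

# the assistant's final reply is the most recent message on the thread
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)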
Conclusion
In conclusion, we learned about the use case of OpenAI function calling and assistant API to build customized chatbots for a wide range of applications in healthcare, finance, e-governance, etc. The Chat Completions and Assistant APIs offer not just tools, but gateways to creating more intuitive, responsive, and intelligent chatbot experiences. By mastering these functionalities, developers can build many user-centric applications.
Key takeaways:
- OpenAI’s capability to develop custom AI applications
- Function calling use case in developing assistant
- Assistant API to build customized chatbots using third-party data sources
References:
- https://platform.openai.com/docs/assistants/overview
- https://platform.openai.com/docs/guides/function-calling
- https://blog.futuresmart.ai/openai-function-calling-explained-chat-completions-assistants-api
FAQs
Q1: How does OpenAI function calling work?
A: OpenAI’s function calling works as an intermediary stage that retrieves data from third-party sources, which generative AI models then use to respond to the user’s query in natural language.
Q2: What is the role of the assistant in OpenAI API?
A: Unlike the Chat Completions API, an assistant can connect with other tools, functions, and third-party APIs to converse with the user and respond in natural language with user-friendly, detailed answers.
Q3: What is the difference between chat completion API and assistant API?
A: The Chat Completions API is used for generic conversation, with information limited to the model’s training data, while the Assistants API allows you to connect with third-party data sources, making it possible to build custom AI applications.