OpenAI’s GPT Function Calling. Welcome to January 2023!

Published in Poatek · 10 min read · Jun 21, 2023

Making up for lost time

On June 13th, 2023, OpenAI announced another "groundbreaking" feature. This time we are talking about "Function calling and other API updates", which brings function calling, updated GPT models, lower usage prices, and a longer context window for ChatGPT.

As you may have noticed, the title says welcome to January 2023. If you didn't guess why, let me explain in quick steps:

- ChatGPT was released in November 2022 (cue hysterical noises about AI from society ever since)
- LangChain, a framework for working with language models, was released shortly before ChatGPT
- LangChain ships with a lot of functionality: chains, API integrations, and more
- LangChain already allowed you to use GPT to make API calls and perform actions
- OpenAI released this capability only in June, when the community had already built many deeper integrations
- This release is actually good, just a bit late to the party

IMO, OpenAI is really trying to catch up with the community and industry and make up for lost time, but that isn't necessarily a bad thing. A company of around 500 people released software that reached 100 million users in just 2 months, a milestone never achieved before; TikTok took 9 months to reach the same number. Nobody could have foreseen this, not even the company itself.

And as expected, with so much hype and attention focused on this incredible new tool, it is obvious that a lot of people would try to be first in the new "gold rush", and as we can see, OpenAI fell way behind in its own race. They needed almost 6 months to release an app, and they released it only for iPhones, missing some functionality that the web app has…

Not only that: they created a tool that is being used worldwide to power many other tools, and the result was better than expected. From a business point of view, though, it would have been far more profitable to release the tool with some of the capabilities that are only being released now. This is a move I think we would never see from a big company like Apple or maybe Tesla; their modus operandi is to release something and lock users in, preventing it from being used without their permission.

But again, this is not a bad thing; it resulted in visibility and a good amount of money in OpenAI's pockets. For a 500-employee company, it truly feels like they are racing against the clock and working weekends to get ahead of the industry (again...).

Highlights (TL;DR):

  • gpt-3.5-turbo and gpt-4 got even faster and more "steerable"
    - gpt-3.5-turbo now has a 16k-token context window, 4 times bigger than the previous version
  • 75% cost reduction on the state-of-the-art embeddings model: text-embedding-ada-002 now costs $0.0001 per 1k tokens (basically free)
  • 25% cost reduction on INPUT tokens for gpt-3.5-turbo
  • New versions of both models, with the -0613 suffix at the end of the model name, support function calling
  • Previous models (gpt-3.5-turbo-0301 and gpt-4-0314) are now scheduled for deprecation

Function Calling:

Okay, function calling. This may be a handy update for those not so deeply aware of the current state of LLM development. For quite some time now, we have been able to use GPT-4 with agents, or with any tool provided by a framework, to perform actions based on natural language input (voice or text).

This is a huge deal. For example, by using the SQL tool provided by LangChain with the GPT-4 model, you could just write (or say) something like "show me the percentage of X based on Y" and, like magic, the model would analyze the table by itself, understand what you asked for, and return the answer; as a bonus, this would be far faster than writing an equivalent Python function yourself.
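To make that concrete, here is a minimal sketch of that LangChain pattern as it looked around mid-2023 (the chinook.db SQLite file and the question are placeholder assumptions; later LangChain versions moved SQLDatabaseChain into langchain_experimental):

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

# point the chain at any SQL database (the file path is a placeholder)
db = SQLDatabase.from_uri("sqlite:///chinook.db")
llm = OpenAI(temperature=0)

# the chain writes the SQL, executes it, and phrases the answer
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("What percentage of invoices come from Brazil?")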

But okay, forget about frameworks. Now GPT does it out of the box. You just need to prepare a bit before using it, but it is simple, and I'll show you how. Maybe this will help you build that thing you've been wanting to build?!

Example n1 — Simplest Function Calling Implementation:

Basically, we will implement a Function that returns the time for a specific location. WITHOUT INTERNET!!!

First things first, let's prepare our Python notebook to work properly. We will install the openai package and import a few others.

!pip install "openai<1"  # this post uses the pre-1.0 openai SDK (the ChatCompletion API)

We will set OPENAI_API_KEY as an environment variable to be used throughout the notebook, and import json to handle responses.

import os
import json

os.environ["OPENAI_API_KEY"] = "YOUR KEY HERE"

Now, I will create a function that calculates a time difference. The cool thing about this one is that we don't need to call an API or make any request to get the right answer and return data. This shows that we can implement a lot of different local functions and use the power of GPT to do even more!

We import pytz, the Python time zones package, to handle the timezone calculations.

The code is commented, which helps to understand what it’s doing, but I’ll briefly explain here:

  1. We provide two parameters: local_time, which you can probably guess, and location, which is the place whose local time we want to calculate.
  2. Then we parse local_time (which comes as a string from GPT) into a datetime
  3. Get the location's timezone
  4. Do some incredible calculations
  5. Return a JSON string saying whether the time at the location is ahead of or behind the user's time.

from datetime import datetime
import pytz

def calculate_time_difference(local_time, location):
    # Parse the user's local time (GPT sends it as a string)
    local_time = datetime.strptime(local_time, '%Y-%m-%dT%H:%M:%S')

    # Get current time in the provided location
    location_time = datetime.now(pytz.timezone(location))

    # Extract just the time part
    local_time_only = local_time.time()
    location_time_only = location_time.time()

    # Calculate the difference in hours and minutes
    # (note: this compares times of day only, so it ignores date rollover across midnight)
    time_difference_in_minutes = ((location_time_only.hour * 60 + location_time_only.minute) -
                                  (local_time_only.hour * 60 + local_time_only.minute))

    hours, minutes = divmod(abs(time_difference_in_minutes), 60)

    # Determine if the location time is ahead of or behind local time
    if time_difference_in_minutes < 0:
        time_difference_str = f'{hours} hours, {minutes} minutes behind local time'
    else:
        time_difference_str = f'{hours} hours, {minutes} minutes ahead of local time'

    # Format the current time in the provided location
    location_time_str = location_time.strftime('%H:%M:%S')

    time_response = {
        "location_time": location_time_str,
        "time_difference": time_difference_str
    }

    return json.dumps(time_response)

And to test it:

# Usage
local_time_str = datetime.now().strftime('%Y-%m-%dT%H:%M:%S')  # get current local time
time_response_json = calculate_time_difference(local_time_str, 'US/Pacific')  # replace 'US/Pacific' with your desired location
time_response = json.loads(time_response_json)

print(f"Time at location: {time_response['location_time']}")
print(f"Time difference: {time_response['time_difference']}")

The response is:

Time at location: 15:26:32 
Time difference: 7 hours, 0 minutes behind local time

Here we start to see how useful GPT is when working with functions! If you paid attention to the usage of this function, you might have noticed that the location needs to be in a specific format like 'US/Pacific'; otherwise, pytz will not be able to resolve which timezone we want information about.
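You can see that constraint in action with a small check against pytz's own list of valid names:

import pytz

print('US/Pacific' in pytz.all_timezones)  # True: a valid pytz/IANA timezone name

try:
    pytz.timezone('Boston')                # a bare city name is not a timezone
except pytz.UnknownTimeZoneError as err:
    print(f"Unknown timezone: {err}")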

Now look at the conversation with GPT and the message sent by the user. This conversation function comes from OpenAI's documentation and received some changes to work properly with our code:

import openai

local_time = datetime.now()

def conversation_with_time():
    # Step 1: send the conversation and available functions to GPT
    # (we include the user's local time in the message, since GPT has no way
    # of knowing it otherwise and would have to guess)
    messages = [{"role": "user",
                 "content": f"My local time is {local_time.strftime('%Y-%m-%dT%H:%M:%S')}. "
                            "What's the time right now in Boston?"}]
    functions = [
        {
            "name": "calculate_time_difference",  # updated the name of the function to be called
            "description": "Calculates the time difference for a given location",  # updated the description of the function to be called

            "parameters": {
                "type": "object",
                "properties": {  # updated the properties that need to be provided
                    "local_time": {
                        "type": "string",
                        "description": "Current local time of the user request"
                    },
                    "location": {
                        "type": "string",
                        "description": "The city, state or country, needs to be a timezone name like US/Pacific",
                    },
                },
                "required": ["local_time", "location"],  # updated the required parameters
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "calculate_time_difference": calculate_time_difference,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(  # updated this call
            local_time=function_args.get("local_time"),
            location=function_args.get("location")
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        return second_response


print(conversation_with_time())

And the response for this conversation is:

"message": { 
"role": "assistant",
"content": "The current time in Boston is 6:09 PM."
},

Did you see that? Did you see it?? The description of the "location" parameter says: "The city, state or country, needs to be a timezone name like US/Pacific", and the user just said "Boston". GPT knows Boston's timezone and automatically infers the information the function needs to work properly, returning the correct time for the requested location. And the function itself needs no internet connection, APIs, or related stuff: just a simple local function plus GPT's interpretation of natural language.
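For the curious, the intermediate assistant message that triggers Step 3 looks roughly like this (the argument values here are illustrative, not from a real run; note that arguments arrives as a JSON string, which is why we json.loads it):

"message": {
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "calculate_time_difference",
    "arguments": "{ \"local_time\": \"2023-06-21T15:26:32\", \"location\": \"US/Eastern\" }"
  }
},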

Now imagine how many different things you can build with simple functions, and how much more capability you can give to a powerful model like GPT-4.

Example n2 — Advanced Function Calling:

In this one, I'll do almost the same thing, but this time we will use a real API call to return data!

First, let’s install the pyowm package, which allows us to use the OpenWeatherMap API.
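In a notebook, that's a single line:

!pip install pyowm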

Note: for this one, you need an OpenWeatherMap API key. It is simple: just go to their website and create an account. Once you receive their email, confirm your account and wait a few minutes until your API key gets activated. (I can't understand why this isn't instantaneous, but okay….)

Again, the code is all commented to explain it step by step, but briefly:

  1. Import pyowm and provide the API key (get yours on the OpenWeatherMap website)
  2. Create an observation that receives our city_name parameter and retrieves live information
  3. Return the information we want, formatted as JSON

from pyowm import OWM

def get_live_weather(city_name):
    # Initialize the OWM client with your API key
    owm = OWM('your-api-key')  # replace 'your-api-key' with your actual API key

    # Create a weather manager
    manager = owm.weather_manager()

    # Get the current weather in the specified city
    observation = manager.weather_at_place(city_name)
    weather = observation.weather

    # Extract the temperature, humidity, and status
    temperature = weather.temperature('celsius')["temp"]
    humidity = weather.humidity
    status = weather.status

    weather_data = {
        "temperature": temperature,
        "humidity": humidity,
        "status": status
    }

    return json.dumps(weather_data)
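To try it out, here is a quick usage sketch (the city 'London,GB' is just an example; the sentence below is formatted from the JSON the function returns):

# Usage
city = 'London,GB'  # any "City,CountryCode" string accepted by OpenWeatherMap
data = json.loads(get_live_weather(city))

print(f"In {city}, the temperature is {data['temperature']}°C, "
      f"the humidity is {data['humidity']}%, and the weather status is {data['status']}.")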

And the response:

In London,GB, the temperature is 18.4°C, the humidity is 82%, and the weather status is Clouds.

The following snippet is a conversation function provided by OpenAI’s documentation; we just need to update some values to get it working as we want:

def conversation_with_weather():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Porto Alegre tomorrow?"}]  # updated the message to get data about this marvelous city
    functions = [
        {
            "name": "get_live_weather",  # updated the name of the function to be called
            "description": "Calls the OpenWeather API to get live data about the weather for a given location",  # updated the description of the function
            "parameters": {
                "type": "object",
                "properties": {  # updated the properties that need to be provided
                    "city_name": {
                        "type": "string",
                        "description": "The city provided, formatted for the API like Boston,MA",
                    },
                },
                "required": ["city_name"],  # updated the required information to work with the function
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_live_weather": get_live_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            city_name=function_args.get("city_name")
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        return second_response


print(conversation_with_weather())

Again, after some quick and simple tweaks, combined with the inference power of GPT, the user just asks for the weather in "...Porto Alegre tomorrow" and it handles the request easily: it produces the correct format for the API, waits for the response, and gives a natural language answer. Note that our function only returns live data, which is why the answer below talks about today rather than tomorrow:

"message": { 
"role": "assistant",
"content": "The weather in Porto Alegre today is clear with a temperature of 16.17 C and a humidity of 82%." }

As simple as that, we now have the missing piece of the puzzle to unlock the true power of this high-quality LLM. Again, this is not something new, but having it work out of the box, so easily performing actions, where you can say something naturally and the model understands it and acts on it when needed, is awesome. Not needing to learn a whole framework to create something brings peace to my mind…

This allows companies to embed GPT even deeper in their own internal or external tools, with implementations that can be done in minutes, not hours or days!

I hope you find this useful, and now, go create something awesome!

Check out this python notebook working on Google Colab

Image: Unsplash
Guilherme Zago
