Using OpenAI API with GPT-3.5-turbo in Python

Çağlar Laledemir
2 min read · Oct 12, 2023


Photo by Levart_Photographer on Unsplash

In this post, we’ll delve into a Python code snippet that demonstrates how to create a basic chatbot using the GPT-3.5-turbo API. Let’s break down the code, step by step, to understand how to seamlessly integrate this sophisticated language model into your applications.

Setting Up OpenAI GPT-3.5-turbo:

The initial lines of code set up the OpenAI environment by providing your organization ID and API key. Replace “YOUR_ORG_ID” and “YOUR_API_KEY” with your actual OpenAI credentials. This ensures secure, authorized access to the GPT-3.5-turbo API.

You can use the API reference at https://platform.openai.com/docs/api-reference/introduction to find your organization ID and create your own API key.

import openai

openai.organization = "YOUR_ORG_ID"
openai.api_key = "YOUR_API_KEY"

# The chat model we will use throughout this post.
model_name = "gpt-3.5-turbo"
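Hardcoding credentials is convenient for a demo, but a safer habit is to read them from environment variables. Here is a minimal sketch: OPENAI_API_KEY is the variable the openai library itself looks for, while OPENAI_ORG_ID is just a name chosen for this example.

```python
import os

# Read credentials from the environment instead of hardcoding them.
# OPENAI_API_KEY is the conventional variable; OPENAI_ORG_ID is our own choice.
api_key = os.environ.get("OPENAI_API_KEY", "")
org_id = os.environ.get("OPENAI_ORG_ID", "")

if not api_key:
    print("Warning: OPENAI_API_KEY is not set")
```

Set the variables in your shell (e.g. `export OPENAI_API_KEY=...`) before running the script, and the keys never end up in your source code or version control.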

The Main Interaction Loop:

The main function initiates the core interaction loop for the chatbot. Users can input text, and the chatbot responds accordingly. The loop continues until the user types "quit," allowing for a seamless and intuitive conversational experience.

def main():
    """
    Main interaction loop for the chatbot.
    """
    print("Welcome to Chatbot! Type 'quit' to exit.")

    user_input = ""
    while user_input.lower() != "quit":
        user_input = input("You: ")

        if user_input.lower() != "quit":
            response = chat_with_openai(user_input)  # Pass user_input as an argument
            print(f"Chatbot: {response}")

Interacting with OpenAI API:

The heart of the chatbot lies in the chat_with_openai function. This function interfaces with the OpenAI API, sending user prompts and receiving responses. The OpenAI API uses a chat-based approach, where a series of messages are exchanged. The function formats the user's input, sends it to the API, and extracts the chatbot's response for display.

def chat_with_openai(prompt):
    """
    Sends the prompt to the OpenAI API using the chat interface
    and returns the model's response.
    """
    message = {
        'role': 'user',
        'content': prompt
    }

    response = openai.chat.completions.create(
        model=model_name,
        messages=[message]
    )

    # Extract the chatbot's reply from the first (and here, only) choice.
    chatbot_response = response.choices[0].message.content
    return chatbot_response.strip()


if __name__ == "__main__":
    main()

This code serves as a foundational framework for a basic chatbot. Depending on your application, you can enhance the functionality by incorporating user context, handling multiple turns, or implementing specific use-case scenarios.
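For instance, multi-turn context can be added by accumulating every exchange in one message list and sending the whole list on each request, so the model sees earlier turns. A minimal sketch (the roles follow the chat API's system/user/assistant convention; `record_turn` is a helper name chosen for this example):

```python
# Keep the full conversation in one list and pass it as `messages`
# on each API call so the model sees prior turns.
history = [{'role': 'system', 'content': 'You are a helpful assistant.'}]

def record_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange to the shared history."""
    history.append({'role': 'user', 'content': user_text})
    history.append({'role': 'assistant', 'content': assistant_text})
    return history

# Inside chat_with_openai you would send messages=history instead of
# [message], then record the exchange after each reply:
record_turn(history, "Hi!", "Hello! How can I help?")
```

Note that the history grows with every turn, so a production chatbot would also trim or summarize old messages to stay within the model's context window.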

Explore various models beyond GPT-3.5 Turbo by visiting the OpenAI models link at https://platform.openai.com/docs/models/overview. Each model comes with distinctive features, providing opportunities for a wide range of applications.

Find the complete code on my GitHub profile: https://github.com/caglarldemir/openai_gpt-3.5-turbo_API_usage.

Delighted to share this with you! 🌟 I hope you enjoy reading it — happy coding! 💻🚀 #TechEnthusiasts #HappyCoding #ShareTheKnowledge


Çağlar Laledemir

Big Data & AI | Data Science Lead | Machine Learning & Analytics