Retrieving Information Using OpenAI Embeddings and Assistant
Hi Everyone!
In my previous post, I walked you through how to generate embeddings and store them in PostgreSQL using OpenAI’s API. Now, let’s shift our focus to how you can use those embeddings to retrieve valuable information in a more dynamic and conversational way, leveraging an assistant created through OpenAI’s API.
This post will guide you through retrieving artist information by running queries, generating embeddings from those queries, and using the assistant to provide responses. Whether you want to know about artists in specific locations or just explore the data further, this is how you can do it.
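Before diving in, note that the snippets below assume an OpenAI client, the ID of the assistant from my earlier post, and the PostgreSQL connection details are already defined. Here is a minimal setup sketch; every value in it is a placeholder you would swap for your own:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

ASSISTANT_ID = "asst_..."  # placeholder: the ID of the assistant created earlier

# Placeholder connection details for the database that holds the artist embeddings
db_params = {
    "host": "localhost",
    "dbname": "artists",
    "user": "postgres",
    "password": "your-password",
}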
---
Step 1: Setting Up Your Query
Let’s say you want to retrieve information about artists working in Sydney. This starts with a natural language query that we’ll convert into an embedding, which the assistant will then process to retrieve relevant results.
Here’s how you would set up the query and generate its embedding using the text-embedding-3-small model:
def get_query_embedding(query_text):
    response = client.embeddings.create(
        input=query_text,
        model="text-embedding-3-small"
    )
    return response.data[0].embedding
For our query, let’s use:
query_text = "Give me a list of artists working in Sydney"
query_embedding = get_query_embedding(query_text)
This function generates the embedding for your query text using OpenAI’s API. If you recall, an embedding is a numerical vector that represents the meaning of the query, allowing the assistant to understand it and retrieve relevant data.
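To give you a feel for what happens with that vector: if the artist embeddings were stored with pgvector, as in the previous post, a similarity lookup could look roughly like the sketch below. The table and column names (artists, name, location, embedding) are assumptions for illustration, not the exact schema.
import psycopg2

def find_similar_artists(query_embedding, db_params, limit=5):
    # Compare the query embedding against the stored artist embeddings
    # using pgvector's cosine-distance operator (<=>); smaller is more similar.
    vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with psycopg2.connect(**db_params) as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT name, location
                FROM artists
                ORDER BY embedding <=> %s::vector
                LIMIT %s
                """,
                (vector_literal, limit),
            )
            return cur.fetchall()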
---
Step 2: Interacting with the Assistant
Next, we’ll use the assistant I created earlier to handle the query and retrieve the results. How to create such an assistant is covered in my article on exploring OpenAI’s Beta API. The assistant will be tasked with processing the query and matching it against the stored data using embeddings.
Here’s how to start the interaction:
assistant = client.beta.assistants.retrieve(ASSISTANT_ID)
thread = client.beta.threads.create()
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=query_text
)
This creates a thread for the assistant to handle the conversation and adds a message to it, simulating a natural interaction between you and the assistant.
---
Step 3: Running the Assistant
Now the assistant is ready to process the query. Using the API’s runs.create_and_poll method, we can start a run and poll until the assistant’s response is ready. The assistant will handle any required actions, such as calling tools or using embeddings to retrieve data.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id
)
If the assistant requires further input or action (like tool outputs), we handle that using the following function:
def handle_tool_outputs(run, thread_id):
    # Collect the tool calls the assistant asked for, run them, and submit the results back.
    tool_calls = run.required_action.submit_tool_outputs.tool_calls
    tool_outputs = artist_tools.get_tool_outputs(tool_calls, db_params)
    if tool_outputs:
        run = client.beta.threads.runs.submit_tool_outputs_and_poll(
            thread_id=thread_id,
            run_id=run.id,
            tool_outputs=tool_outputs
        )
    return run
This lets the assistant work with external tools, such as querying PostgreSQL for the artist data stored alongside the embeddings.
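The artist_tools.get_tool_outputs helper is specific to my project, but the general shape of such a dispatcher is simple: read each tool call’s arguments, run the matching lookup, and return the results keyed by tool_call_id. Here is a hedged sketch; the tool name search_artists is an assumption, and it reuses the find_similar_artists sketch from Step 1:
import json

def get_tool_outputs(tool_calls, db_params):
    # Run each tool call the assistant requested and collect its output.
    tool_outputs = []
    for tool_call in tool_calls:
        if tool_call.function.name == "search_artists":  # assumed tool name
            args = json.loads(tool_call.function.arguments)
            query_embedding = get_query_embedding(args["query"])
            results = find_similar_artists(query_embedding, db_params)
            tool_outputs.append({
                "tool_call_id": tool_call.id,
                "output": json.dumps(results),
            })
    return tool_outputs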
---
Step 4: Retrieving and Displaying the Response
Once the assistant has completed its processing, we fetch the response and display it. If the run requires further action along the way, we keep handling tool outputs in a loop until every necessary tool has executed and the data has been retrieved (a sketch of that loop follows the function below):
def get_response(thread_id, run_id):
    messages = client.beta.threads.messages.list(thread_id=thread_id, run_id=run_id)
    message_content = messages.data[0].content[0].text
    return message_content.value
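To tie this together: a run that needs tools comes back with the status requires_action, possibly more than once. A small loop keeps handing tool outputs back until the run reaches a terminal state (a sketch built on the functions above):
# Keep handling tool calls until the run finishes.
while run.status == "requires_action":
    run = handle_tool_outputs(run, thread.id)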
Finally, we print the response:
if run.status == "completed":
    response_message = get_response(thread.id, run.id)
    print(f"Assistant's response: {response_message}")
---
Conclusion
And that’s how you can retrieve information from a PostgreSQL database using embeddings and an assistant. By combining OpenAI’s embedding models and assistant functionality, you can handle complex queries in a natural, conversational way.
Whether you’re looking for specific artist details or exploring broader trends, this method allows you to interact with your data in a smart, dynamic manner. The assistant handles the hard work, making your interactions more intuitive and efficient.
Thank you for reading, and stay tuned for more posts on making the most of OpenAI’s API.
Keep exploring!