Explain Your Python Exceptions with OpenAI
How to build an AI assistant to help you troubleshoot your Python applications
The prefect-openai collection simplifies the integration of OpenAI into your workflows, combining Prefect's logging with OpenAI's interpretative capabilities.
Application
When your code crashes, it would be nice to have troubleshooting help, such as from OpenAI!
If you are unfamiliar with HTTP status codes, OpenAI can help provide additional context. This might help you understand the issue better and find a solution quicker!
For example, with the following exception as the input:
HTTPStatusError: Client error '403 FORBIDDEN' for url 'https://httpbin.org/status/403' For more information check: https://httpstatuses.com/403
OpenAI returned the following interpretation:
The 403 forbidden error means that the user trying to visit the URL is not authorized to do so. This could mean that the user is not logged in, or that the requested content is not available. Checking the URL for more information can help resolve the issue.
That's much more digestible!
Usage
To achieve this functionality, you can add the following two lines to your existing code:
from prefect_openai.completion import interpret_exception
@interpret_exception("COMPLETION_MODEL_BLOCK_NAME")
Here's what that looks like in a complete example:
import httpx
from prefect import flow
from prefect_openai.completion import interpret_exception
@flow
@interpret_exception("curie")
def example_func():
resp = httpx.get("https://httpbin.org/status/403")
resp.raise_for_status()
example_func()
This decorator works not only on flows, but also on tasks and even plain Python functions!
If you’re using Prefect Cloud or Orion, the interpretation is available within the Prefect UI, as well:
Setup
Before you can use the interpret_exception decorator, the OpenAICredentials and CompletionModel blocks must be created first.
But don’t worry — it’s straightforward.
First, create your OpenAI API key from your OpenAI account page.
Then copy, paste, and replace the placeholders below.
from prefect_openai import OpenAICredentials, CompletionModel
openai_credentials = OpenAICredentials(api_key="API_KEY_PLACEHOLDER")
completion_model = CompletionModel(
openai_credentials=openai_credentials,
model="text-curie-001",
max_tokens=1000,
temperature=0.5
)
openai_credentials.save("CREDENTIALS_BLOCK_NAME_PLACEHOLDER")
completion_model.save("COMPLETION_MODEL_BLOCK_NAME_PLACEHOLDER")
That's it! The "COMPLETION_MODEL_BLOCK_NAME_PLACEHOLDER" block can now be used in interpret_exception!
Alternatively, you can create the blocks directly from the Prefect UI as well:
Customization
Tweaking the model’s output is part art, part science.
If you’re interested in using a different prefix for the prompt or including part of the traceback for additional context to the model, you may use the following keyword arguments:
prompt_prefix: the prefix to include in the prompt ahead of the traceback and exception message.
traceback_tail: the number of lines of the original traceback to include in the prompt to OpenAI, starting from the tail. If 0, only include the exception message in the prompt. Note this configuration can be costly in terms of tokens, so be sure to set this and the max_tokens in CompletionModel appropriately.
@interpret_exception(
"COMPLETION_MODEL_BLOCK_NAME_PLACEHOLDER",
prompt_prefix="Offer a solution:",
traceback_tail=1,
)
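To make the traceback_tail option more concrete, here's a hedged sketch using only the standard library. The helper tail_of_traceback is hypothetical (it is not part of prefect-openai); it just illustrates what keeping only the last N lines of a traceback looks like, and why fewer lines means fewer tokens in the prompt:

```python
import traceback


def tail_of_traceback(exc: BaseException, tail: int) -> str:
    """Return only the last `tail` lines of a formatted traceback.

    Hypothetical helper for illustration only, not prefect-openai's
    actual implementation.
    """
    formatted = traceback.format_exception(type(exc), exc, exc.__traceback__)
    # Entries from format_exception may contain embedded newlines, so
    # flatten them into individual lines first.
    lines = "".join(formatted).splitlines()
    if tail <= 0:
        # Analogous to traceback_tail=0: keep only the exception message.
        return lines[-1]
    return "\n".join(lines[-tail:])


try:
    1 / 0
except ZeroDivisionError as exc:
    last_line = tail_of_traceback(exc, 1)

print(last_line)  # only the final line of the traceback
```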
Internals
For those interested in how the magic happens, here's what interpret_exception is doing behind the scenes:
- captures the exception message
- adds a prefix, “Explain:”
- submits it as a prompt to a text completion model
- re-raises the exception with the response
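The steps above can be sketched roughly as follows. This is a minimal stand-in, not the library's actual source: fake_completion replaces the real call to OpenAI, and interpret_exception_sketch is our own name for the illustration:

```python
import functools


def fake_completion(prompt: str) -> str:
    """Stand-in for the text completion model call (the real decorator
    would go through the configured CompletionModel block)."""
    return f"Interpretation of {prompt!r}"


def interpret_exception_sketch(func):
    """A rough sketch of what interpret_exception does behind the scenes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            # 1. capture the exception message
            message = str(exc)
            # 2. add a prefix, "Explain:"
            prompt = f"Explain: {message}"
            # 3. submit it as a prompt to a text completion model
            interpretation = fake_completion(prompt)
            # 4. re-raise the exception with the response attached
            #    (assumes the exception type accepts a single string argument)
            raise type(exc)(f"{message}\n\nOpenAI: {interpretation}") from exc
    return wrapper


@interpret_exception_sketch
def failing():
    raise ValueError("403 FORBIDDEN")


try:
    failing()
except ValueError as exc:
    result_message = str(exc)

print(result_message)
```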
The source code is available within the prefect-openai collection.
Resources
Besides interpreting exceptions, prefect-openai supports arbitrary text completion, as well as image generation; that's how the cover image was generated!
If you found those use cases interesting, make sure to check out the prefect-openai docs and star the collection repository on GitHub!