Identifying FastAPI requests in logs

David Lorenzo · Published in Gradiant Talks · 5 min read · Mar 6, 2021

When developing and maintaining a web application built in FastAPI (or any other web framework), it is very useful to add logging records in the most relevant parts of our request handling code, so we can understand what happened while each request was being processed.

However, we must keep in mind that our application will probably handle multiple requests at the same time. This becomes a problem when we try to read our application logs for a certain request that might have failed: we end up with a bunch of mixed-up, unidentified records from many requests, unable to distinguish which ones belong to the request we are trying to debug.

1. Example without request identification

Let’s use the following example to show what the problem would be:

In this example, our root endpoint will randomly fail by raising a ZeroDivisionError. For now we will be using print() statements instead of an actual logger to keep the example simpler, but the idea would be the same. Each request will print the following (pseudo) log records:

  • “Request started”, right at the beginning, when the endpoint request handler (“read_root()”) starts running
  • “Dividing x / y …”, when the “divide()” function, called from the request handler, starts running
  • “Result is z”, when the “divide()” function ends successfully
  • “Request failed: division by zero”, if the request fails
  • “Request ended”, if the request processing ends

By running the server, and then performing 4 simultaneous requests using the following async client code…:

…we will see the following log records in our server output:
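One possible interleaving (the exact order varies between runs):

```
Request started
Request started
Dividing 10 / 0 ...
Request started
Dividing 10 / 1 ...
Request failed: division by zero
Request ended
Result is 10.0
Request ended
Dividing 10 / 1 ...
...
```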

As we can see, there is no way to identify which request failed and which succeeded. We just can’t follow the track of a single request throughout the logs.

2. Generating unique identifiers for each request & using context variables

The logical solution for this issue would be to append, to each log record, some sort of identifier for each request. I personally like to generate a UUID4 for this, which is a 36-character string made up of 32 random hexadecimal digits plus four hyphens. Each generated UUID4 is, for all practical purposes, unique.

However, this solution would also involve two more steps:

  • Storing the request’s unique identifier in a context variable. In a nutshell, context variables are variables identified by a key that we can set and read at any point of the workflow where they were created, without having to pass them as parameters to every function we call (as long as we stay within the same execution context).
  • Using a Middleware function, as featured by FastAPI. This function will run code before and after our request handlers, so we can use it to inject the Request ID for logging purposes.
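As a quick illustration of how a context variable behaves (the names here are purely illustrative):

```python
from contextvars import ContextVar

# A context variable defined at module (global) scope
request_id: ContextVar[str] = ContextVar("request_id", default="unknown")


def log(message: str) -> None:
    # Reads whatever value was set in the current context,
    # without receiving it as a parameter
    print(f"[{request_id.get()}] {message}")


def handle_request() -> None:
    log("doing some work")


request_id.set("abc-123")
handle_request()  # prints: [abc-123] doing some work
```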

The previous server code snippet would now look like this:

Notice the changes done here:

  • We define an HTTP middleware, as described in the FastAPI documentation. In a nutshell, it is an async function that takes two parameters, “request” and “call_next”. It must return the result of awaiting the “call_next” coroutine, which is the response generated by our request handler (in this case, “read_root()”).
  • Our middleware generates a UUID4 string, used as the Request ID for each new request, and stores it in the context variable that was defined in the global scope. Do not confuse the context variable defined in the global scope with the actual value of that variable, which is the unique Request ID that we set from within our middleware.
  • The try/except/finally block is now part of the middleware, so this logic is no longer implemented in our endpoints. From now on, the middleware will run for any call to any endpoint (even the /docs endpoint and non-existent routes).
  • Our pseudo-logging function (“debug()”) reads the Request ID set on the context variable and appends it to the printed log records.

This is an example of what our server logs would now look like:
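An illustrative output with two concurrent requests (the UUIDs are, of course, made up):

```
[5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Request started
[c9b2d6e4-1f3a-4b7e-8a2d-0f6c5e9b1d47] Request started
[5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Dividing 10 / 0 ...
[c9b2d6e4-1f3a-4b7e-8a2d-0f6c5e9b1d47] Dividing 10 / 1 ...
[5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Request failed: division by zero
[5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Request ended
[c9b2d6e4-1f3a-4b7e-8a2d-0f6c5e9b1d47] Result is 10.0
[c9b2d6e4-1f3a-4b7e-8a2d-0f6c5e9b1d47] Request ended
```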

Now, when receiving and processing concurrent requests, we can tell which request each log record belongs to. We could even group the records together and get a clear picture of what exactly happened during each request.

3. Using Loguru for logging

In the previous examples we did not use a proper logging tool. I personally love Loguru, as one of its features is the handling of context variables for logging purposes, through the contextualize context manager and the bind modifier.

Using Loguru, our server snippet would look like this:

Here, we replaced the default Loguru sink with a custom one, defining an output format that includes our Request ID from the “extra” logging fields. Also notice the usage of the “logger.contextualize” context manager in our request middleware, which sets the given kwargs as context variables (“extra” logging fields) for any logger call originating from within the context manager.

With these changes, our server logging output would look like this:
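Something along these lines (timestamps and IDs are illustrative):

```
12:05:31 | INFO | [5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Request started
12:05:31 | DEBUG | [5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Dividing 10 / 1 ...
12:05:31 | DEBUG | [5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Result is 10.0
12:05:31 | INFO | [5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21] Request ended
```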

4. Returning the Request ID in response HTTP headers

Now that we are identifying our requests in our log records, we could also send this identifier in the response headers. This can be very useful: after performing a request we know its Request ID, which we can then search for in our server logs to know exactly what happened.

This can be done by just appending the generated Request ID as part of the response headers in our middleware:

Here we changed how the response is returned in our middleware: it now happens in the “finally” block, so we can always append the Request ID to the headers, whether the request succeeded or failed with an exception.

If we run this alternative client snippet, which outputs the Request ID obtained from the response headers…:

…we would see this:
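An illustrative client output (status codes and IDs vary between runs):

```
200 - Request ID: 5a2f0fa2-6c0f-4d38-9c5d-3e1b7a9d4f21
500 - Request ID: c9b2d6e4-1f3a-4b7e-8a2d-0f6c5e9b1d47
200 - Request ID: 1d8e4b6a-9c2f-4e3d-8b5a-7f0c2e9d4a61
500 - Request ID: 3f6a1c8e-5b2d-4a9f-9e7c-0d4b8a2f6c13
```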

Final considerations

These examples should cover the main idea of identifying and tracing the requests performed in our FastAPI application. Some other interesting things that could be worth looking at are:

  • Completing the implementation of logging across the whole application.
  • Persisting our log records in a way that lets us find them later on. This can be done in many ways, starting with how the log records are captured, and where and how they are persisted and searched afterwards. Remember that Loguru can output log records as JSON, including a lot of useful information (see the “serialize” parameter).
  • Read about traceability.
