The Python logging cheatsheet. Easy and fast ways to get logging done in Python.

Andrea Capuano
Published in Analytics Vidhya
4 min read · Jul 20, 2020

Having good logs is crucial for monitoring applications, understanding production behavior and catching bugs. Logging in Python is well designed and well documented. However, at times you just need to get things done quickly and do not have the time to read the excellent, but quite long, official documentation.

Logging is important for understanding your application's behavior, and it gives IT operations a dedicated control panel to quickly monitor and analyze application issues.

In this article, we'll go through quick recipes to get productive with logging. Before we get started, some basic terminology for Python logging:

  • Loggers: the main actors; they expose the methods your application calls at runtime to emit log records.
  • Handlers: they take your log records and route them to specific destinations, such as files or the console.
  • Formatters: they render your logs the way you want them, in a specific format you define.

Without further ado, let’s just dive straight into it and get some quick recipes to get your python application logging in no time.

Scenario 1: I have a simple application, I need to set up some basic logging to a file

The fastest way to get logging done in a simple application is logging.basicConfig: it creates a StreamHandler (or a FileHandler, if you specify a filename) and adds it to the root logger.
Configure it as follows:

import logging
logging.basicConfig(filename='my_log_file.log',level=logging.INFO)

Then use it as follows:

import logging
from time import sleep


def setup_logger():
    logging.basicConfig(filename='my_log_file.log', level=logging.INFO)


def my_app_logic():
    logging.info("Just entered the function")
    sleep(0.1)
    logging.info("Just after the sleep")
    try:
        res = 1 / 0
    except ZeroDivisionError:
        logging.exception("Attempted division by zero")


if __name__ == '__main__':
    setup_logger()
    my_app_logic()

This results in the following output in my_log_file.log:

INFO:root:Just entered the function
INFO:root:Just after the sleep
ERROR:root:Attempted division by zero
Traceback (most recent call last):
  File "/xxxx/simple_application.py", line 14, in my_app_logic
    res = 1 / 0
ZeroDivisionError: division by zero

If you would also like to display the time, simply add a format string as follows:

logging.basicConfig(filename='my_log_file.log', format='%(asctime)s - %(message)s', level=logging.INFO)

Which will change the output messages in your log files to:

2020-07-20 17:05:29,686 - Just entered the function
2020-07-20 17:05:29,790 - Just after the sleep
2020-07-20 17:05:29,790 - Attempted division by zero
...

When to use it: Simple applications where you just need to get some logging done without too much fuss, you want minimal setup, and you do not require much control over how logging is performed.

When not to use it: When you need fine-grained control on how logging is performed or any multi-process application.

Scenario 2: I have an application, single/multi-threaded that is using several libraries and I want to get some logging done

If your application begins to be a little more complex and you need more control over logging, avoid using the root logger; instead, create your very own logger as follows:

logger = logging.getLogger('non_simple_example')
logger.setLevel(logging.DEBUG)  # without this, the default WARNING level would filter out INFO/DEBUG records

Let’s add handlers to send the logs to the console and/or to a file, and a formatter to specify our favorite logging format:

# create console handler and set level to info
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)

# create file handler and set level to debug
file_handler = logging.FileHandler(filename='my_log_name.log')
file_handler.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# add the formatter to our handlers
stream_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

# add Handlers to our logger
logger.addHandler(stream_handler)
logger.addHandler(file_handler)

Then just use the logger throughout your application, for example:

logger.info("Info message")
logger.debug("Debug message")

Things get logged to the console and/or the file according to the logging levels and handler configuration. In the previous example, the file handler has a DEBUG level, while the console handler has an INFO one. This means that a DEBUG-level log will not be shown on the console, but it will appear in the log file.

Here’s the console output:

2020-07-20 17:13:34,439 - non_simple_example - INFO - Info message

And the following output in our log file:

2020-07-20 17:13:34,439 - non_simple_example - INFO - Info message
2020-07-20 17:13:34,439 - non_simple_example - DEBUG - Debug message

This approach also works very well with multithreaded applications; as a matter of fact, the logging module is thread-safe. However, it will not work for multi-process logging, which requires more effort.
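To illustrate the thread-safety point, here is a minimal sketch in which several worker threads share one logger; the logging module serializes handler output internally, so records are never interleaved mid-line. The names 'threaded_example' and worker are illustrative:

```python
import logging
import threading

logger = logging.getLogger('threaded_example')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
# %(threadName)s shows which thread emitted each record
handler.setFormatter(logging.Formatter('%(asctime)s - %(threadName)s - %(message)s'))
logger.addHandler(handler)

def worker(n):
    logger.info("Message from worker %d", n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```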

When to use it: When you require some more control on logging and you want to avoid using/polluting the root logger.

When not to use it: If you have multiple processes.

Scenario 3: My logs are getting big, what can I do?

Very easy: switch from a simple FileHandler to the more sophisticated RotatingFileHandler (found in logging.handlers), which rolls over to a new log file when the current one reaches a specific size:

from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler('my_log.log', maxBytes=1000, backupCount=10)

You can use a TimedRotatingFileHandler (also in logging.handlers) if you want to switch logs at the end of every day/week/month:

handler = TimedRotatingFileHandler(filename="my_log.log", when="midnight")
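Putting size-based rotation together, here is a minimal sketch; the file name 'rotating.log', the logger name, and the deliberately tiny maxBytes are illustrative. Once the file would exceed maxBytes, it is renamed to rotating.log.1 (keeping up to backupCount old files) and a fresh file is started:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('rotation_example')
logger.setLevel(logging.INFO)
# tiny maxBytes so the rollover is easy to observe
handler = RotatingFileHandler('rotating.log', maxBytes=200, backupCount=3)
handler.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))
logger.addHandler(handler)

for i in range(20):
    logger.info("log line number %d", i)
```

After running this, the directory contains rotating.log plus numbered backups such as rotating.log.1; anything older than backupCount files is deleted.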

Scenario 4: I have a multi-process application and I need to log.

Logging from different processes is not trivial and needs some additional effort. I would advise against going with a quick-and-dirty solution; instead, read through the official documentation, which explains in detail what exactly is happening and what strategies exist to tackle the challenge.

An interesting way to tackle this problem is to use a WatchedFileHandler and an external utility such as logrotate to actually rotate the logs according to a configuration.
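A hedged sketch of that approach, assuming logrotate (or any external tool) renames the file out from under the process; the file name 'my_app.log' and logger name are illustrative. WatchedFileHandler checks the file's device and inode on every emit and reopens the file if they changed, so writes always land in the current file:

```python
import logging
from logging.handlers import WatchedFileHandler

logger = logging.getLogger('watched_example')
logger.setLevel(logging.INFO)
# reopens the file automatically if an external tool rotates it away
handler = WatchedFileHandler('my_app.log')
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

logger.info("Safe to write even if logrotate moves the file")
```

The rotation schedule itself then lives entirely in the logrotate configuration, outside the application.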

Conclusion

This is by no means an exhaustive guide to logging; it is meant as a quick-and-dirty way to get some logging done.
For more information and an in-depth guide, refer to the official documentation, which also provides several examples to get your feet wet.
