Python Logging: From Basics to Advanced Practices

Moraneus
5 min read · Apr 2, 2024

Python logging is an essential skill for developers of all levels, offering a robust and versatile way to track events, debug problems, and understand application behavior. As you progress from beginner to advanced user, you’ll find logging not just useful but indispensable for building reliable and maintainable software. Let’s uncover the layers of Python logging together, from its fundamentals to advanced practices.

Getting Started with Basic Logging

Python’s built-in logging module is powerful yet user-friendly, designed to meet your logging needs from simple scripts to complex applications. To start, let's explore how to set up basic logging:

import logging

# Simple configuration of logging
logging.basicConfig(level=logging.INFO)

logging.info("This is an info message")
logging.warning("This is a warning message")

This example sets up a basic configuration, specifying the minimum log level to capture. Messages at this level or higher (INFO, WARNING, ERROR, CRITICAL) are logged to the console; DEBUG messages are dropped.
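
basicConfig accepts a few more keyword arguments that are worth knowing early on, such as a format string and a destination file. Here is a minimal sketch (the file name 'app.log' is just an example):

import logging

# Write timestamped log lines to a file instead of the console
logging.basicConfig(
    filename='app.log',
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
)

logging.debug("Captured, because the configured level is DEBUG")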

Understanding Logging Levels

Logging levels are fundamental to the logging process, allowing you to categorize the severity of log messages. Python’s built-in logging module defines several standard levels, with the following being the most commonly used:

  • DEBUG: Detailed information, typically of interest only when diagnosing problems.
  • INFO: Confirmation that things are working as expected.
  • WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g., ‘disk space low’). The software is still working as expected.
  • ERROR: Due to a more serious problem, the software has not been able to perform some function.
  • CRITICAL: A serious error, indicating that the program itself may be unable to continue running.

Each level is associated with a numeric value that indicates its severity. When you configure logging, you set a level, and only messages with that level or higher will be tracked. This allows you to adjust the verbosity of your logging output depending on your current needs, such as debugging or monitoring production environments.
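
To make the numeric ordering concrete, here is a small illustration (DEBUG=10, INFO=20, WARNING=30, ERROR=40, CRITICAL=50):

import logging

logging.basicConfig(level=logging.WARNING)

logging.info("Dropped: INFO (20) is below the WARNING (30) threshold")
logging.error("Kept: ERROR (40) meets the threshold")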

Diving Deeper: Loggers, Handlers, and Formatters

As you dive deeper, you’ll encounter the concept of loggers, handlers, and formatters:

  • Loggers are the entry point of the logging system. Each logger is identified by its name, allowing you to categorize logs and adjust logging levels.
  • Handlers send the log records (created by loggers) to the appropriate destination, like a file, stdout, or even over the network.
  • Formatters specify the exact format of the log message.

Here’s an example that demonstrates these components:

import logging

logger = logging.getLogger('example_logger')
logger.setLevel(logging.DEBUG)

# Create a file handler
file_handler = logging.FileHandler('example.log')
file_handler.setLevel(logging.ERROR)

# Create a console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

# Define a formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
file_handler.setFormatter(formatter)
console_handler.setFormatter(formatter)

# Add handlers to the logger
logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.info("This info won't appear in the file, only on console")
logger.error("This error will appear both in the file and on the console")

Advanced Logging Configuration

Understanding how to configure logging more deeply can unlock its full potential. Let’s consider an advanced example that uses filters and custom handlers, providing fine-grained control over logging output.

import logging

class ContextFilter(logging.Filter):
    """
    This is a filter which injects contextual information into the log.
    """

    def filter(self, record):
        record.user_id = 'anonymous'
        # Imagine a function that retrieves the user ID from the current session
        # record.user_id = get_current_user_id()
        return True

# Create a logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Create a file handler that logs even debug messages
fh = logging.FileHandler('debug.log')
fh.setLevel(logging.DEBUG)

# Create a console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

# Create a formatter and set it for both handlers
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(user_id)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# Add a filter to inject the user ID
context_filter = ContextFilter()
logger.addFilter(context_filter)

# Add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logger.debug('This is a debug message')
logger.critical('This is a critical message')

In this example, we introduce a custom filter, ContextFilter, that adds contextual information (e.g., user ID) to every log record. This can be incredibly useful for debugging issues in applications with multiple users. We also configure different handlers for writing logs to a file and the console, each with different log levels and the same formatter that includes our custom field.
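
If you prefer to keep this wiring out of your code, the standard library also supports declarative configuration through logging.config.dictConfig. Here is a minimal sketch mirroring the file-plus-console setup above (the file name 'app.log' is just an example):

import logging
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': '%(asctime)s - %(levelname)s - %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'ERROR',
            'formatter': 'standard',
        },
        'file': {
            'class': 'logging.FileHandler',
            'level': 'DEBUG',
            'formatter': 'standard',
            'filename': 'app.log',
        },
    },
    'root': {'level': 'DEBUG', 'handlers': ['console', 'file']},
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger(__name__).debug('Configured via dictConfig')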

Structured Logging with structlog

As applications become more complex, structured logging becomes invaluable. It’s about logging information in a standardized format or schema, often JSON. This makes logs easier for both humans and machines to read, facilitating debugging and analysis. A popular library for structured logging in Python is structlog.

To use structlog, you first need to install it (pip install structlog), then set it up in your application:

import structlog

structlog.configure(
    processors=[
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()
logger.info("login_attempt", username="anonymous", result="success")

This logs messages in a structured JSON format, making them easy to parse and filter with log management tools.
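
For reference, the call above emits a single JSON line roughly like the following (the exact key order may vary):

{"username": "anonymous", "result": "success", "event": "login_attempt"}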

Here’s a more comprehensive example that showcases the power of structlog:

import logging
import structlog

# Configure standard Python logging
logging.basicConfig(format="%(message)s", level=logging.INFO)
root_logger = logging.getLogger()

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.JSONRenderer(),
    ],
    context_class=dict,
    logger_factory=structlog.stdlib.LoggerFactory(),
    wrapper_class=structlog.stdlib.BoundLogger,
    cache_logger_on_first_use=True,
)

logger = structlog.get_logger(__name__)

logger.info("user_logged_in", user_id="12345", ip_address="192.168.1.1")

This configuration enhances your log messages with valuable metadata, such as timestamps and log levels, and outputs them in a beautifully formatted JSON structure, making them easy to parse and analyze.
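
Because structlog loggers are immutable, you can also bind context once and reuse it everywhere; every subsequent call carries the bound key/value pairs. A small sketch building on the logger above (the request_id value is purely illustrative):

request_logger = logger.bind(request_id='req-42')

request_logger.info("cart_updated", item_count=3)
request_logger.warning("payment_retry", attempt=2)
# Both records include request_id without repeating it at each call site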

Advanced Logging with loguru

For many developers, loguru, a third-party library, offers an even simpler interface for logging with added capabilities out of the box. Installation is straightforward (pip install loguru), and it eliminates the boilerplate code associated with the standard logging module.

Here’s how you can use loguru:

from loguru import logger

logger.add("file_{time}.log", rotation="500 MB", retention="10 days", compression="zip", enqueue=True)

logger.info("This message is logged asynchronously to a rotated and compressed file.")

loguru supports automatic rotation, compression, and cleanup of log files, making log management a breeze. In this setup, loguru handles log rotation (creating a new log file once the current one reaches 500 MB) and retention (keeping log files for no more than 10 days) automatically, and compresses rotated files into zip archives.
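
Another capability that comes for free is the logger.catch decorator, which logs any exception escaping the decorated function together with its full traceback. A quick sketch (the function below is purely illustrative):

from loguru import logger

@logger.catch
def risky_division(a, b):
    return a / b

risky_division(1, 0)  # The ZeroDivisionError is logged with a full traceback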

Mastering Logging: Tips and Best Practices

  1. Level Appropriately: Use logging levels judiciously to categorize the importance of the log messages.
  2. Structured Logging: Embrace structured logging for complex applications to make logs more queryable and insightful.
  3. Contextual Information: Leverage the logger’s ability to capture and log contextual information, such as a user ID or session ID, to make debugging easier.
  4. Performance: Be mindful of logging performance. Excessive logging can slow down your application. Use appropriate log levels and consider asynchronous logging for high-volume applications (see the sketch after this list).
  5. Security: Never log sensitive information. Always sanitize or hash any personal or sensitive data before logging.
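
One standard-library way to approach the performance point is to decouple log emission from log writing using QueueHandler and QueueListener from logging.handlers. Here is a minimal sketch, assuming you simply want slow file I/O off the hot path (the file name 'app.log' is just an example):

import logging
import logging.handlers
import queue

# The application logs into an in-memory queue, which is fast and non-blocking
log_queue = queue.Queue(-1)
queue_handler = logging.handlers.QueueHandler(log_queue)

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(queue_handler)

# A background thread drains the queue and performs the slow file I/O
file_handler = logging.FileHandler('app.log')
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logging.info("This record is written to the file off the main thread")
listener.stop()  # Flush and stop the listener thread on shutdown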

By now, you should have a solid understanding of Python logging, from its basics to advanced practices. Logging is more art than science, requiring you to balance the granularity of your logs against the overhead they introduce. As you experiment with these tools and techniques, you’ll develop an intuition for effective logging, elevating your Python mastery to new heights.

Your Support Means a Lot! 🙌

If you enjoyed this article and found it valuable, please consider giving it a clap to show your support. Feel free to explore my other articles, where I cover a wide range of topics related to Python programming and others. By following me, you’ll stay updated on my latest content and insights. I look forward to sharing more knowledge and connecting with you through future articles. Until then, keep coding, keep learning, and most importantly, enjoy the journey!

Happy programming!
