Pushing Logs to Loki Without Using Promtail

Graham Atlee · Published in Geek Culture · 6 min read · Sep 19, 2021

If you’ve ever used Loki for your log analysis then you’re likely familiar with Promtail. Promtail is the loveable sidekick that scrapes your machine for log targets and pushes them to Loki.

However, as you might know, Promtail can only be configured to scrape logs from a file, pod, or journal. But what if you have a use case where your logs must be sent directly to Loki? For instance, imagine if we could send logs to Loki using Python’s built-in logging module directly from our program. It turns out I had that exact need, and this is how I solved it.

Prerequisites

  • You have a Loki instance created and ready for your logs to be pushed to
  • You have Grafana already set up and you know the basics of querying for logs using LogQL
  • You have Python installed on your machine and have some scripting experience

Overview

First, I will note that Grafana Loki does list an unofficial Python client that can be used to push logs directly to Loki. However, I wouldn’t recommend it: it is very bare-bones, and you may struggle to get your labels into the format Loki requires.

https://github.com/sleleko/devops-kb/blob/master/python/push-to-loki.py

Fortunately, someone has already solved our problem and it’s called the python-logging-loki library. Essentially, it’s an open-source solution that enables you to send your logs directly to Loki using Python’s logging package.

The way it works is by attaching a handler named LokiHandler to your logger instance. Every time you log a message using one of the logger’s methods (e.g. logger.error), the LokiHandler makes a POST request directly to the Loki HTTP API. It also abstracts away having to correctly format the labels for Loki.
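To appreciate what that saves you, here is roughly what a raw push to that endpoint looks like with the requests library (a minimal sketch; the host, labels, and message are placeholders, and the timestamp must be a Unix epoch in nanoseconds):

import json
import time
import requests

# roughly the payload the handler assembles for a single log line
payload = {
    "streams": [
        {
            "stream": {"service": "my-service", "level": "error"},  # labels
            "values": [
                # [<unix epoch in nanoseconds>, <log line>]
                [str(int(time.time() * 1e9)), "Something bad happened"],
            ],
        }
    ]
}

requests.post(
    "http://loki:3100/loki/api/v1/push",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)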

Initial setup

First, install the python-logging-loki package using pip.

pip install python-logging-loki

Now we import our two main dependencies: logging and logging_loki. Then we initialize the LokiHandler object with two parameters: the URL for our Loki data source and the API version we’re using (if your Loki release is 0.4.0 or newer, use version "1"). For the URL, it’s important that the path appended to the host is the push endpoint exposed by the Loki HTTP API (/loki/api/v1/push).

import logging
import logging_loki

logging_loki.emitter.LokiEmitter.level_tag = "level"

# assign to a variable named handler
handler = logging_loki.LokiHandler(
    url="http://loki:3100/loki/api/v1/push",
    version="1",
)

# create a new logger instance, name it whatever you want
logger = logging.getLogger("my-logger")

Line 4 (the level_tag override) is very important if you’re using Grafana to visualize log metrics. By default, the LokiEmitter’s level tag is set to ‘severity’, but Grafana’s log queries automatically color-code lines tagged with ‘level’ (you will see this later). An issue was raised on GitHub asking for ‘level’ to be used instead of ‘severity’, and a fellow user provided the workaround you see on line 4.

Let’s do some logging

Now that we have our LokiHandler set up, we can add it as a handler to Python’s logging object.

logger.addHandler(handler)

Every time we call a logging method, the LokiHandler will automatically push the log stream with the correct labels to Loki. If you want to send additional labels, place them in a “tags” dict passed via the extra argument when calling the method.

# now use the logging object's functions as you normally would
logger.error(
    "Something bad happened",
    extra={"tags": {"service": "my-service"}},
)
logger.warning(
    "Something bad happened but we can keep going",
    extra={"tags": {"service": "my-service"}},
)
# extra={"tags": {"service": "my-service", "one": "more thing"}}
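If there is a label you want on every log line, python-logging-loki also accepts a tags argument when you construct the handler (per the library’s README; the application name here is just an example):

handler = logging_loki.LokiHandler(
    url="http://loki:3100/loki/api/v1/push",
    tags={"application": "my-app"},  # default tags attached to every stream
    version="1",
)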

Now go to your Grafana instance and query for your logs using one of the labels. Additionally, you can see that a color scheme is being applied to each log line because we set the level_tag to “level” earlier, and Grafana is picking up on it.

Next, if you click on one of your log lines, you should see all of the labels that were applied to the stream by the LokiHandler.

Let's try some other functions…but wait.

Now if you’re really eager you might have already tried calling the functions logger.info() or logger.debug() like so:

logger.info(
    "Here's Something to read",
    extra={"tags": {"service": "my-service"}},
)
logger.debug(
    "Something on purpose",
    extra={"tags": {"service": "my-service"}},
)

However, if you go check Grafana, they’re not there!? The reason you’re not seeing these logs in the dashboard is a behavior of Python’s logging module known as level-based filtering. This is another issue that was raised on GitHub.

Brad Solomon from RealPython does an excellent deep dive into the logging module’s architecture to explain why this happens. In short, Python’s logging levels are just enum-like constants that map to integer values.

CRITICAL = 50
ERROR = 40
WARNING = 30 <--- default level
INFO = 20
DEBUG = 10
NOTSET = 0
# this will return the currently set level
logger.getEffectiveLevel()

Right now the logger’s effective level is WARNING, meaning that any message with a level lower than 30 will not pass through. That is why we are not seeing debug and info logs, while warning, error, and critical come through.
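You can see the filter in action without touching Grafana at all; with the default WARNING threshold, the logger itself reports that the lower levels are disabled (a quick sketch reusing the logger created earlier):

print(logger.getEffectiveLevel())          # 30, i.e. logging.WARNING
print(logger.isEnabledFor(logging.ERROR))  # True  (40 >= 30, passes through)
print(logger.isEnabledFor(logging.INFO))   # False (20 < 30, filtered out)
print(logger.isEnabledFor(logging.DEBUG))  # False (10 < 30, filtered out)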

An Easy Fix

Luckily, we can fix this with just one line of code. Place this right after instantiating the logging object:

logger.setLevel(logging.DEBUG)

What this does is set the logger’s threshold to DEBUG (10). Messages less severe than DEBUG would still be ignored, but since DEBUG is the lowest standard level, everything we log now passes through.

Try running your program again and then query for your logs in Grafana. You should now see the info and debug logs coming through.

Adding your own log levels

If you take a scroll through Python’s logging documentation, you will notice there are methods provided only for critical, error, warning, info, and debug. But what if your application demands more log levels? For example, ‘verbose’ or ‘trace’.

First, we have to tell the logging module we want to register a new level named ‘TRACE’. Logging has a built-in function called addLevelName() that takes two parameters: a numeric level and a level name.

logging.addLevelName(15, "TRACE")

Remember, since we set our log level to DEBUG, the new level must have a value of at least 10 if we want it to pass through; 15 places TRACE between DEBUG and INFO. Next, we have to create a function that will handle trace logs.

def trace(self, message, *args, **kws):
    if self.isEnabledFor(15):
        # Yes, logger takes its '*args' as 'args'.
        self._log(15, message, args, **kws)

logging.Logger.trace = trace

The last line dynamically attaches it to the Logger class. Now, within our code, we can call logger.trace() like so:

logger.trace(
    "Something to follow",
    extra={"tags": {"service": "my-service"}},
)

And then we can query for it in Grafana and see the color scheme is applied for trace logs.

Final Program

Here’s the full program that this article covers. Feel free to refit it for your logging needs. Thanks for reading!
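Assembled from the snippets above (swap in your own Loki URL, logger name, and tags):

import logging
import logging_loki

# emit "level" instead of the default "severity" so Grafana color-codes the lines
logging_loki.emitter.LokiEmitter.level_tag = "level"

handler = logging_loki.LokiHandler(
    url="http://loki:3100/loki/api/v1/push",
    version="1",
)

logger = logging.getLogger("my-logger")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)  # let info and debug messages through

# register a custom TRACE level above the DEBUG threshold
logging.addLevelName(15, "TRACE")

def trace(self, message, *args, **kws):
    if self.isEnabledFor(15):
        self._log(15, message, args, **kws)

logging.Logger.trace = trace

logger.error("Something bad happened", extra={"tags": {"service": "my-service"}})
logger.warning("Something bad happened but we can keep going", extra={"tags": {"service": "my-service"}})
logger.info("Here's Something to read", extra={"tags": {"service": "my-service"}})
logger.debug("Something on purpose", extra={"tags": {"service": "my-service"}})
logger.trace("Something to follow", extra={"tags": {"service": "my-service"}})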

Bonus: rainbow logs!
