Django icon by Icons8

Dockerized Django Logging via Grafana Loki

Moeen Zamani
6 min read · Apr 27, 2020


Maintaining a software system with multiple moving parts requires a proper logging system. Reviewing logs is a huge part of investigative development: if something fails in production, having relevant and informative logs leads to a quick fix, and a competent logging system makes the process even faster. One way to achieve this is the ELK stack, but it can be cumbersome. What if we had the comfort of tail + grep in a more elegant and scalable way? That’s exactly what Grafana Loki is: a highly available log aggregation system that makes this possible through Grafana. In this article, we set up a dockerized Django app and then send its logs to Loki for investigation.

The source code is available at GitHub. Make sure to have Docker and docker-compose installed before proceeding.

Django Application

The Django app we develop for this project is quite simple, since the app itself isn’t the point of this article. The project is called blog and contains only one app named post.

post/models.py
post/views.py
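
The gists embedded above aren’t reproduced in this text, so here is a minimal sketch of what they might contain (the model field and view names are assumptions; the actual code is in the GitHub repo):

# post/models.py (sketch)
from django.db import models

class Post(models.Model):
    # "title" is a hypothetical field, just so there is something to create
    title = models.CharField(max_length=100)
    created_at = models.DateTimeField(auto_now_add=True)

# post/views.py (sketch)
from django.http import JsonResponse
from .models import Post

def generate(request):
    # creates a post and returns successfully
    post = Post.objects.create(title="generated")
    return JsonResponse({"id": post.id})

def modify(request):
    # deliberately fails so we have an exception to look at in Loki
    raise Exception("Oh No!")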

So we’ll have two endpoints located at:

  • /api/v1/post/generate/
  • /api/v1/post/modify/

The first one creates a post and returns successfully, while the other one raises an exception. That’s it for the business logic. In order to feed Django logs to Loki, we have to set up the correct formatting and output. We output Django logs in JSON format to stdout, which will then be caught by Docker and sent to Loki. The same goes for gunicorn. It’s not mandatory to format your logs as JSON, but it improves readability, as you’ll see later. We’ll use python-json-logger to format the logs.

blog/settings.py
gunicorn-logging.conf
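
Again, the embedded files aren’t shown here. A minimal sketch of the relevant LOGGING block in blog/settings.py, assuming python-json-logger is installed, could look like this (handler and logger names are assumptions; gunicorn-logging.conf applies the same JSON formatting to gunicorn’s own loggers and is passed to gunicorn via its --log-config option):

# blog/settings.py (sketch)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "json": {
            # python-json-logger renders each record as a JSON object
            "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stdout",  # write to stdout so Docker picks it up
            "formatter": "json",
        },
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}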

Right now we have all the configuration we need to produce structured logs in JSON format. These logs can then be sent to Loki for filtering.

Application Container and Services

Now that we have our application set up, we need to dockerize it and define the required services with docker-compose.

Dockerfile
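
The Dockerfile itself isn’t reproduced here; a rough sketch, assuming a requirements.txt that lists Django, gunicorn and python-json-logger, might be:

FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# --log-config points gunicorn at the JSON logging config described earlier
CMD ["gunicorn", "blog.wsgi:application", "--bind", "0.0.0.0:8000", "--log-config", "gunicorn-logging.conf"]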

and the compose file:

docker-compose.yml
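
The compose file isn’t reproduced here either; a trimmed-down sketch of what it might look like (service names, image versions and the shared x-logging anchor are assumptions; the real file in the repo also wires up Grafana’s and postgres’s details) is:

version: "3.7"

x-logging: &default-logging
  driver: loki
  options:
    loki-url: "http://localhost:3100/loki/api/v1/push"

services:
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    logging: *default-logging
  postgres:
    image: postgres:12
    logging: *default-logging
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"

networks:
  default:
    external:
      name: blog-net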

You’ll notice we defined general logging via the Loki driver with the endpoint http://localhost:3100, but we haven’t installed Loki yet. We need to install the Loki plugin and start its service (here we run it as a Docker container). Learn more about the Loki Docker plugin here.

$ docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
$ docker network create blog-net
$ docker run -d --name loki --restart unless-stopped -p 3100:3100 --network blog-net grafana/loki:latest "-config.file=/etc/loki/local-config.yaml"

Loki will start on the blog-net network, which is the same network our stack uses. Now we can build and start our services:

$ docker-compose build
$ docker-compose up -d

docker-compose Issue

You might see a WARNING: no logs are available with the ‘loki’ log driver message after starting the postgres and app containers without the -d option. This is because docker-compose won't print any logs if a logging plugin is used. docker logs should still work fine though:

$ docker logs <container_id>

Exploring Loki

Visit http://localhost:3000 and enter admin as both the username and password for Grafana. Click Explore on the left, then Add data source -> Loki. Enter http://loki:3100 as the URL and click Save & Test. No other parameters are required.

Adding Loki as data source

Click Explore again and you should see this view:

Explore Loki in Grafana

Click on Log labels -> container_name and choose a container. Here django-loki_app_1 is chosen:

Django container logs retrieved by Loki

These are the logs generated by gunicorn and Django. Let’s make this live using hey:

$ hey -n 100 -c 1 -q 1 http://localhost:8000/api/v1/post/generate/

then click the Live button in the top right corner of Grafana.

Grafana live view for Loki logs

Exit live mode and stop hey. Next, we’ll trigger an exception by sending a request to http://localhost:8000/api/v1/post/modify/:

$ curl http://localhost:8000/api/v1/post/modify/

If you run the query again, you’ll see the exception we raised:

Error indicated by red bar on the left

Log labels are sent by the Loki plugin. You can change them with driver options in the docker-compose file. Log fields, however, are set by gunicorn. Visit the documentation for more details.
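
For example, a service could attach extra static labels through the driver’s loki-external-labels option (the label values below are made up):

    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-external-labels: "job=blog,environment=dev"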

We can do simple filtering, much like grep. Finding exceptions that occurred with the “Oh No!” message can be done by entering

{ container_name = "django-loki_app_1" } |= "Oh No!"

in query bar:

Filter “Oh No!” logs

Obviously this can get more complex with regular expressions. The following filter types are currently supported:

  • |= line contains string.
  • != line doesn’t contain string.
  • |~ line matches regular expression.
  • !~ line does not match regular expression.
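
For instance, a regular-expression filter that matches either the “Oh No!” message or any Python traceback line could be written as:

{ container_name = "django-loki_app_1" } |~ "Oh No!|Traceback"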

For more details about LogQL, Loki’s query language, refer to the documentation.

Another Use Case

Chances are you’re already using Grafana for monitoring along with prometheus. This makes for an interesting use case if you get Loki involved. Let’s add gunicorn monitoring to our stack and find out. Add these services to the stack:

Added prometheus to docker-compose.yml
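
The added services aren’t reproduced here; as a rough sketch, the prometheus side might look like the following (the gunicorn metrics exporter actually used in the repo isn’t shown and would need its own service plus a matching scrape config in prometheus.yml):

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml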

Restart the docker-compose stack to pull and run the new containers, then add prometheus as a data source in Grafana (default values, with the URL http://prometheus:9090). After everything’s up, run hey in a separate terminal to generate some traffic:

$ hey -n 1000 -c 1 -q 1 http://localhost:8000/api/v1/post/generate/

While hey is hitting the API, let’s simulate downtime by shutting down postgres.

$ docker-compose stop postgres

This will stop postgres while hey is sending requests, which leads to database connection errors. In a real-world scenario, you might get notified that postgres is down and that you’re returning a lot of 500 status codes. You can check that with prometheus. Open the Explore section and look at the chart for response codes:

Rate of status codes (500) returned
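
The query behind such a chart is typically a rate over a counter named or labelled by status code. If gunicorn’s statsd metrics are exported to prometheus, it might look something like this (the metric name is hypothetical and depends on the exporter you use):

sum(rate(gunicorn_request_status_500[1m]))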

As you can see, the number of 500 response codes has spiked. You can check the logs right there. Click the Split button at the top. In the right pane, select Loki as the source, then choose the app container from the log labels. Now you can see the database connection error logs alongside the prometheus metric values in the specified time range.

Prometheus metric with Loki logs, side-by-side

This may have been a naive use case, but the ability to check logs side by side with other metrics in Grafana is quite handy.

In this project, we piped the stdout logs produced by our Docker containers to a Loki server. Loki can also read log files and feed off of them using promtail; you might use promtail to index syslog or Nginx logs, for instance. Loki also has other distribution options: Tanka, Helm charts, Kubernetes, or plain binary packages. Loki is relatively new and there’s room for improvement, but it’s already a great tool. I hope this was useful.
