Visualize Traefik logs in Kibana
It’s really easy (when you know how)!
You’re probably here because you’ve decided to follow the trend of centralized logging. So let’s get straight to the point: do you use Traefik as a reverse proxy? In this guide we will configure the ELK stack to collect our Traefik logs!
First of all, a review of concepts:
- Traefik: a “cloud native” reverse proxy / load balancer
- Filebeat: a lightweight shipper for logs
- Logstash (optional): a server-side data processing pipeline that ingests data from a multitude of sources
- Elasticsearch: a search and analytics engine, where we store our logs
- Kibana: a visualization platform for the Elasticsearch data
I’m going to assume you already have your Elastic stack installed and ready to use, so we only need a few configuration files to achieve our goal! (Am I wrong? Then maybe you should follow this guide first.)
Configuring Traefik log
Traefik v1.5 added the option to output its logs in JSON format.
That’s great for us, as it makes the log processing much easier!
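To see why JSON logs are so convenient, here is a small sketch that parses one access-log line with nothing but the standard library. The sample line and its field names are illustrative; they follow the typical shape of Traefik’s JSON access log, but the exact field set depends on your Traefik version.

```python
import json

# An example line as Traefik might write it to access.log.json.
# Field names (ClientHost, RequestMethod, ...) are typical of Traefik's
# JSON access log; the exact set depends on your Traefik version.
sample_line = (
    '{"ClientHost": "203.0.113.7", "RequestMethod": "GET", '
    '"RequestPath": "/api/health", "DownstreamStatus": 200, '
    '"Duration": 1500000}'
)

# Because each line is a self-contained JSON object, a single
# json.loads() call yields structured fields -- no grok patterns needed.
entry = json.loads(sample_line)
print(entry["RequestMethod"], entry["RequestPath"], entry["DownstreamStatus"])
```

This is exactly the work Filebeat’s JSON options will do for us below, so no custom parsing is required anywhere in the pipeline.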
—
You only need to add this section to your “traefik.toml” file (Traefik v1 uses TOML for its static configuration):
# file: /etc/traefik/traefik.toml
[accessLog]
  filePath = "/var/log/traefik/access.log.json"
  format = "json"
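If you are on Traefik v2 instead, the static configuration can also be written in YAML; a sketch of the equivalent v2 accessLog section would be:

```yaml
# file: /etc/traefik/traefik.yml (Traefik v2)
accessLog:
  filePath: "/var/log/traefik/access.log.json"
  format: json
```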
Configuring Filebeat
Now we need to send the logs to Logstash. The easiest way to do that is using Filebeat.
After following the installation instructions, we need to configure it. Most of the examples out there explain how to send a simple plain-text log file, but we have a JSON file! Shouldn’t it be easier?
Once again, it’s not really difficult when you know how…
# file: /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/traefik/*.json
    json.keys_under_root: true
    json.add_error_key: true
    fields_under_root: true
    fields:
      tags: ['beats', 'json', 'traefik']

output.logstash:
  hosts: ['<elastic-host>:5044']
  ssl.certificate_authorities: ["/etc/filebeat/certs/logstash.crt"]
The configuration above sends the parsed JSON data to Logstash, so you don’t even need to apply any filter.
You may define your own tags (they will help you process the logs later) and edit the Logstash host address. Finally, if you have SSL enabled in Logstash, don’t forget to copy the certificate over and specify its path.
Note: Filebeat 6.3 renamed “prospectors” to “inputs”.
Configuring Logstash to ingest the data (optional)
We’re almost done. Now we need to configure Logstash to ingest our Filebeat data. If you don’t have any Beats component enabled yet, you can use the following configuration file as template:
# file: /etc/logstash/conf.d/11_beats.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}

output {
  if [@metadata][beat] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
As you can see, it stores our Beats data (logs, metrics, etc.) in the Elasticsearch service.
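Logstash expands the index pattern per event: the @metadata fields come from Beats and %{+YYYY.MM.dd} is its date-format syntax for the event date, so each day gets its own index. Here is an illustrative sketch of that substitution (Logstash does this internally; the function name and the example date are made up for the demonstration):

```python
from datetime import date

# Illustrative only: mimic how Logstash expands the pattern
# "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}".
def index_name(beat: str, version: str, day: date) -> str:
    # %{+YYYY.MM.dd} resolves to the event's date in that format
    return f"{beat}-{version}-{day.strftime('%Y.%m.%d')}"

# e.g. a Filebeat 7.17.0 event from 2024-01-15 would be stored in:
print(index_name("filebeat", "7.17.0", date(2024, 1, 15)))
# -> filebeat-7.17.0-2024.01.15
```

Daily indices like this make it cheap to expire old logs and let Kibana index patterns such as filebeat-* match everything at once.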
Of course, you can skip this step and send your JSON directly from Filebeat to Elasticsearch.
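In that case you would replace the output.logstash section of filebeat.yml with an output.elasticsearch section, for example (the host is a placeholder for your own setup):

```yaml
# file: /etc/filebeat/filebeat.yml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
```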
Kibana
Let’s check that all the processed fields from Traefik show up in Kibana!
Great! So now we can finally draw a pie chart!!! :D