Nginx log aggregation using Grafana Loki

Bharath Sampath
Ankercloud Engineering
6 min read · Feb 21, 2024

What is Grafana Loki?

Grafana Loki is an open-source log aggregation system for monitoring and observability.

It integrates with a variety of data sources, including Nginx access logs, to help analyze performance and troubleshoot problems on our web server.

Step:1 Configuring Nginx Access Logs in JSON Format for Loki:

Loki can efficiently ingest and work with JSON-formatted logs, enabling us to monitor and analyze Nginx access logs effectively. To convert the output of the Nginx access logs from the raw format to JSON format for Loki, follow the steps below.

Step 1.1: Custom JSON Log Format Configuration

In the Nginx configuration file /etc/nginx/nginx.conf, add the custom JSON log format below using the log_format directive inside the http block. This format specifies how the log entries will be structured in JSON.

map $http_referer $httpReferer {
    default "$http_referer";
    ""      "(direct)";
}

map $http_user_agent $httpAgent {
    default "$http_user_agent";
    ""      "Unknown";
}

log_format json_analytics escape=json '{'
    '"time_local": "$time_local", '
    '"remote_addr": "$remote_addr", '
    '"request_uri": "$request_uri", '
    '"status": "$status", '
    '"http_referer": "$httpReferer", '
    '"http_user_agent": "$httpAgent", '
    '"server_name": "$server_name", '
    '"request_time": "$request_time" '
'}';
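With this format in place, each request is written as one JSON object per line. A sample entry (all field values here are illustrative) looks like:

{"time_local": "21/Feb/2024:10:15:32 +0000", "remote_addr": "203.0.113.7", "request_uri": "/index.html", "status": "200", "http_referer": "(direct)", "http_user_agent": "Mozilla/5.0", "server_name": "example.com", "request_time": "0.004"}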

Step 1.2: Configure the log file in the virtual host

In the Nginx virtual host configuration, point the access_log directive at the custom JSON log format (json_analytics) we defined earlier.

access_log /var/log/nginx/json_access.log json_analytics;
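For reference, here is a minimal virtual host sketch showing where the directive sits; the domain and document root below are placeholders:

server {
    listen 80;
    server_name example.com;

    # write access logs using the custom JSON format from nginx.conf
    access_log /var/log/nginx/json_access.log json_analytics;

    location / {
        root /var/www/html;
    }
}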

Step 1.3: Restart the service to apply the changes

Test and apply the changes: verify the Nginx configuration for syntax errors, and if the configuration test is successful, restart Nginx.

nginx -t

systemctl restart nginx
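To confirm the new format is in effect, send a request to the server and tail the JSON access log:

# generate a request, then inspect the latest JSON log entry
curl -s http://localhost/ > /dev/null
tail -n 1 /var/log/nginx/json_access.log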

Step:2 Set Up Grafana:

Grafana is an open-source, feature-rich, and highly customizable platform for monitoring, observability, and data visualization.

Step 2.1: Update our Ubuntu system

sudo apt update && sudo apt upgrade -y

Step 2.2: Add the Grafana GPG key, then add the Grafana APT repository:

curl -fsSL https://packages.grafana.com/gpg.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/grafana.gpg

sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"

Step 2.3: Install Grafana

sudo apt update && sudo apt -y install grafana

Step 2.4: Start Grafana-service

sudo systemctl start grafana-server && sudo systemctl enable grafana-server

Grafana is now installed and can be accessed via the server's IP on port 3000 (http://server_IP:3000).
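Optionally, we can confirm Grafana is responding from the shell; Grafana exposes a health endpoint at /api/health (replace server_IP with the actual address):

curl -s http://server_IP:3000/api/health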

Step:3 Set Up Loki:

Loki is an open-source log aggregation and querying system developed by Grafana Labs. It is designed to work seamlessly with other monitoring and observability tools like Prometheus and Grafana. Loki is used to collect, store, and query log data efficiently, making it an integral part of a modern observability stack.

Now, we proceed to install Loki with the steps below:

Step 3.1: Download the latest Loki binary file on the Ubuntu server.

curl -s https://api.github.com/repos/grafana/loki/releases/latest | grep browser_download_url | cut -d '"' -f 4 | grep loki-linux-amd64.zip | wget -i -

Step 3.2: Unzip the binary file to /usr/local/bin

unzip loki-linux-amd64.zip

sudo mv loki-linux-amd64 /usr/local/bin/loki

Step 3.3: Confirm installed version:

loki --version

Step 3.4: Create the required data directory for Loki

sudo mkdir -p /data/loki

Step 3.5: Create a new configuration file.

sudo vim /etc/loki-local-config.yaml

Add the following configuration to the file:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

query_scheduler:
  max_outstanding_requests_per_tenant: 4096
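Before wiring Loki into systemd, we can sanity-check this file; recent Loki binaries ship a -verify-config flag that parses the configuration and exits:

loki -config.file /etc/loki-local-config.yaml -verify-config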

Step 3.6: Create Loki service:

Create the following file under /etc/systemd/system to daemonize the Loki service:

sudo tee /etc/systemd/system/loki.service <<EOF
[Unit]
Description=Loki service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/loki -config.file /etc/loki-local-config.yaml

[Install]
WantedBy=multi-user.target
EOF

Step 3.7: Reload the systemd daemon, then start the Loki service:

sudo systemctl daemon-reload

sudo systemctl start loki.service
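To start Loki automatically at boot, we can also enable the unit, mirroring what we did for Grafana:

sudo systemctl enable loki.service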

Step 3.8: Check and see if the service has started successfully:

systemctl status loki

We can now access Loki metrics via http://server-IP:3100/metrics
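Loki also serves a readiness endpoint, which is a quick scripted check that it is up; it returns "ready" once startup has finished:

curl -s http://server-IP:3100/ready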

Step:4 Set Up Promtail Agent:

Promtail is an agent used for log scraping, enrichment, and forwarding in the context of Grafana’s Loki log aggregation and querying system.

Configure the Promtail agent on the target server whose Nginx logs we need to monitor.

Step 4.1: Download Promtail binary zip

curl -s https://api.github.com/repos/grafana/loki/releases/latest | grep browser_download_url | cut -d '"' -f 4 | grep promtail-linux-amd64.zip | wget -i -

Step 4.2: Once the file is downloaded, extract it into the directory /usr/local/bin

unzip promtail-linux-amd64.zip

sudo mv promtail-linux-amd64 /usr/local/bin/promtail

Step 4.3: Check version:

promtail --version

Step 4.4: Create a YAML configuration file for Promtail

sudo vim /etc/promtail-local-config.yaml

Add the following content to the file:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /data/loki/positions.yaml

clients:
  - url: http://<loki server ip>:3100/loki/api/v1/push

scrape_configs:
  - job_name: dev-loc
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx-logs
          host: <server host name>
          __path__: /var/log/nginx/json_access.log
    pipeline_stages:
      - json:
          expressions:
            status: "status"
          source: log
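Before daemonizing Promtail, we can check this configuration against real log lines without pushing anything to Loki by using its dry-run mode:

promtail -config.file /etc/promtail-local-config.yaml -dry-run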

Step 4.5: Create a service for Promtail

sudo tee /etc/systemd/system/promtail.service <<EOF
[Unit]
Description=Promtail service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/promtail -config.file /etc/promtail-local-config.yaml

[Install]
WantedBy=multi-user.target
EOF

Step 4.6: Reload systemd and start the Promtail service with the following commands

sudo systemctl daemon-reload

sudo systemctl start promtail.service

systemctl status promtail.service
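Once Promtail is running, its built-in web UI on the http_listen_port we configured lists the discovered targets, which is a handy check that the Nginx log file was picked up:

curl -s http://localhost:9080/targets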

Step 4.7: Configure Loki Data Source

  • Log in to the Grafana web interface and select 'Explore'. We will be prompted to create a data source.
  • Click on Add data source, then select Loki from the available options.
  • Input the following values for Loki:

Name: Loki

URL: http://127.0.0.1:3100

Click the "Save & Test" option. We should see a notification that the data source was added successfully.
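As an extra check from the shell, Loki's labels API should now list the labels Promtail attached (job, host, and so on):

curl -s http://127.0.0.1:3100/loki/api/v1/labels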

Step:5 Visualize Logs on Grafana with Loki

We can now visualize logs using Loki. Click on Explore, then select Loki as the data source.

Step 5.1 Create a dashboard to visualize the logs

Navigate to Dashboards -> New Dashboard -> + Add Visualization, and select the data source "Loki".

Using the sample queries below, we can filter the logs by status code and visualize them in the dashboard.

Step 5.2 Time series dashboard:

2xx status codes except 200:

count(count_over_time({host=~"$host"} | json | status =~ "2.." | status != "200" [$__interval]))

3xx status codes except 301 and 302:

count(count_over_time({host=~"$host"} | json | status =~ "3.." | status != "301" | status != "302" [$__interval]))

4xx status codes:

count(count_over_time({host=~"$host"} | json | status =~ "4.." [$__interval]))

Step 5.3 Logs dashboard:

3xx except 301 and 302:

{host=~"$host"} | json | status =~ "3.." | status != "301" | status != "302"

4xx status codes:

{host=~"$host"} | json | status =~ "4.."
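Server-side errors follow the same pattern; for example, a 5xx query in the same style:

{host=~"$host"} | json | status =~ "5.."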

Here is a sample dashboard with panels built from the above queries.

Step:6 Alerting Rule configuration

With Grafana Loki, we can centralize our logs, query them efficiently, and set up alerts based on log data.

Create alert rules that trigger whenever an application error response code appears.

Step 6.1: Configure the SMTP credentials in the grafana.ini configuration.
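A minimal sketch of the [smtp] section in grafana.ini, assuming a generic SMTP provider; the relay host, account, and addresses below are placeholders:

[smtp]
enabled = true
; placeholder relay, account, and addresses
host = smtp.example.com:587
user = alerts@example.com
password = your-password
from_address = alerts@example.com
from_name = Grafana

After saving, restart Grafana (sudo systemctl restart grafana-server) so the SMTP settings take effect.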

Step 6.2: Configure the alerts

Navigate to Alerting -> Alert rules -> create new alert rules using the above sample queries.

Finally, we receive a mail notification based on the response codes from the Nginx logs via Loki.
