Grafana: monitor Nginx Proxy Manager website
Learn how to monitor your website’s Nginx logs with Grafana
Contents
Introduction
Dashboards
Step 1 — Configure a custom log_format on your Nginx Proxy Manager
Step 2 — Set up log scraping tools
Step 3 — Setting up Grafana
Step 4 — Websites dashboards
Step 5 — Conclusion
Coming soon…
Introduction
To monitor the Nginx logs of your websites, you’ll need four different tools: Nginx Proxy Manager (or plain Nginx, though you’ll have to adapt this tutorial accordingly), Promtail, Loki and Grafana. In my case, I used the Docker version of each of these tools.
Dashboards
By the end of this tutorial, you should have Grafana dashboards like these:
Step 1 — Configure a custom log_format on your Nginx Proxy Manager
Here is the docker-compose.yml for my NPM container:
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
First of all, to be able to monitor our logs, we need Nginx to write them in a structured format.
To do this, you’ll need to create a new folder called “custom” in Nginx Proxy Manager’s data directory (it must end up at /data/nginx/custom inside the container, which is where NPM picks up custom configuration snippets).
cd /data/compose/nginx # change to the correct directory
mkdir custom
Create a new file named http_top.conf in that folder; NPM includes it at the top of the http block. Place the following JSON log format into this file.
log_format json_analytics escape=json '{'
'"time_local": "$time_local", '
'"remote_addr": "$remote_addr", '
'"request_uri": "$request_uri", '
'"status": "$status", '
'"server_name": "$server_name", '
'"request_time": "$request_time", '
'"request_method": "$request_method", '
'"bytes_sent": "$bytes_sent", '
'"http_host": "$http_host", '
'"http_x_forwarded_for": "$http_x_forwarded_for", '
'"http_cookie": "$http_cookie", '
'"server_protocol": "$server_protocol", '
'"upstream_addr": "$upstream_addr", '
'"upstream_response_time": "$upstream_response_time", '
'"ssl_protocol": "$ssl_protocol", '
'"ssl_cipher": "$ssl_cipher", '
'"http_user_agent": "$http_user_agent", '
'"remote_user": "$remote_user" '
'}';
Create a file named server_proxy.conf in the custom directory we created earlier.
touch server_proxy.conf
Place the following in the new file. Adapt paths to your needs.
access_log /data/logs/all_proxy_access.log json_analytics;
error_log /data/logs/all_proxy_error.log warn;
Of course, you’ll need to restart your Nginx Proxy Manager container (e.g. docker restart nginx-proxy-manager) before anything takes effect.
Example of output:
You should now see entries in this format in your logs:
{
  "time_local": "19/Sep/2023:15:04:54 +0000",
  "remote_addr": "192.168.10.77",
  "request_uri": "/webpage",
  "status": "200",
  "server_name": "your-domain.com",
  "request_time": "0.002",
  "request_method": "GET",
  "bytes_sent": "356",
  "http_host": "your-domain.com",
  "http_x_forwarded_for": "",
  "http_cookie": "",
  "server_protocol": "HTTP/2.0",
  "upstream_addr": "192.168.1.13:8080",
  "upstream_response_time": "0.003",
  "ssl_protocol": "TLSv1.3",
  "ssl_cipher": "TLS_AES_128_GCM_SHA256",
  "http_user_agent": "Mozilla/5.0",
  "remote_user": ""
}
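As a quick sanity check, you can verify that each line of the new access log is valid JSON. Here is a minimal sketch; the sample line below is hypothetical, so pipe in a line from your own /data/logs/all_proxy_access.log instead:

```shell
# Hypothetical sample line; in practice use: line=$(tail -n 1 /data/logs/all_proxy_access.log)
line='{"time_local": "19/Sep/2023:15:04:54 +0000", "status": "200", "request_uri": "/webpage"}'

# Parse it with Python's json module and print two of the fields
echo "$line" | python3 -c 'import json, sys; d = json.load(sys.stdin); print(d["status"], d["request_uri"])'
# prints: 200 /webpage
```

If the line fails to parse, double-check the log_format block above for missing quotes or commas.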
Step 2 — Set up log scraping tools
I’ve included below a diagram of the final infrastructure, with the links between the various tools. In my case, I have three websites: Dokuwiki, Plex and Grafana.
For the time being, we’re only interested in Promtail, Loki and Grafana. We’ll look at Blackbox and Prometheus later.
What’s the role of the different tools?
Before launching our containers, it’s important to understand the role of each of the products we’re setting up.
Grafana Loki does not index the contents of the logs but only indexes the labels of the logs. This reduces the efforts involved in processing and storing logs. Promtail, just like Prometheus, is a log collector for Loki that sends the log labels to Grafana Loki for indexing. — [computingforgeeks]
Now we can create a new docker-compose.yml for the three tools. You can find the Compose file for the containers below.
version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:2.8.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki
    volumes:
      - ./loki-data:/loki # Loki persistent data folder
      - ./config-loki:/etc/loki # Loki config folder

  promtail:
    image: grafana/promtail:2.8.0
    volumes:
      - /data/compose/nginx/logs/:/var/log # path to your NPM logs
      - ./config-promtail/config.yaml:/etc/promtail/config.yaml # Promtail config file
    networks:
      - loki

  grafana:
    image: grafana/grafana:9.3.13 # keep this fixed version of Grafana
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=false # disable anonymous login on Grafana
    entrypoint:
      - sh
      - -euc
      - |
        mkdir -p /etc/grafana/provisioning/datasources
        cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
        apiVersion: 1
        datasources:
          - name: Loki
            type: loki
            access: proxy
            orgId: 1
            url: http://loki:3100
            basicAuth: false
            isDefault: true
            version: 1
            editable: false
        EOF
        /run.sh
    ports:
      - "3000:3000"
    networks:
      - loki
    volumes:
      - ./grafana-data:/var/lib/grafana
Don’t run docker-compose yet; we still need to set up the environment for these containers. To do this, we’ll need to create several folders, as you can see below.
├── config-loki
├── config-promtail
├── docker-compose.yaml
├── grafana-data # set permission (chown) to "472" so that grafana can write in it
└── loki-data # set this one to "10001" for loki
config-loki: this folder will host our configuration file for Loki.
config-promtail: this is the Promtail configuration folder.
grafana-data: Grafana needs persistent data, so this is where it will be saved. Make sure the folder is owned by UID “472” (the user Grafana runs as inside the container); otherwise Grafana will not be able to write to it.
loki-data: same for Loki, with UID “10001” this time.
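The folder creation and ownership changes from the list above can be applied like this, assuming you run the commands from the directory holding docker-compose.yaml (sudo may or may not be needed on your host):

```shell
# Create the folders from the tree above
mkdir -p config-loki config-promtail grafana-data loki-data

# Grafana runs as UID 472 inside the container
sudo chown -R 472:472 grafana-data

# Loki runs as UID 10001
sudo chown -R 10001:10001 loki-data
```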
Config files
Now let’s create the configuration files for Promtail and Loki.
├── config-loki
│ └── local-config.yaml
├── config-promtail
│ └── config.yaml
├── docker-compose.yaml
├── grafana-data
└── loki-data
Promtail — config.yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
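Note that this config ships the raw log lines as-is; the dashboards in Step 4 parse the JSON at query time. If you prefer, Promtail can also extract fields at ingest time with a json pipeline stage. Here is an optional sketch, added under the job_name entry (keep high-cardinality fields like request_uri out of labels):

```yaml
    pipeline_stages:
      - json:
          expressions:
            status: status
            http_host: http_host
      - labels:
          status:
          http_host:
```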
Loki — local-config.yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

query_scheduler:
  max_outstanding_requests_per_tenant: 2048
You can now start the containers with the following command:
docker-compose up -d
Step 3 — Setting up Grafana
Now that our containers are up and running, you can go to the following URL to configure Grafana:
http://your-server-address:3000
You will be taken to this page, where you will be asked to log in. The default username and password for Grafana are both “admin”.
You will then be asked to change the default password to the one of your choice.
From the Grafana home panel, click on the Settings wheel and Data sources.
Here you’ll see that Loki is already configured as a data source.
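To confirm that logs are actually flowing, you can open the Explore view, select the Loki data source, and run a LogQL query against the job we defined in the Promtail config. For example (the filename label is added automatically by Promtail from the path we configured):

```
{job="varlogs", filename="/var/log/all_proxy_access.log"} | json
```

You can then filter on any extracted field, e.g. appending | status = `404`.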
Now click on the Dashboard tab, then “New” and finally “Import”.
From here you can click on “Upload JSON file” and enter the dashboard JSON file you want. This is how you’ll be able to add the dashboards we’ll see in step 4.
Step 4 — Websites dashboards
On Grafana, dashboards are imported in JSON format. Below is a link to the two dashboards I’ve created.
For this part, you’ll need to download the “website-all.json” and “websites.json” dashboards. The “website-all.json” dashboard will be used to view all traffic passing through the Nginx Proxy Manager. The “websites.json” dashboard will show traffic for each site individually.
Once you’ve downloaded the files, you can simply add them to your Grafana dashboard by adding the JSON file via the graphical interface as seen in step 3.
Websites — Dashboard
Once you’ve added the “websites” dashboard, open it and, in the fields at the top of the dashboard, select the file that contains all the NPM logs, not just those of a particular host (this is the file we set up in step 1 in the “server_proxy.conf” file).
Nothing will appear yet; before we can finally see our logs in the graphs, we’ll have to change another variable in the dashboard. For the moment, the “Website” search field contains placeholder domains that I’ve set by default. The idea here is to replace these placeholders with your own sites.
To do this, click on the dashboard settings wheel and then on “Variables”.
Click on the “website” variable, then under “Custom options” change the default domain names and replace them with your own.
Once you have saved your changes, go to the “Website” search field and select the website you want to see from the domain names you have added.
Website All — Dashboard
For the “website all” dashboard, it’s easier. Once imported, all you have to do is select the file containing all the NPM logs, and you’re ready to go!
Step 5 — Conclusion
You’re now ready to monitor your Nginx logs! In this tutorial I haven’t gone into detail about the various tools used. The idea here was that you should be able to monitor logs without necessarily needing to understand everything in depth. If you’d like to learn more about Grafana, other writers on Medium have explained this tool very well.
If you have any questions about this project, please leave a comment below this post, and I’ll try to answer them as soon as possible :)
Coming soon…
Soon I’ll be releasing another tutorial, this time on Prometheus and Blackbox, which will enable us to monitor the servers hosting our sites, and much more besides.