Integrating Pi-hole and Elastic Stack with Docker

Marcus Silva
18 min read · Jun 5, 2024


I will share a detailed guide on how to integrate two powerful tools: Pi-hole and Elastic Stack. We will cover the basic concepts of each technology and provide a step-by-step process for achieving this integration using Docker.

What is Pi-hole?

Pi-hole is a tool that acts as a custom DNS (Domain Name System) server. Think of the internet as a huge library with billions of books. When you want to find a specific book, you don’t need to scour the entire library, right? Instead, you go to a librarian and ask where you can find the desired book.

Just as the librarian knows where each book is in the library, DNS knows where each website is on the internet. It maps website names, like google.com or facebook.com, to their corresponding numerical addresses, the IP addresses, such as 192.0.2.1.

But Pi-hole has an extra advantage: it refuses to resolve domains known to serve ads or trackers, which makes browsing cleaner, faster, and more private.

What is Elastic Stack?

Elastic Stack is a suite of tools from Elastic designed to collect, store, search, and visualize data in a scalable and efficient manner. It consists of several technologies, including Elasticsearch, Logstash, Kibana, and Beats.

Imagine you have a factory that produces millions of products a day. Each product has its own unique barcode, and you need to track everything from the moment it is manufactured to the moment it is sold. Let’s see how Elastic Stack would apply in this case:

  • Elasticsearch: A distributed search and analytics engine designed to handle large volumes of data. It’s like a giant warehouse where you store all your products, organizing them intelligently so you can find them quickly when needed.
  • Logstash: A data processing pipeline suite that ingests, processes, and sends logs and other data from various sources to Elasticsearch. It’s like an assembly line in the factory that receives all the products, checks their condition, and prepares them for storage.
  • Kibana: A data visualization and analysis platform that allows you to create interactive dashboards and graphical visualizations of the data stored in Elasticsearch. It’s like the factory control panel, where you can view and analyze all your products.
  • Beats: A set of lightweight agents that collect specific data, such as performance metrics, application logs, and security information, and send them to Logstash or directly to Elasticsearch. They are like sensors placed in different parts of the factory to monitor the production process.

What is Docker?

Docker is an open-source platform that allows you to develop, ship, and run applications inside containers. Imagine you are building a LEGO set, where each piece is a separate component of your application, such as the database, the web server, and the user interface.

Docker lets you place each component in its own container, like small boxes, with everything it needs to function correctly. So, instead of manually installing and configuring each part of your application on different computers, you can use Docker to create pre-configured containers that can run anywhere. This makes developing, deploying, and scaling applications much easier and more consistent.

Without further ado… let’s get to work!

Setting Up the Environment

To ensure these solutions run efficiently, the host environment must meet the following requirements:

  • Minimum CPU: Processor with at least 4 cores.
  • Minimum Memory: At least 4 GB of RAM.
  • Minimum Storage: At least 50 GB of free disk space, preferably SSD.
  • Operating System: Linux distributions, such as Ubuntu, Debian, CentOS, or Fedora, are highly recommended due to support and compatibility with the solutions we will use shortly.
  • Docker: We will need Docker CE configured in our environment to manage applications via containers. If your system does not yet have it, follow the installation instructions in the Docker Engine documentation.
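
If Docker is not installed yet, one quick way to set it up for a lab environment is Docker's official convenience script (for production use, prefer the distribution-specific steps in the Docker Engine documentation):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo docker --version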

In our lab, we will use Ubuntu Server 22.04.4 LTS, but you can choose the distribution you feel most comfortable with, just adapt the commands to your operating system. First, open the terminal and create a folder for our project called lab, and access it:

sudo mkdir lab && cd lab

Start by updating the operating system packages with the commands below:

sudo apt update && sudo apt upgrade -y

Planning the Architecture

Let’s start planning our environment by installing Pi-hole and the Elastic Stack solutions using Docker Compose. This tool simplifies running applications with multiple containers, allowing you to define configurations and start services with a single command.

Pi-hole will act as an intermediary between the client and the modem/router, accepting or rejecting DNS requests for the internet. It generates events (logs) and stores them in the pihole.log file. This file can be read using Logstash or a Beats family agent, Filebeat.

In this case, I will use Logstash to collect the logs from the file generated by Pi-hole and bring some statistics through Pi-hole’s API (application programming interface). The data will be enriched and forwarded to Elasticsearch.

To monitor our environment, I will use the Beats family agent, Metricbeat, to apply observability on the performance of Elasticsearch and Kibana. All data collected by Logstash and Metricbeat, stored in Elasticsearch, will be visualized through Kibana.
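
Summarizing the plan, the data flow we are about to build looks roughly like this:

clients --DNS--> Pi-hole --> upstream DNS (router/ISP)
                  |
                  +--> pihole.log -----> Logstash --> Elasticsearch --> Kibana (visualization)
                  +--> API statistics -> Logstash --> Elasticsearch
Metricbeat (Elasticsearch, Kibana, Logstash and Docker metrics) --> Elasticsearch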

The skeleton of the applications and their respective configurations will be defined in the docker-compose.yml file. Open your terminal, create this file in the directory with your text editor of choice (vi, nano, etc.), and insert the following lines:

version: "3.8"

networks:
default:
name: lab
external: false

services:
init-volumes:
image: busybox
container_name: init-volumes
user: root
command: >
sh -c '
if [ ! -d /pihole-lab/etc-pihole ]; then mkdir -p /pihole-lab/etc-pihole; fi && \
if [ ! -d /pihole-lab/etc-dnsmasq.d ]; then mkdir -p /pihole-lab/etc-dnsmasq.d; fi && \
if [ ! -d /pihole-lab/log ]; then mkdir -p /pihole-lab/log; fi && \
if [ ! -d /elasticsearch-lab/data ]; then mkdir -p /elasticsearch-lab/data; fi && \
if [ ! -d /kibana-lab/data ]; then mkdir -p /kibana-lab/data; fi && \
if [ ! -d /logstash-lab/data ]; then mkdir -p /logstash-lab/data; fi && \
if [ ! -d /logstash-lab/pipeline ]; then mkdir -p /logstash-lab/pipeline; fi && \
if [ ! -d /logstash-lab/conf ]; then mkdir -p /logstash-lab/conf; fi && \
if [ ! -d /metricbeat-lab/data ]; then mkdir -p /metricbeat-lab/data; fi && \
chmod -R g+rwx /elasticsearch-lab /elasticsearch-lab/data && \
chgrp -R 0 /elasticsearch-lab /elasticsearch-lab/data && \
chmod -R 755 /kibana-lab /kibana-lab/data /logstash-lab /logstash-lab/data /metricbeat-lab /metricbeat-lab/data && \
chown -R 1000:0 /kibana-lab /kibana-lab/data
'
volumes:
- ./pihole-lab/etc-pihole:/pihole/etc-pihole
- ./pihole-lab/etc-dnsmasq.d:/pihole/etc-dnsmasq.d
- ./pihole-lab/log:/pihole/log
- ./escerts:/escerts
- ./elasticsearch-lab/data:/elasticsearch-lab/data
- ./kibana-lab/data:/kibana-lab/data
- ./logstash-lab/data:/logstash-lab/data
- ./logstash-lab/pipeline:/logstash-lab/pipeline
- ./logstash-lab/conf:/logstash-lab/conf
- ./metricbeat-lab/data:/metricbeat-lab/data
- ./:/host

pihole-lab:
depends_on:
init-volumes:
condition: service_completed_successfully
container_name: pihole-lab
hostname: pihole-lab
image: pihole/pihole:latest
ports:
- 53:53/tcp
- 53:53/udp
- ${PIH_PORT}:80/tcp
environment:
TZ: ${PIH_TZ}
WEBPASSWORD: ${PIH_PASSWORD}
volumes:
- ./pihole-lab/etc-pihole:/etc/pihole
- ./pihole-lab/etc-dnsmasq.d:/etc/dnsmasq.d
- ./pihole-lab/log:/var/log/pihole
restart: unless-stopped

setup:
depends_on:
init-volumes:
condition: service_completed_successfully
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
container_name: certs
volumes:
- ./escerts:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: elasticsearch-lab\n"\
" dns:\n"\
" - elasticsearch-lab\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana-lab\n"\
" dns:\n"\
" - kibana-lab\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 755 \{\} \;;
find . -type f -exec chmod 755 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch-lab:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://elasticsearch-lab:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/elasticsearch-lab/elasticsearch-lab.crt ]"]
interval: 1s
timeout: 5s
retries: 120

elasticsearch-lab:
depends_on:
setup:
condition: service_healthy
container_name: elasticsearch-lab
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- ./escerts:/usr/share/elasticsearch/config/certs
- ./elasticsearch-lab/data:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=elasticsearch-lab
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/elasticsearch-lab/elasticsearch-lab.key
- xpack.security.http.ssl.certificate=certs/elasticsearch-lab/elasticsearch-lab.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/elasticsearch-lab/elasticsearch-lab.key
- xpack.security.transport.ssl.certificate=certs/elasticsearch-lab/elasticsearch-lab.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=none
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${ES_MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120

kibana-lab:
depends_on:
elasticsearch-lab:
condition: service_healthy
container_name: kibana-lab
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
labels:
co.elastic.logs/module: kibana
volumes:
- ./escerts:/usr/share/kibana/config/certs
- ./kibana-lab/data:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana-lab
- ELASTICSEARCH_HOSTS=https://elasticsearch-lab:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- SERVER_SSL_ENABLED=true
- SERVER_SSL_CERTIFICATE=config/certs/kibana-lab/kibana-lab.crt
- SERVER_SSL_KEY=config/certs/kibana-lab/kibana-lab.key
- SERVER_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
mem_limit: ${KB_MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -k -s -I https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120

This docker-compose file defines a configuration to manage multiple services in Docker containers. The structure is organized into services, volumes, and networks, allowing secure and efficient integration and configuration.

First, the configuration specifies a network named lab, used by all services to facilitate internal communication. The init-volumes service will be used to create necessary directories and set correct permissions for persistent data of other services, ensuring volumes are ready before the main services are executed.

The pihole-lab will be our local DNS server and will initialize after the execution of the init-volumes service to ensure that the necessary volumes are configured. Environment variables such as TZ and WEBPASSWORD are defined to customize Pi-hole’s configuration (if you have issues installing Pi-hole on Ubuntu or Fedora systems, it is recommended to follow the steps in this link).

For the Elastic Stack-related services, such as setup, elasticsearch-lab, and kibana-lab, docker-compose will carry out a security certificate and authentication configuration process. The setup service is responsible for creating SSL certificates (data encryption between client and server) necessary for secure communication between the stack solutions. The elasticsearch-lab and kibana-lab services depend on the correct configuration of the certificates before starting.

Before starting our docker-compose, it is necessary to declare the variables in a .env file. Docker will use these variables to configure the services defined in the compose. In the current directory, enter the information below into a .env file (create it if it does not exist):

#Password for the 'elastic' user (at least 6 characters)
#In your environment, it is recommended to use a secure password; for demonstration purposes, I will use a simple password.
ELASTIC_PASSWORD=laboratory

#Password for the 'kibana_system' user (at least 6 characters)
#In your environment, it is recommended to use a secure password; for demonstration purposes, I will use a simple password.
KIBANA_PASSWORD=laboratory

#Elastic product version
STACK_VERSION=8.13.4

#Cluster name
CLUSTER_NAME=cluster-lab

#License for use: 'basic' or 'trial' to automatically start the 30-day trial license
LICENSE=basic
#LICENSE=trial

#Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200

#Port to expose Kibana to the host
KIBANA_PORT=5601

#Memory limit for Elasticsearch and Kibana (in bytes); 2147483648 bytes = 2 GB
ES_MEM_LIMIT=2147483648
KB_MEM_LIMIT=2147483648

#This key is used to encrypt cookies and other data in Kibana.
#It is recommended to use a random hexadecimal value with 32 or more characters (Generator: https://www.browserling.com/tools/random-hex).
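#You can also generate a key locally, for example with: openssl rand -hex 32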
ENCRYPTION_KEY=c34d38b3a14956121ff2170e5030b471551370178f43e5626eec58b04a30fae2

#Port to expose Pi-hole to the host.
#The default port is 80, but you can adjust it according to your needs.
PIH_PORT=8008

#Time zone for Pi-hole
PIH_TZ=America/Sao_Paulo

#Password for Pi-hole
#In your environment, it is recommended to use a secure password; for demonstration purposes, I will use a simple password.
PIH_PASSWORD=laboratory

Now we can initialize our solutions with the command sudo docker compose up -d.

Docker will read the configuration file to understand which services and dependencies are needed. Then, it checks if the container images are already available locally; if not, it downloads them from the image repository (Docker Hub). After that, the specified networks and volumes are created, preparing the ground for the containers to operate in isolation and with persistent storage if necessary.

After this step, the containers are initialized based on the provided configurations, such as environment variables and port mappings. Once the containers are running, you can list and manage them with the sudo docker container ls -a command, which shows all containers (including stopped ones) and details about their status (the --format option lets you customize the output, but it is optional).

To monitor events and troubleshoot potential issues during the container creation process, you can also view the logs of each container through the sudo docker container logs -f <container name> command.
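
For reference, the commands for this step look like the following (the --format string is just one possible layout, and pihole-lab is used here as an example container name):

sudo docker compose up -d
sudo docker container ls -a --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
sudo docker container logs -f pihole-lab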

Let’s check if each service is active before proceeding. Access the URLs listed below in your preferred browser and log in with the respective credentials declared in the .env file.

If you encounter a message in the browser stating that the site’s certificate is invalid, it is because the SSL certificate generated through the setup container is self-signed, which by default the browser does not recognize as a valid certificate. To proceed, locate the advanced options on the page and continue to the site.
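
If you want to inspect the self-signed certificate generated by the setup container, you can read it from the project directory with openssl (sudo because the files are created as root):

sudo openssl x509 -in escerts/elasticsearch-lab/elasticsearch-lab.crt -noout -subject -issuer -dates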

Pi-hole:

  • Password: laboratory
  • URL: http://<localhost or host IP>:8008/admin

Elasticsearch:

  • User: elastic | Password: laboratory
  • URL: https://<localhost or host IP>:9200
  • Save the value of the cluster_uuid key, as we will need it later.

Kibana:

  • User: elastic | Password: laboratory
  • URL: https://<localhost or host IP>:5601

At this stage of our environment, the DNS mechanism is ready to be used with Pi-hole: simply point the DNS server setting of your device or router to the IP address of the host where Pi-hole is installed.
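
Before reconfiguring your router, you can confirm that Pi-hole is answering queries by pointing a manual lookup at it. For example, with dig (available on Ubuntu via the dnsutils package), replacing 192.168.0.10 with the IP address of your host:

dig @192.168.0.10 google.com +short
dig @192.168.0.10 doubleclick.net +short

The first query should return a public IP address; the second, assuming the domain is on one of Pi-hole's default blocklists, should return 0.0.0.0.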

Additionally, Elasticsearch and Kibana are ready to receive and visualize our data. Let’s create some files that will help us collect events and metrics from Pi-hole through Logstash.

logstash-lab/conf/pipelines.yml

- pipeline.id: pihole_logs
  path.config: "/usr/share/logstash/pipeline/pihole_logs.conf"
- pipeline.id: pihole_metrics
  path.config: "/usr/share/logstash/pipeline/pihole_metrics.conf"

logstash-lab/pipeline/pihole_logs.conf

input {
  file {
    path => "/usr/share/logstash/lab/pihole.log"
  }
}

filter {
  grok {
    patterns_dir => ["/usr/share/logstash/lab/"]
    match => {
      "message" => "%{LOGTIME:timestamp} %{LOGPROG:program}: ((%{LOGACTIONFROM:action} _%{LOGPORT:port}._%{LOGPROT:protocol}.%{LOGDOMAIN:domain} %{LOGDIRECTIONFROM:direction} %{LOGEOLFROM:src})|(%{LOGACTIONTO:action} _%{LOGPORT:port}._%{LOGPROT:protocol}.%{LOGDOMAIN:domain} %{LOGDIRECTIONTO:direction} %{LOGEOLTO:dst})|(%{LOGACTIONIS:action} _%{LOGPORT:port}._%{LOGPROT:protocol}.%{LOGDOMAIN:domain} %{LOGDIRECTIONIS:direction} %{LOGEOLIS:dst})|(%{LOGPATH:path} %{LOGDOMAIN:domain} %{LOGDIRECTIONIS:direction} %{LOGEOLIS:dst})|(%{LOGACTIONFROM:action} %{LOGDOMAIN:domain} %{LOGDIRECTIONFROM:direction} %{LOGEOLFROM:src})|(%{LOGACTIONTO:action} %{LOGDOMAIN:domain} %{LOGDIRECTIONTO:direction} %{LOGEOLTO:dst})|(%{LOGACTIONIS:action} %{LOGDOMAIN:domain} %{LOGDIRECTIONIS:direction} %{LOGEOLIS:dst})|(?<Others>.+))"
    }
  }

  date {
    match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    timezone => "America/Sao_Paulo"
    target => "@timestamp"
  }

  if [dst] !~ /^(\d{1,3}\.){3}\d{1,3}$/ {
    mutate {
      rename => { "dst" => "response" }
      gsub => [ "response", "[<>]", "" ]
    }
  }

  if [dst] =~ /^(\d{1,3}\.){3}\d{1,3}$/ and [dst] !~ "^10\." and [dst] !~ "^127\.0\." and [dst] !~ "^192\.168\." and [dst] !~ "^172\.(1[6789]|2[0-9]|30|31)\.[0-9]{1,3}\.[0-9]{1,3}" and [dst] != "0.0.0.0" {
    geoip {
      source => "dst"
      target => "destination"
      fields => ["city_name", "country_name", "country_code2", "region_name", "region_code", "latitude", "longitude"]
    }
  }

  if [src] =~ /^(\d{1,3}\.){3}\d{1,3}$/ and [src] !~ "^10\." and [src] !~ "^127\.0\." and [src] !~ "^192\.168\." and [src] !~ "^172\.(1[6789]|2[0-9]|30|31)\.[0-9]{1,3}\.[0-9]{1,3}" and [src] != "0.0.0.0" {
    geoip {
      source => "src"
      target => "source"
      fields => ["city_name", "country_name", "country_code2", "region_name", "region_code", "latitude", "longitude"]
    }
  }

  mutate {
    remove_field => [ "@version", "message", "[event]", "[host]", "[log]", "timestamp" ]
  }
}

output {
  elasticsearch {
    index => "pihole-logs_%{+dd.MM.yyyy}"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    ssl_certificate_authorities => "${CA_CERT}"
    template => "/usr/share/logstash/lab/template.json"
    template_name => "pihole-logs"
    template_overwrite => true
  }
}

logstash-lab/pipeline/pihole_metrics.conf

input {
  http_poller {
    urls => {
      urlname => "${PIH_ENDPOINT}"
    }
    request_timeout => 20
    schedule => { every => "20s" }
    codec => "json"
  }
}

filter {
  mutate {
    add_field => { "[@metadata][index]" => "pihole_metrics-%{+YYYY.MM.dd}" }
    remove_field => [ "@version", "[event]" ]
  }
}

output {
  elasticsearch {
    index => "pihole-logs_%{+dd.MM.yyyy}"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    ssl_certificate_authorities => "${CA_CERT}"
    template => "/usr/share/logstash/lab/template.json"
    template_name => "pihole-logs"
    template_overwrite => true
  }
}

logstash-lab/data/template.json

{
"index_patterns": ["pihole-logs*"],
"template": {
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
},
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"Others": {
"type": "keyword"
},
"action": {
"type": "keyword",
"ignore_above": 256
},
"ads_blocked_today": {
"type": "integer"
},
"ads_percentage_today": {
"type": "float"
},
"clients_ever_seen": {
"type": "integer"
},
"destination": {
"properties": {
"geo": {
"properties": {
"city_name": {
"type": "keyword",
"ignore_above": 256
},
"country_iso_code": {
"type": "keyword",
"ignore_above": 256
},
"country_name": {
"type": "keyword",
"ignore_above": 256
},
"location": {
"properties": {
"lat": {
"type": "float"
},
"lon": {
"type": "float"
}
}
},
"region_code": {
"type": "keyword",
"ignore_above": 256
},
"region_name": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"direction": {
"type": "keyword",
"ignore_above": 256
},
"dns_queries_all_replies": {
"type": "integer"
},
"dns_queries_all_types": {
"type": "integer"
},
"dns_queries_today": {
"type": "integer"
},
"domain": {
"type": "keyword",
"ignore_above": 256
},
"domains_being_blocked": {
"type": "integer"
},
"dst": {
"type": "keyword",
"ignore_above": 256
},
"gravity_last_updated": {
"properties": {
"absolute": {
"type": "integer"
},
"file_exists": {
"type": "boolean"
},
"relative": {
"properties": {
"days": {
"type": "integer"
},
"hours": {
"type": "integer"
},
"minutes": {
"type": "integer"
}
}
}
}
},
"privacy_level": {
"type": "integer"
},
"program": {
"type": "keyword",
"ignore_above": 256
},
"queries_cached": {
"type": "integer"
},
"queries_forwarded": {
"type": "integer"
},
"reply_BLOB": {
"type": "integer"
},
"reply_CNAME": {
"type": "integer"
},
"reply_DNSSEC": {
"type": "integer"
},
"reply_DOMAIN": {
"type": "integer"
},
"reply_IP": {
"type": "integer"
},
"reply_NODATA": {
"type": "integer"
},
"reply_NONE": {
"type": "integer"
},
"reply_NOTIMP": {
"type": "integer"
},
"reply_NXDOMAIN": {
"type": "integer"
},
"reply_OTHER": {
"type": "integer"
},
"reply_REFUSED": {
"type": "integer"
},
"reply_RRNAME": {
"type": "integer"
},
"reply_SERVFAIL": {
"type": "integer"
},
"reply_UNKNOWN": {
"type": "integer"
},
"response": {
"type": "keyword",
"ignore_above": 256
},
"src": {
"type": "keyword",
"ignore_above": 256
},
"status": {
"type": "keyword",
"ignore_above": 256
},
"unique_clients": {
"type": "integer"
},
"unique_domains": {
"type": "integer"
}
}
}
}
}

logstash-lab/data/pattern

LOGTIME ^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s{1,2}[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2}
LOGPROG dnsmasq\[\d{1,}\]
LOGACTIONFROM query\[(A{1,5}|HTTPS|SOA|TXT|PTR|SVCB|SRV|NAPTR|NS|type=\d{1,5})\]
LOGACTIONTO forwarded
LOGACTIONIS reply|regex blacklisted|exactly blacklisted|special domain|cached|gravity blocked|Rate\-limiting|config|%{LOGACTIONOTHER}
LOGACTIONOTHER (Apple iCloud Private Relay domain)
LOGACTION %{LOGACTIONIS}|%{LOGACTIONTO}|%{LOGACTIONFROM}|%{LOGACTIONOTHER}
LOGDIRECTIONFROM from
LOGDIRECTIONIS is
LOGDIRECTIONTO to
LOGDOMAIN (%{LOGIP}|error|((?:[A-Z0-9a-z-_~:\/?#\[\\\-@!\$&'\(\)\*\+,:%=]*)\.?)*)
LOGEMAIL [a-zA-Z][a-zA-Z0-9_.+-=:]+@%{LOGDOMAIN}
LOGIPV4ELEMENT [0-9]{1,3}
LOGIPV6ELEMENT ([0-9]|[a-f]|[A-F]){0,4}:{1,2}
LOGIPV4 %{LOGIPV4ELEMENT}\.%{LOGIPV4ELEMENT}\.%{LOGIPV4ELEMENT}\.%{LOGIPV4ELEMENT}
LOGIPV6 %{LOGIPV6ELEMENT}{1,8}
LOGIP %{LOGIPV4}|%{LOGIPV6}
LOGEOLIS .+$
LOGEOLFROM %{LOGIPV4}
LOGEOLTO %{LOGIPV4}
LOGPORT \d+
LOGPROT https|http
LOGPATH \/(?:[^\/\0]+\/)*[^\/\0]+

The pipelines.yml file configures Logstash pipelines, specifying which configuration files should be used to process different types of logs, while the pihole_logs.conf and pihole_metrics.conf ingestion pipelines define how Logstash should process and enrich Pi-hole specific logs.

In the input section, it is specified where Logstash should get the logs from. In the filter section, transformations are applied to the data with Grok (a plugin for parsing and extracting patterns from unstructured text) and GeoIP (a plugin for identifying an IP address and geographic information about it). In the output section, the data is sent to Elasticsearch for storage.

Grok pattern | Target value

Additionally, the template.json file defines the index mapping structure in Elasticsearch to store Pi-hole log data, specifying the field types and their properties, such as date, text, and numbers, to facilitate data indexing and search. Finally, the pattern file contains predefined regular expressions to identify and extract specific patterns in Pi-hole logs through Grok.
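
To make this more concrete, here is a hypothetical pihole.log line in the dnsmasq format (values will vary on your network):

Jun  5 14:23:01 dnsmasq[612]: query[A] example.com from 192.168.1.50

From this line, the grok pattern above would extract roughly: timestamp = "Jun  5 14:23:01", program = "dnsmasq[612]", action = "query[A]", domain = "example.com", direction = "from", and src = "192.168.1.50"; the date filter then turns the timestamp into @timestamp.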

We need the Pi-hole API key for Logstash to perform HTTP queries. This key is found inside the file located at the path pihole-lab/etc-pihole/setupVars.conf, so we can insert it into the .env file using the command below:

PIH_API_KEY=$(grep WEBPASSWORD pihole-lab/etc-pihole/setupVars.conf | cut -d'=' -f2) && sudo sh -c "echo '\n# Pi-hole API Key:\nPIH_API_KEY=$PIH_API_KEY\n' >> .env"

This command performs two operations: first, it searches for the Pi-hole API key in the setupVars.conf configuration file and extracts it using the grep and cut commands. Then, it adds this key to the .env file using sudo sh -c to obtain superuser permissions and echo to write the key in the format accepted by Docker to declare variables.
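
With the key available in the current shell (from the $(grep …) step above), you can check that the endpoint Logstash will poll actually answers. Assuming the Pi-hole port from our .env (8008):

curl -s "http://localhost:8008/admin/api.php?summaryRaw&auth=${PIH_API_KEY}"

The response should be a JSON document with counters such as dns_queries_today and ads_blocked_today, the same fields mapped in template.json.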

Now let’s configure our Metricbeat to collect metrics from our environment. Create a file inside the metricbeat-lab folder called metricbeat.yml and insert the code below:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

metricbeat.modules:
  - module: elasticsearch
    xpack.enabled: true
    period: 10s
    hosts: ${ELASTIC_HOSTS}
    ssl.certificate_authorities: "certs/ca/ca.crt"
    ssl.certificate: "certs/elasticsearch-lab/elasticsearch-lab.crt"
    ssl.key: "certs/elasticsearch-lab/elasticsearch-lab.key"
    username: ${ELASTIC_USER}
    password: ${ELASTIC_PASSWORD}
    ssl.enabled: true

  - module: logstash
    xpack.enabled: true
    period: 10s
    hosts: ${LOGSTASH_HOSTS}

  - module: kibana
    metricsets:
      - stats
    period: 10s
    hosts: ${KIBANA_HOSTS}
    username: ${ELASTIC_USER}
    password: ${ELASTIC_PASSWORD}
    ssl.certificate_authorities: "certs/ca/ca.crt"
    xpack.enabled: true

  - module: docker
    metricsets:
      - "container"
      - "cpu"
      - "diskio"
      - "healthcheck"
      - "info"
      #- "image"
      - "memory"
      - "network"
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s
    enabled: true

processors:
  - add_host_metadata: ~
  - add_docker_metadata: ~

setup.kibana:
  host: ${KIBANA_HOSTS}
  ssl:
    certificate: "certs/elasticsearch-lab/elasticsearch-lab.crt"
    key: "certs/elasticsearch-lab/elasticsearch-lab.key"
    certificate_authorities: "certs/ca/ca.crt"
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  protocol: "HTTPS"

monitoring:
  enabled: true
  elasticsearch:
    hosts: ${ELASTIC_HOSTS}
    username: ${ELASTIC_USER}
    password: ${ELASTIC_PASSWORD}
    ssl:
      certificate: "certs/elasticsearch-lab/elasticsearch-lab.crt"
      certificate_authorities: "certs/ca/ca.crt"
      key: "certs/elasticsearch-lab/elasticsearch-lab.key"
    protocol: "https"
  cluster_uuid: "heq4k4ZiQRWBiNlFTGc_QQ"

setup.template.settings:
  index.number_of_replicas: 0

output.elasticsearch:
  hosts: ${ELASTIC_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl:
    certificate: "certs/elasticsearch-lab/elasticsearch-lab.crt"
    certificate_authorities: "certs/ca/ca.crt"
    key: "certs/elasticsearch-lab/elasticsearch-lab.key"
The above configurations define how Metricbeat should collect and send metrics from Metricbeat itself, Elasticsearch, Logstash, Kibana, and Docker to Elasticsearch, ensuring security through SSL certificates and user authentication.

Remember when I asked you to save the cluster_uuid value when we accessed Elasticsearch through the browser? Replace the value of the monitoring.cluster_uuid setting in the file above with the UUID of your own cluster.
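
If you did not note the UUID down, you can retrieve it again from the command line (credentials from our .env; -k skips validation of the self-signed certificate):

curl -sk -u elastic:laboratory https://localhost:9200 | grep cluster_uuid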

With the configurations defined, we can update our docker-compose. Insert the following lines at the end of the file:

  logstash-lab:
    depends_on:
      pihole-lab:
        condition: service_healthy
      elasticsearch-lab:
        condition: service_healthy
    container_name: logstash-lab
    hostname: logstash-lab
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    labels:
      co.elastic.logs/module: logstash
    volumes:
      - ./escerts:/usr/share/logstash/certs
      - ./logstash-lab/data:/usr/share/logstash/lab
      - ./pihole-lab/log/pihole.log:/usr/share/logstash/lab/pihole.log
      - ./logstash-lab/pipeline:/usr/share/logstash/pipeline
      - ./logstash-lab/conf/pipelines.yml:/usr/share/logstash/config/pipelines.yml
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://elasticsearch-lab:9200
      - CA_CERT=certs/ca/ca.crt
      - PIH_ENDPOINT=http://pihole-lab/admin/api.php?summaryRaw&auth=${PIH_API_KEY}

  metricbeat-setup:
    depends_on:
      elasticsearch-lab:
        condition: service_healthy
      kibana-lab:
        condition: service_healthy
    image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
    container_name: metricbeat-setup
    command: setup -v -c "/usr/share/metricbeat/metricbeat.yml"
    volumes:
      - ./escerts:/usr/share/metricbeat/certs
      - "./metricbeat-lab/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://elasticsearch-lab:9200
      - KIBANA_HOSTS=https://kibana-lab:5601

  metricbeat-lab:
    depends_on:
      elasticsearch-lab:
        condition: service_healthy
      kibana-lab:
        condition: service_healthy
      metricbeat-setup:
        condition: service_started
    image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
    container_name: metricbeat-lab
    command: ["--strict.perms=false", "-system.hostfs=/hostfs"]
    hostname: metricbeat-lab
    user: root
    volumes:
      - ./escerts:/usr/share/metricbeat/certs
      - ./metricbeat-lab/data:/usr/share/metricbeat/data
      - "./metricbeat-lab/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"
      - "/proc:/hostfs/proc:ro"
      - "/:/hostfs:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://elasticsearch-lab:9200
      - KIBANA_HOSTS=https://kibana-lab:5601
      - LOGSTASH_HOSTS=http://logstash-lab:9600

We added the Logstash service to our docker-compose, mapping the configuration and pipeline files as volumes for the container. For Metricbeat, we added two services: one sets up the dashboards in Kibana and the index templates in Elasticsearch, and the other collects metrics from our environment, mapping the necessary volumes and permissions to read host metrics.

With these final adjustments in place, we can run sudo docker compose down && sudo docker compose up -d to apply the new configuration.
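
Before opening Kibana, you can confirm from the terminal that the Pi-hole indices are being created in Elasticsearch (it may take a minute or two for the first events to arrive):

curl -sk -u elastic:laboratory "https://localhost:9200/_cat/indices/pihole-logs*?v"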

After running the commands, let’s go to Kibana to check if the data is reaching Elasticsearch. Access the paths below through the side menu located on the left:

  • ≡ > Management > Stack Monitoring
  • ≡ > Observability > Overview > Hosts
  • ≡ > Analytics > Dashboards > [Metricbeat Docker] Overview ECS

At the end of all these processes, we can see that our environment is healthy and operational according to the configurations we performed. Now we need to visualize the events generated by Pi-hole, so let's access the path ≡ > Management > Stack Management > Kibana > Data Views and create a data view pointing to the Pi-hole indices (index pattern pihole-logs*).

Access the path ≡ > Analytics > Discover and select the view we created a moment ago by clicking the data view selector in the upper left corner, which shows the name of the current view. You should now see the parsed Pi-hole events on the screen.

From this view, we can create our dashboard through the path ≡ > Analytics > Dashboards > Create dashboard. Feel free to combine indicators creatively and build screens that monitor Pi-hole's main metrics, like the one I developed in Kibana.

Plus:

By acting as a DNS server, Pi-hole also lets us configure a local domain that resolves to the IP of your application, instead of typing the address manually when accessing the site. Expand the left side menu, go to Local DNS > DNS Records, and add the desired domain and the address of the host where the application is hosted, as shown below:

Pi-hole Local DNS settings
Output: Kibana accessed through the configured local domain
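
You can confirm that the new record resolves with another lookup against Pi-hole (assuming, for example, a record kibana.lab pointing to a host at 192.168.0.10):

dig @192.168.0.10 kibana.lab +short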

We can combine this approach with a reverse proxy such as NGINX or Apache, and we can also add custom domains to Pi-hole for exceptions (allowlisting) and/or blocking of DNS resolution, but we will leave that for part two of this article…

I hope you enjoyed the content! I would be happy to hear your opinion, so don’t forget to leave your feedback in the comments.

See you soon. Until next time!
