Docker Swarm and Prometheus

Daz Wilkin
6 min read · Jul 2, 2017

I have a growing collection of IoT devices and interest in container clustering. I spend most time with Kubernetes but, for home hacking, Docker Swarm is suitable and keeps me familiar with both technologies.

Prometheus monitoring itself and 7 nodes
Swarm Visualizer does an excellent job of summarizing

This is a short summary of running a Swarm that spans 3 architectures and deploying Prometheus and Prometheus Node Exporter to them. The Swarm includes an Intel-based Skull Canyon NUC (awesome machine!) as the leader, 2x Pine64s (under-rated!), 3x Pi Zeros, and 2x Pi Zero Ws (awesome):

docker node ls

HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS
pine64-01      Ready    Active
pine64-02      Ready    Active
skull-canyon   Ready    Active         Leader
zero-01        Ready    Active
zero-02        Ready    Active
zero-03        Ready    Active
zero-w-01      Ready    Active
zero-w-02      Ready    Active

Different devices run different ARM architectures, and each architecture requires a specific Docker image. I leveraged Emile Vauge’s whoami and Prometheus Node Exporter.

whoami

Building a static Go binary called whoami.zero on the skull for the Zeros requires:

env \
GOOS=linux \
GOARCH=arm \
GOARM=6 \
go build -o whoami.zero app.go

and for the Pines requires:

env \
GOOS=linux \
GOARCH=arm64 \
go build -o whoami.pine app.go
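The two builds differ only in the target architecture settings. A small helper keeps them straight (a sketch; `build_env` is my name, not part of the article, and note that GOARM only applies to 32-bit ARM builds — arm64 ignores it):

```shell
# Print the cross-compile environment for each device class.
# build_env is a hypothetical helper for illustration.
build_env() {
  case "$1" in
    zero) echo "GOOS=linux GOARCH=arm GOARM=6" ;;  # Pi Zero: 32-bit ARMv6
    pine) echo "GOOS=linux GOARCH=arm64" ;;        # Pine64: 64-bit ARMv8
  esac
}

build_env zero
build_env pine
```

You can then run, e.g., `env $(build_env zero) go build -o whoami.zero app.go`.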

The Dockerfiles are similar for both; I created two, one suffixed “.zero” and one “.pine”. Replace “zero” with “pine” as needed:

FROM scratch

COPY whoami.zero /

ENTRYPOINT ["/whoami.zero"]
EXPOSE 8000

Then, I’ve been using the following bash formula for build-push. Again, replace “zero” with “pine” as needed. Don’t forget to include that final “.” in the build step!

export TAG=$(date +%y%m%d%H%M)

docker build \
--tag=dazwilkin/whoami-zero:${TAG} \
--file=Dockerfile.zero \
.

docker push dazwilkin/whoami-zero:${TAG}
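Since the two variants differ only in the suffix, both build-push steps can be driven from one loop. A sketch, shown as a dry run (each command is echoed rather than executed; drop the `echo`s to run it for real):

```shell
# Dry-run build-and-push for both image variants under one shared tag.
TAG=$(date +%y%m%d%H%M)

for ARCH in zero pine; do
  echo docker build --tag=dazwilkin/whoami-${ARCH}:${TAG} --file=Dockerfile.${ARCH} .
  echo docker push dazwilkin/whoami-${ARCH}:${TAG}
done
```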

If you’d prefer to use my pre-built images, these are available as:

https://hub.docker.com/r/dazwilkin/whoami-zero/tags/
https://hub.docker.com/r/dazwilkin/whoami-pine/tags/

Node Exporter

Thanks to fish’s blog, I was encouraged to try Node Exporter on the Zeros. I used a slightly more advanced Go build this time. For the Zeros:

env \
GOOS=linux \
GOARCH=arm \
GOARM=6 \
CGO_ENABLED=0 \
go build \
--ldflags '-extldflags "-static"' \
-o node_exporter.zero \
node_exporter.go

and for the Pines:

env \
GOOS=linux \
GOARCH=arm64 \
CGO_ENABLED=0 \
go build \
--ldflags '-extldflags "-static"' \
-o node_exporter.pine \
node_exporter.go

The Dockerfiles are similar to before. Don’t forget to replace “zero” with “pine” and create both images:

FROM scratch

COPY node_exporter.zero /bin/node_exporter.zero

EXPOSE 9100
ENTRYPOINT [ "/bin/node_exporter.zero" ]

And the build-push. Don’t forget to include that final “.” in the build step!

export TAG=$(date +%y%m%d%H%M)

docker build \
--tag=dazwilkin/zero-exporter:${TAG} \
--file=Dockerfile.zero \
.

docker push dazwilkin/zero-exporter:${TAG}

If you’d prefer to use my pre-built images, these are available as:

https://hub.docker.com/r/dazwilkin/zero-exporter/
https://hub.docker.com/r/dazwilkin/pine-exporter/

Docker Labels

Docker will attempt to deploy service tasks to any available node in the Swarm. The images described above will only run on ARM nodes (not the Skull Canyon), and the Zero images will only run on the Zeros, not the Pines, and vice versa.

Docker provides various constraints when deploying services. I found it preferable to add user-defined labels to the ARM nodes, using the ARM version number: 6 (== Zero) and 8 (== Pine). It’s then possible to use this label as a constraint when deploying services, to ensure the correct image is sent to the correct architecture.

The easiest way to apply the labels is to filter for each type of machine and apply the label in one pipeline. Here’s the script for the Zeros. It filters the docker node ls output by the Zero names and passes the node IDs to xargs, which calls docker node update to apply the label to each Zero:

docker node ls \
--filter=name=zero \
--format="{{.ID}}" \
| xargs \
--replace={} \
docker node update \
--label-add=arm=6 \
{}
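The Pines get the analogous treatment with `--filter=name=pine` and `--label-add=arm=8`. To see what the xargs fan-out does, you can feed it sample IDs in place of the `docker node ls` output (a simulation; the IDs are made up and the `echo` keeps it a dry run):

```shell
# Simulate the Pine labelling pipeline with sample node IDs.
printf '%s\n' sample-id-1 sample-id-2 \
| xargs \
  --replace={} \
  echo docker node update --label-add=arm=8 {}
```

Each input line is substituted for {} in one docker node update invocation.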

The easiest way I found to enumerate nodes’ labels is by using docker node inspect and filtering the output with the awesome jq:

docker node ls \
--filter=name=zero \
--format="{{.ID}}" \
| xargs \
--replace={} \
docker node inspect {} \
| jq ".[0].Spec.Labels"

Services

Docker Swarm Visualizer is excellent. I recommend you start it now and watch what happens. I publish it on 8888 since 8080 is commonly used:

docker run \
--name=swarm-visualizer \
--interactive \
--tty \
--detach \
--publish=8888:8080 \
--env=HOST=localhost \
--volume=/var/run/docker.sock:/var/run/docker.sock \
manomarks/visualizer

Then:

http://localhost:8888

NB how the nodes correctly reflect the labels previously applied.

OK… we’ll create the node exporters and the whoami services, and monitor everything with Prometheus.

Because we’re deploying distinct services for the Zeros (ARMv6) and the Pines (ARMv8), each service needs a distinct name and published port. I’ll use a convention of ‘z’ names and X001 ports for the Zeros, and ‘p’ names and X002 ports for the Pines:

docker service create \
--name=nodez \
--publish=9101:9100 \
--constraint=node.labels.arm==6 \
--mount=type=bind,source=/proc,target=/host/proc,readonly \
--mount=type=bind,source=/sys,target=/host/sys,readonly \
--mount=type=bind,source=/,target=/rootfs,readonly \
--replicas=5 \
dazwilkin/zero-exporter:1706202032 \
-collector.sysfs /host/sys \
-collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
docker service create \
--name=nodep \
--publish=9102:9100 \
--constraint=node.labels.arm==8 \
--mount=type=bind,source=/proc,target=/host/proc,readonly \
--mount=type=bind,source=/sys,target=/host/sys,readonly \
--mount=type=bind,source=/,target=/rootfs,readonly \
--replicas=2 \
dazwilkin/pine-exporter:1707021014 \
-collector.sysfs /host/sys \
-collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
docker service create \
--name=whoamiz \
--publish=8001:8000 \
--constraint=node.labels.arm==6 \
--replicas=5 \
dazwilkin/whoami-zero:1706181526
docker service create \
--name=whoamip \
--constraint=node.labels.arm==8 \
--publish=8002:8000 \
--replicas=4 \
dazwilkin/whoami-pine:1707021117

NB how the services use constraint=node.labels.arm to restrict their deployments to the correct machine architectures. I create node exporters, one for each Zero (there are 5) and one for each Pine (there are 2).

Once the images are pulled and containers created, Visualizer should show something similar to the following:

Swarm Visualizer shows each node, each container and the node labels

Prometheus

All that remains is to deploy Prometheus to the Skull Canyon with a configuration pointing it to itself (self-monitoring), the Zeros, and the Pines. I recommend using IP addresses for each of these, though I use a combination of IPs and hostnames. Here’s my prometheus.yml:

global:
  scrape_interval: 15s
  external_labels:
    monitor: 'monitor'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'zero'
    scrape_interval: 5s
    static_configs:
      - targets: [
          '192.168.1.191:9101',
          '192.168.1.192:9101',
          '192.168.1.193:9101',
          'zero-w-01:9101',
          'zero-w-02:9101'
        ]

  - job_name: 'pine'
    scrape_interval: 5s
    static_configs:
      - targets: [
          'pine64-01:9102',
          'pine64-02:9102'
        ]

You may reproduce the “job_name” sections freely. The array of targets should correspond to the different types of your machines. In this case, in the “zero” section, I have 5 targets, one for each of the 5 Zeros and, in the “pine” section, 2 targets, one for each of the Pines. Change these to your machines’ IP addresses.
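If you have many hosts, the targets array is tedious to write by hand. A small sketch that generates it from a host list (the helper name `targets_line` and the host variables are mine; the hosts shown are the ones from my config — substitute your own):

```shell
# Generate a one-line YAML targets array for a scrape job from a host list.
targets_line() {
  port=$1; shift
  out=""
  for h in "$@"; do
    # Append each quoted 'host:port' entry, comma-separated.
    out="${out:+${out}, }'${h}:${port}'"
  done
  printf "targets: [%s]\n" "$out"
}

ZERO_HOSTS="192.168.1.191 192.168.1.192 192.168.1.193 zero-w-01 zero-w-02"
targets_line 9101 $ZERO_HOSTS
targets_line 9102 pine64-01 pine64-02
```

Paste the output under the matching static_configs entry.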

Then, assuming you create the Prometheus service in the same directory as the prometheus.yml:

docker service create \
--name=prometheus \
--publish=9090:9090 \
--constraint=node.role==manager \
--mount=type=bind,\
source=$PWD/prometheus.yml,\
target=/etc/prometheus/prometheus.yml \
prom/prometheus

Then to open Prometheus and list these targets:

http://localhost:9090/targets
Prometheus listing itself, 5 Zero and 2 Pine targets

And, I use a default Prometheus graph, but you can use any or all of the available metrics. Click the “Graph” tab, select a metric of your choosing from the dropdown (I’m using “go_goroutines”), then click “Execute” and then “Graph”:

Prometheus graphing “go_goroutines” from itself, 5 Zeros and 2 Pines

That’s it!

It should be trivial to extend this using Grafana for visualization: deploy Grafana, add Prometheus as a datasource pointing at localhost:9090, and you should be able to graph the same metrics.
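For a repeatable setup, newer Grafana releases (5.0+) can provision the datasource from a file instead of through the UI. A minimal sketch, assuming Grafana’s file-based provisioning; the file would live under Grafana’s provisioning/datasources directory, and the URL assumes Grafana runs on the same host as Prometheus:

```yaml
# datasources.yml — hypothetical Grafana provisioning fragment
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
```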

Summary

A whirlwind tour through monitoring heterogeneous (x86-64 and ARM) Docker Swarm nodes using Prometheus, Prometheus Node Exporter, and a simple whoami app.
