Running Hashicorp Nomad, Consul, Pihole and Gitea on Raspberry Pi 3 B+
Hello, everyone!
I’d like to share my experience of setting up some useful services for my LAN, using Raspberry Pi 3 B+ and Hashicorp Nomad.
Hardware
I think by now everyone knows about the Raspberry Pi single-board computer, so there's not much to say about it. :)
Here are the specs of the Model 3 B+ which I currently use:
For data storage, I strongly suggest investing in an external storage solution,
for example a NAS or a RAID enclosure.
Attach the storage to the Raspberry Pi and mount it somewhere on the filesystem, for example /mnt/storage; it will be used to store data related to Nomad and Consul.
OS/Software
For OS, I’m using openSUSE Tumbleweed ARM, because — why not? :)
I haven’t encountered any issues running it yet.
You should be able to use any OS, as long as it supports ARM and the software stack we’re going to be using.
Software Stack
After you’ve installed an OS and finished the initial configuration of your Raspberry Pi, it’s time to install the software stack that we will use to run and manage our services.
We will use the following software:
Docker
We’re going to use Docker to run Gitea and Pihole containers.
Both Gitea and Pihole have official Docker images available, so it’s a good starting point.
Nomad
Nomad will be used as a container manager and resource/task scheduler.
Initially, I was looking at the K3s project for this purpose,
but after some usage I found it to be a bit of an overkill. Also, I’m not very familiar with Kubernetes, and Nomad had a less steep learning curve.
After some research and playing around, Nomad seemed better suited for the task.
Consul
Consul will be used mainly for service-discovery purposes.
Nomad does not provide any service-discovery functionality by itself, but relies on Consul to provide that functionality.
Software Installation/Configuration
Docker
If you’re running openSUSE Tumbleweed, you can install Docker CE from the openSUSE repositories; the packaged version should be fairly close to the upstream Docker releases.
If you’re running Raspbian/Ubuntu Core, you will have to add a Debian Docker repository or Ubuntu Docker repository and install Docker from there.
If you’re running Windows… Sorry, can’t help with that. :)
After Docker has been installed, start and enable the docker service (systemctl enable --now docker) and add your user to the docker group (usermod -aG docker your_user).
Nomad
Download the Nomad ZIP archive and unpack the binary to /usr/bin or /usr/local/bin:
user@raspberrypi:~> nomad -version
Nomad v0.11.0 (5f8fe0afc894d254e4d3baaeaee1c7a70a44fdc6)
For storing Nomad configuration, create the following directories in the data directory created earlier:
- conf
- data
- jobs
- nomad.d
For example:
user@raspberrypi:/mnt/storage/nomad> tree -L 1
.
├── conf
├── data
├── jobs
└── nomad.d
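The layout above can be created in one go; this sketch assumes /mnt/storage is where you mounted your external storage:

```shell
# Create the Nomad configuration/data directory layout on the storage mount
mkdir -p /mnt/storage/nomad/conf /mnt/storage/nomad/data /mnt/storage/nomad/jobs /mnt/storage/nomad/nomad.d
```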
After the directories have been created, cd into nomad.d and create the following files:
- server.hcl — Nomad server configuration:
server {
  enabled = true
}

data_dir = "/mnt/storage/nomad/conf"

datacenter = "DC0"
bind_addr  = "0.0.0.0"

ports {
  http = 4646
  rpc  = 4647
  serf = 4648
}

consul {
  address = "127.0.0.1:8500"
}

acl {
  enabled    = false
  token_ttl  = "30s"
  policy_ttl = "60s"
}
- client.hcl — Nomad client configuration:
client {
  enabled = true

  network_interface = "eth0"

  server_join {
    retry_join = [
      "127.0.0.1"
    ]
    retry_max      = 3
    retry_interval = "15s"
  }

  host_volume "gitea-data" {
    path      = "/mnt/storage/nomad/data/gitea/data"
    read_only = false
  }

  host_volume "gitea-db" {
    path      = "/mnt/storage/nomad/data/gitea/db"
    read_only = false
  }
}
Create a nomad user and change the Nomad directory ownership:
useradd nomad -s /bin/false -d /mnt/storage/nomad/nomad.d -G docker
chown -R nomad:nomad /mnt/storage/nomad
After Nomad has been configured, add a nomad.service systemd unit file in /etc/systemd/system/:
[Unit]
Description=Nomad
Documentation=https://nomadproject.io/docs/
Wants=network-online.target
After=network-online.target

# When using Nomad with Consul it is not necessary to start Consul first. These
# lines start Consul before Nomad as an optimization to avoid Nomad logging
# that Consul is unavailable at startup.
Wants=consul.service
After=consul.service

[Service]
Type=simple
User=nomad
Group=nomad
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nomad agent -config /mnt/storage/nomad/nomad.d
ExecStop=/bin/kill $MAINPID
KillMode=process
KillSignal=SIGINT
LimitNOFILE=65536
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
StartLimitBurst=3
TasksMax=infinity
OOMScoreAdjust=-1000

[Install]
WantedBy=multi-user.target
Run systemctl enable nomad to enable it to run on boot.
Consul
Download the Consul ZIP archive and unpack the binary to /usr/bin or /usr/local/bin:
user@raspberrypi:~> consul version
Consul v1.7.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
For storing Consul configuration, create the following directories in the data directory created earlier:
- conf
- consul.d
- data
For example:
user@raspberrypi:/mnt/storage/consul> tree -L 1
.
├── conf
├── consul.d
└── data
After the directories have been created, cd into consul.d and create the following file:
server.json — Consul server configuration:
{
  "server": true,
  "datacenter": "DC0",
  "data_dir": "/mnt/storage/consul/data",
  "ui": true,
  "bind_addr": "127.0.0.1",
  "bootstrap_expect": 1,
  "ports": {
    "grpc": 8502
  },
  "connect": {
    "enabled": true
  },
  "encrypt": "generated_encrypt_key"
}
Note: replace generated_encrypt_key with a real gossip encryption key; you can generate one with consul keygen.
Create a consul user and change the Consul directory ownership:
useradd consul -s /bin/false -d /mnt/storage/consul/consul.d
chown -R consul:consul /mnt/storage/consul
After Consul has been configured, add a consul.service systemd unit file in /etc/systemd/system/:
[Unit]
Description="HashiCorp Consul - A service mesh solution"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/mnt/storage/consul/consul.d/server.json
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/bin/consul agent -config-dir=/mnt/storage/consul/consul.d/
ExecReload=/usr/bin/consul reload
KillMode=process
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Run systemctl enable consul to enable it to run on boot.
After Nomad and Consul have been configured, run systemctl start nomad; the Wants=consul.service dependency in the unit file will start Consul as well.
Check that Nomad is running:
user@raspberrypi:/mnt/storage/nomad> systemctl status nomad
● nomad.service - Nomad
Loaded: loaded (/etc/systemd/system/nomad.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-04-13 22:28:16 UTC; 52min ago
Docs: https://nomadproject.io/docs/
Main PID: 6872 (nomad)
Tasks: 159
CGroup: /system.slice/nomad.service
├─ 5774 /usr/bin/nomad logmon
├─ 5775 /usr/bin/nomad logmon
├─ 5777 /usr/bin/nomad logmon
├─ 6188 /usr/bin/nomad docker_logger
├─ 6190 /usr/bin/nomad docker_logger
├─ 6257 /usr/bin/nomad docker_logger
├─ 6872 /usr/bin/nomad agent -config /mnt/storage/nomad/nomad.d
├─16878 /usr/bin/nomad logmon
├─17056 /usr/bin/nomad docker_logger
├─20441 /usr/bin/nomad logmon
├─20442 /usr/bin/nomad logmon
├─20739 /usr/bin/nomad docker_logger
├─20775 /usr/bin/nomad docker_logger
├─28289 /usr/bin/nomad logmon
└─28537 /usr/bin/nomad docker_logger
Open http://raspberrypi_ip_address:4646 in your web browser; you should see the Nomad web UI.
Adding Nomad Jobs
Gitea
Gitea is a very nice and lightweight Git server written in Go.
I’m using Gitea to host and manage my personal projects and anything code-related.
To add a Nomad job for Gitea server, do the following:
- Create a gitea directory in /mnt/storage/nomad/jobs/
- Create a tpl directory in /mnt/storage/nomad/jobs/gitea
- Create a job.nomad file in /mnt/storage/nomad/jobs/gitea
user@raspberrypi:/mnt/storage/nomad> tree -L 1 gitea/
gitea/
├── job.nomad
└── tpl
- Create host volumes to store persistent data for the Gitea application and database:
mkdir -p /mnt/storage/nomad/data/gitea/data
mkdir -p /mnt/storage/nomad/data/gitea/db
- Add the following to gitea/job.nomad:
job "gitea" {
  region = "global"

  datacenters = [
    "DC0",
  ]

  type = "service"

  group "svc" {
    count = 1

    volume "gitea-data" {
      type      = "host"
      source    = "gitea-data"
      read_only = false
    }

    volume "gitea-db" {
      type      = "host"
      source    = "gitea-db"
      read_only = false
    }

    restart {
      attempts = 5
      delay    = "30s"
    }

    task "app" {
      driver = "docker"

      volume_mount {
        volume      = "gitea-data"
        destination = "/data"
        read_only   = false
      }

      config {
        image = "gitea/gitea:linux-arm64"

        port_map {
          http     = 3000
          ssh_pass = 22
        }
      }

      env = {
        "APP_NAME"   = "Gitea: Git with a cup of tea"
        "RUN_MODE"   = "prod"
        "SSH_DOMAIN" = "git.example.com"
        "SSH_PORT"   = "22"
        "ROOT_URL"   = "http://git.example.com/"
        "USER_UID"   = "1002"
        "USER_GID"   = "1002"
        "DB_TYPE"    = "postgres"
        "DB_HOST"    = "${NOMAD_ADDR_db_db}"
        "DB_NAME"    = "gitea"
        "DB_USER"    = "gitea"
        "DB_PASSWD"  = "gitea"
      }

      resources {
        cpu    = 200
        memory = 256

        network {
          port "http" {}

          port "ssh_pass" {
            static = "2222"
          }
        }
      }

      service {
        name = "gitea-gui"
        port = "http"
      }
    }

    task "db" {
      driver = "docker"

      volume_mount {
        volume      = "gitea-db"
        destination = "/var/lib/postgresql/data"
        read_only   = false
      }

      config {
        image = "postgres:10-alpine"

        port_map {
          db = 5432
        }
      }

      env {
        "POSTGRES_USER"     = "gitea"
        "POSTGRES_PASSWORD" = "gitea"
        "POSTGRES_DB"       = "gitea"
      }

      resources {
        cpu    = 200
        memory = 128

        network {
          port "db" {}
        }
      }
    }
  }
}
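One optional tweak, not part of the original job above: Nomad services can also register a Consul health check, so Consul only advertises Gitea while it is actually responding. A minimal sketch of what the gitea-gui service stanza could look like with a check added:

```hcl
service {
  name = "gitea-gui"
  port = "http"

  # Consul marks the service unhealthy if this HTTP probe fails
  check {
    type     = "http"
    path     = "/"
    interval = "30s"
    timeout  = "5s"
  }
}
```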
- Run nomad plan job.nomad
- You should see something like this:
+/- Job: "gitea"
+/- Stop: "true" => "false"
    Task Group: "svc" (1 create)
        Task: "app"
        Task: "db"

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 50291
To submit the job with version verification run:

nomad job run -check-index 50291 job.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.
- Run nomad run job.nomad
- You should see something like this:
==> Monitoring evaluation "d92bfc68"
Evaluation triggered by job "gitea"
Evaluation within deployment: "d3896093"
Allocation "b3bd095a" created: node "e79c2d81", group "svc"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "d92bfc68" finished with status "complete"
- Open the Nomad UI in your web browser; you should see the gitea job allocation running:
- Press the http link; it should redirect you to the Gitea UI:
- Press the Sign In button and follow the installation instructions
- For SSH pass-through configuration, follow the official Gitea documentation
Pihole
Pihole packs a lot of cool tools into a single package.
I’m using Pihole to:
- Filter ads and other unwanted traffic on my network
- Cache DNS queries
To add a Nomad job for the Pihole server, repeat the same steps as for the Gitea server; the only difference is that we will use Docker bind mounts instead of host volumes. There was a reason why I didn’t use host volumes for the Pihole server, but I’ve forgotten it. :)
I assume Nomad host volumes should work fine with Pihole too.
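One caveat worth noting: unlike host volumes, Docker bind mounts do not create missing source directories, so create them up front. The paths here match the mount sources used in the Pihole job file:

```shell
# Bind-mount sources for /etc/pihole and /etc/dnsmasq.d inside the container
mkdir -p /mnt/storage/nomad/data/pihole/pihole
mkdir -p /mnt/storage/nomad/data/pihole/dnsmasq.d
```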
- Add the following to pihole/job.nomad:
job "pihole" {
  region = "global"

  datacenters = [
    "lan0",
  ]

  type = "service"

  group "svc" {
    count = 1

    restart {
      attempts = 5
      delay    = "15s"
    }

    task "app" {
      driver = "docker"

      config {
        image = "pihole/pihole:latest"

        mounts = [
          {
            type     = "bind"
            target   = "/etc/pihole"
            source   = "/mnt/storage/nomad/data/pihole/pihole"
            readonly = false
          },
          {
            type     = "bind"
            target   = "/etc/dnsmasq.d"
            source   = "/mnt/storage/nomad/data/pihole/dnsmasq.d"
            readonly = false
          },
        ]

        port_map {
          dns  = 53
          http = 80
        }

        dns_servers = [
          "127.0.0.1",
          "1.1.1.1",
        ]
      }

      env = {
        "TZ"           = "insert_your_timezone"
        "WEBPASSWORD"  = "insert_your_password"
        "DNS1"         = "insert_your_dns_server_ip"
        "DNS2"         = "no"
        "INTERFACE"    = "eth0"
        "VIRTUAL_HOST" = "insert_your_virtual_host_fqdn"
        "ServerIP"     = "insert_your_raspberry_pi_server_ip"
      }

      resources {
        cpu    = 100
        memory = 128

        network {
          port "dns" {
            static = 53
          }

          port "http" {}
        }
      }

      service {
        name = "pihole-gui"
        port = "http"
      }
    }
  }
}
- Run nomad plan job.nomad
- You should see something like this:
+/- Job: "pihole"
+/- Stop: "true" => "false"
    Task Group: "svc" (1 create)
        Task: "app"

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 50291
To submit the job with version verification run:

nomad job run -check-index 50291 job.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.
- Run nomad run job.nomad
- You should see something like this:
==> Monitoring evaluation "d92bfc68"
Evaluation triggered by job "pihole"
Evaluation within deployment: "d3896093"
Allocation "b3bd095a" created: node "e79c2d81", group "svc"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "d92bfc68" finished with status "complete"
- Open the Nomad UI in your web browser; you should see the pihole job allocation running:
- Press the http link; it should redirect you to the Pihole admin console:
- Log in to the admin console using the password specified for the Pihole task in job.nomad
- Proceed to configure everything to your liking and enjoy using these awesome tools! :)
UPDATE: 2020/05/26
Ingress
So, you’ve configured your Raspberry Pi, deployed Gitea and Pihole applications, your ex-girlfriend somehow found out about this and now wants to get back together :)
But something is still missing:
fabulous DNS names for your services, instead of cold hard IP addresses and randomly assigned ports!
To achieve this, you would need the following:
- Nginx container as a reverse-proxy
- A couple of Nomad/Consul templates with required configuration
- Fabulous DNS names configured on the router pointing to your Raspberry Pi
Adding the Nginx job is the same as adding the Gitea and Pihole jobs, with the exception that we’re going to be using Nomad/Consul templates for the Nginx configuration:
- Create ingress and tpl directories in /mnt/storage/nomad/jobs/
- Create a job.nomad file in /mnt/storage/nomad/jobs/ingress
user@raspberrypi:/mnt/storage/nomad> tree -L 1 ingress/
ingress/
├── job.nomad
└── tpl
- Add the following to ingress/tpl/nginx.conf.tpl:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    gzip on;

    include /etc/nginx/conf.d/*.conf;
}
- Add the following to ingress/tpl/proxy.conf.tpl:
server {
    listen 80;
    server_name git.example.lan;

    {{ range service "gitea-gui" }}
    set $upstream {{ .Address }}:{{ .Port }};
    {{ end }}

    location / {
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_read_timeout 90;
    }
}

server {
    listen 80;
    server_name pihole.example.lan;

    {{ range service "pihole-gui" }}
    set $upstream {{ .Address }}:{{ .Port }};
    {{ end }}

    location / {
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_read_timeout 90;
    }
}
- Add the following to ingress/job.nomad:
job "ingress" {
  region = "global"

  datacenters = [
    "lan0",
  ]

  type = "service"

  group "svc" {
    count = 1

    restart {
      attempts = 5
      delay    = "15s"
    }

    task "app" {
      driver = "docker"

      config {
        image = "nginx:stable-alpine"

        mounts = [
          {
            type   = "bind"
            target = "/etc/nginx/nginx.conf"
            source = "local/ingress/nginx.conf"
          },
          {
            type   = "bind"
            target = "/etc/nginx/conf.d/proxy.conf"
            source = "local/ingress/proxy.conf"
          },
        ]

        port_map {
          http = 80
        }
      }

      template {
        source        = "/mnt/storage/nomad/jobs/ingress/tpl/nginx.conf.tpl"
        destination   = "local/ingress/nginx.conf"
        change_mode   = "signal"
        change_signal = "SIGINT"
      }

      template {
        source        = "/mnt/storage/nomad/jobs/ingress/tpl/proxy.conf.tpl"
        destination   = "local/ingress/proxy.conf"
        change_mode   = "signal"
        change_signal = "SIGINT"
      }

      resources {
        cpu    = 100
        memory = 128

        network {
          port "http" {
            static = 80
          }
        }
      }

      service {
        name = "ingress"
        port = "http"
      }
    }
  }
}
- After you’ve deployed the ingress job to Nomad, you should be able to see the nginx.conf.tpl and proxy.conf.tpl templates rendered to nginx.conf and proxy.conf in the app/files/local tab of the job allocation screen:
- You can also view the rendered files using the Nomad CLI:
user@raspberrypi:/mnt/storage/nomad/jobs/ingress> nomad alloc fs 6d722fd7 app/local/ingress/proxy.conf
server {
    listen 80;
    server_name git.example.lan;

    set $upstream 192.168.88.31:21127;

    location / {
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_read_timeout 90;
    }
}

server {
    listen 80;
    server_name pihole.example.lan;

    set $upstream 192.168.88.31:30963;

    location / {
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_read_timeout 90;
    }
}
- After nginx.conf and proxy.conf have been created, they are mounted inside the Nginx container as Docker bind mounts
- Configure DNS names for your services on your router and add them to proxy.conf.tpl
- If needed, restart the ingress Nomad job
- The ingress service should now be ready to serve the applications
- Have a piece of pie to celebrate :)
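To illustrate the router-side DNS step above: if your router happens to run dnsmasq, the entries could look like this (the .example.lan names are the ones used in proxy.conf.tpl, and 192.168.88.31 stands in for your Raspberry Pi’s address):

```conf
# dnsmasq: point the fabulous names at the Raspberry Pi running the ingress
address=/git.example.lan/192.168.88.31
address=/pihole.example.lan/192.168.88.31
```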
P.S.
If you notice any typos or wrong information, ping me in the comments and I will update the post. Thank you :)