Tutorial on running Nomad and Consul as systemd daemon services, part 1
This tutorial consists of three articles that illustrate how we run and configure Nomad, Consul, and user-customised programs.
In a distributed system, Nomad automates the deployment of software applications to each individual node. As well as an application scheduler, Nomad serves as a resource manager: it collects information on the available resources and capabilities of each host, and allocates applications depending on the resources available on each node.
To deploy the Nomad service, we need a Nomad agent running on each machine of the cluster illustrated in the above figure. A Nomad agent can run in server mode or client mode. A server maintains the state of the cluster and allocates applications to clients according to the task definition and each client’s available resources. It is highly recommended to run multiple servers within one cluster, so that if one server fails we can still retrieve the state of the cluster. A client is a very lightweight process that registers the host machine, performs heartbeating, and runs any tasks assigned to it by the servers.
In the following example code, we will have two chips (any other platform that runs a Linux OS, such as a Raspberry Pi, would also work) acting as clients and one chip acting as the server.
All of the chips run their Nomad agents as systemd daemon processes, which guarantees that the Nomad process is relaunched after it fails or after the chip reboots.
In client1 and client2 (CHIP1 and CHIP2):
Put the Nomad binary at this path:
/usr/bin/nomad
Add the following file:
sudo mkdir /etc/nomad
sudo vim /etc/nomad/nomad-client.hcl
In nomad-client.hcl:
# Increase log verbosity
log_level = "DEBUG"

data_dir = "/var/lib/nomad"

client {
  enabled = true
  node_class = "node"
  servers = ["192.168.1.3:4647"] # HERE we assume the internal ip address of the server (CHIP3).
  options = {
    "docker.privileged.enabled" = "true"
    "docker.volumes.enabled" = "true"
  }
}

# Modify our port to avoid a collision with server1
ports {
  http = 5656
}
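One pitfall when copying the config above from a web page is that curly ("smart") quotes sneak in, and Nomad's HCL parser only accepts straight ASCII quotes. A quick sketch of how to detect them, shown here against a sample string rather than the real file:

```shell
# Curly quotes from copy-pasted web text break HCL parsing.
# Against the real file you would grep the file itself, e.g. /etc/nomad/nomad-client.hcl.
sample='log_level = “DEBUG”'
if printf '%s' "$sample" | grep -q '[“”]'; then
  echo "smart quotes found, fix them before starting the agent"
fi
```

If this prints anything, re-type the offending quotes by hand before launching the agent.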
Set up the Nomad service as a systemd daemon:
sudo vim /etc/systemd/system/nomad-client.service
In nomad-client.service:
[Unit]
Description=Nomad client
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/bin/sh -c "/usr/bin/nomad agent -config /etc/nomad/nomad-client.hcl -bind=$(/sbin/ifconfig wlan0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')"
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
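The $( ... ) substitution in ExecStart extracts the chip's IPv4 address from old-style ifconfig output (the "inet addr:" format) so that Nomad binds to the wlan0 address. Here is the same pipeline run standalone against a sample line instead of live ifconfig output, so you can verify what -bind receives:

```shell
# A sample wlan0 line as printed by old-style ifconfig.
sample="          inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0"
# Same extraction as in the ExecStart line above.
ip=$(printf '%s\n' "$sample" | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo "$ip"   # prints 192.168.1.11
```

Note that newer ifconfig releases print "inet 192.168.1.11" without the "addr:" label, in which case this pipeline yields nothing; check your ifconfig output format first.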
To enable this daemon process on both chips:
sudo systemctl enable nomad-client.service
sudo systemctl start nomad-client.service
In server (CHIP3):
Put the Nomad binary at this path:
/usr/bin/nomad
Add the following file:
sudo mkdir /etc/nomad
sudo vim /etc/nomad/nomad-server.hcl
In nomad-server.hcl:
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/var/lib/nomad"

server {
  enabled = true
  bootstrap_expect = 1
}
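Earlier we recommended running multiple servers per cluster; with a single chip as server we use bootstrap_expect = 1 here. For a three-server cluster, the stanza would look roughly like the sketch below: bootstrap_expect tells each server to wait for three peers before electing a leader. The retry_join addresses are hypothetical, and the exact join mechanism varies across Nomad versions, so check the documentation for yours:

```hcl
server {
  enabled          = true
  bootstrap_expect = 3
  # Hypothetical addresses of the other two servers; adjust to your network.
  retry_join = ["192.168.1.4", "192.168.1.5"]
}
```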
Set up the Nomad service as a systemd daemon:
sudo vim /etc/systemd/system/nomad-server.service
In nomad-server.service:
[Unit]
Description=Nomad server
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/bin/sh -c "/usr/bin/nomad agent -config=/etc/nomad/nomad-server.hcl -bind=$(/sbin/ifconfig enp3s0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')"
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
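The server unit uses the same ifconfig trick, this time binding to the address of the wired interface enp3s0. On newer distributions ifconfig is often absent, replaced by the ip command, whose output has a different shape. A sketch of the equivalent extraction, run against a sample line rather than a live interface:

```shell
# A sample line as printed by 'ip -4 addr show enp3s0' on a modern distro.
sample="    inet 192.168.1.3/24 brd 192.168.1.255 scope global enp3s0"
# Take the second field and strip the /24 prefix length.
ip=$(printf '%s\n' "$sample" | awk '/inet / { sub(/\/.*/, "", $2); print $2 }')
echo "$ip"   # prints 192.168.1.3
```

To use it in the unit file, replace the ifconfig pipeline inside $( ... ) with: /sbin/ip -4 addr show enp3s0 piped into the same awk program.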
To enable this daemon process:
sudo systemctl enable nomad-server.service
sudo systemctl start nomad-server.service
Now, let’s see the magic happen!
We can check the chips’ status with the following commands:
ssh chip@192.168.1.3  ## we ssh into server
export NOMAD_ADDR=http://192.168.1.11:4646
nomad node-status
ID        DC   Name  Class  Drain  Status
51836cea  dc1  chip  node   false  ready
9b1e2f1b  dc1  chip  node   false  ready
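With both clients registered and ready, a job description is what we would hand to the server next. It is just another HCL file; here is a minimal hypothetical sketch (the job name, Docker image, and resource figures are illustrative, not part of this tutorial's setup):

```hcl
job "hello-world" {
  datacenters = ["dc1"]

  group "web" {
    task "server" {
      # The Docker driver matches the docker.* options enabled on the clients.
      driver = "docker"

      config {
        image = "nginx:alpine"
      }

      resources {
        cpu    = 100  # MHz
        memory = 64   # MB
      }
    }
  }
}
```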
At this point, we could allocate applications to our clients by writing a job description and launching it. But before doing that, I would like to integrate another HashiCorp module, Consul, to make our system more scalable. Please read the second article.