Reliable apps with (HA)Proxy — Part 4.

Service Discovery Ice cream

We want to show you how to use HAProxy and HashiCorp Consul for fun and profit. You can even sprinkle a lightweight orchestrator on top of it, for better taste (for this we will use Nomad, a lightweight alternative to Kubernetes).

Our starting points are two blog posts from the HAProxy and HashiCorp websites.

Our scenario covers running machine(s) with services such as httpd (Apache, nginx), database services, or anything else that can be reached by HAProxy. You can run your machines on bare metal, on AWS EC2, Google Cloud, Hetzner, or in containers. It is very flexible and easy to understand, and the whole configuration will be greater than the sum of its parts!

To simplify things for this blog post, we’ll run everything locally in Docker containers, with the added twist of using Podman, an alternative Docker implementation that is perfectly capable of running systemd inside a container, so we can easily simulate “real” machines running multiple services. In our case, we want to run a Consul agent and an Apache web server on every machine.

Node setup (Consul & Apache)

Let’s start with the Consul server setup. We will make it available on the default Podman container network (10.88.0.0/16). This is a single-node setup, nothing too fancy.

consul.hcl:

datacenter = "haproxy"
server = true
data_dir = "/var/lib/consul/"
bind_addr = "10.88.0.1"
client_addr = "127.0.0.1"
bootstrap = true
bootstrap_expect = 1
enable_syslog = true
log_level = "INFO"

Please note that this expects the default Podman bridge network to be up, so Consul can bind to 10.88.0.1 (i.e. you should have at least one container already running).
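With that caveat out of the way, starting the server agent is just a matter of pointing it at the configuration directory (a minimal sketch, assuming consul.hcl is placed under /etc/consul.d/ and Consul runs directly on the host):

$ consul agent -config-dir=/etc/consul.d/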

The node setup is rather simple. We’ve used the Fedora base image, since it has systemd support out of the box:

FROM fedora
# add the HashiCorp repository, install Apache and Consul, then enable both services
RUN dnf -y install yum-utils; \
    dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo; \
    dnf -y install httpd consul; \
    dnf clean all; \
    systemctl enable httpd; \
    systemctl enable consul
# client-side Consul configuration, shown below
COPY consul.json http_service.json /etc/consul.d/
EXPOSE 80
# boot systemd so both services come up as on a "real" machine
CMD [ "/sbin/init" ]

The client Consul configuration consists of two files, one for Consul itself and one for the exposed service, both in JSON format so we don’t mix them up with the server setup:

consul.json

{
  "server": false,
  "datacenter": "haproxy",
  "log_level": "INFO",
  "enable_syslog": true,
  "leave_on_terminate": true,
  "start_join": ["10.88.0.1"]
}

http_service.json

{
  "service": {
    "name": "http",
    "port": 80
  }
}
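With the Dockerfile and the two JSON files sitting in the same directory, building the node image is a one-liner (a small sketch; we assume the image is tagged httpd, matching the run command below):

$ podman build -t httpd .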

We can now start two machines with podman run -ti httpd and check the situation with consul:

$ consul members
Node          Address          Status  Type    Build   Protocol  DC       Partition  Segment
xxxxxxx       10.88.0.1:8301   alive   server  1.12.2  2         haproxy  default    <all>
193a0e634f9a  10.88.0.20:8301  alive   client  1.12.2  2         haproxy  default    <default>
d1da0f13c496  10.88.0.19:8301  alive   client  1.12.2  2         haproxy  default    <default>
$ dig @127.0.0.1 -p 8600 http.service.haproxy.consul. SRV
...
;; QUESTION SECTION:
;http.service.haproxy.consul. IN SRV
;; ANSWER SECTION:
http.service.haproxy.consul. 0 IN SRV 1 1 80 d1da0f13c496.node.haproxy.consul.
http.service.haproxy.consul. 0 IN SRV 1 1 80 193a0e634f9a.node.haproxy.consul.
;; ADDITIONAL SECTION:
d1da0f13c496.node.haproxy.consul. 0 IN A 10.88.0.19
d1da0f13c496.node.haproxy.consul. 0 IN TXT "consul-network-segment="
193a0e634f9a.node.haproxy.consul. 0 IN A 10.88.0.20
193a0e634f9a.node.haproxy.consul. 0 IN TXT "consul-network-segment="

Nice, Consul runs as a DNS resolver, and it exposes all machines with registered http services. It even exports SRV records, so our service doesn’t actually need to live on predefined/privileged ports (80/443). For this test, though, we’ve used the default port 80 from the Fedora Apache package.
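The same service can also be resolved RFC 2782 style, without spelling out the datacenter, which is exactly the form the HAProxy configuration below relies on (plain dig against Consul’s DNS port, nothing specific to this setup):

$ dig @127.0.0.1 -p 8600 _http._tcp.service.consul. SRV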

HAProxy

The HAProxy configuration is trivial, especially if you know about resolvers:

resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192
    hold valid 5s

listen apache_ice_cream
    bind :8080
    balance roundrobin
    server-template apache 1-10 _http._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check

This provides slots for up to 10 deployed servers, all registered through Consul. We use _SERVICENAME._tcp.service.consul as a generic way to reach the exposed service, independent of the Consul cluster (datacenter) name.
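Once HAProxy picks up this configuration, a quick sanity check from the host shows requests being spread across whatever nodes Consul currently advertises (a small sketch, assuming HAProxy listens locally on the bind address above):

$ for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/; done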

Nomad

Nomad service configuration, config.hcl:

client {
  enabled = true
}

server {
  enabled = true
  bootstrap_expect = 1
}

datacenter = "haproxy"
name = "nomad01"

plugin "nomad-driver-podman" {
  config {
    socket_path = "unix://run/podman/podman.sock"
  }
}
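The agent itself is started against this file (a minimal sketch, assuming the nomad-driver-podman plugin binary is already in Nomad’s plugin directory and the host ships the podman.socket systemd unit providing the API socket referenced above):

$ sudo systemctl enable --now podman.socket
$ sudo nomad agent -config=config.hcl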

Apache job configuration, apache.nomad:

job "apache" {
datacenters = ["haproxy"]
type = "service"
group "web" {
task "httpd" {
driver = "podman"
config {
image = "docker://docker.io/library/httpd:latest"
init = false
}
resources {
cpu = 2000
memory = 1024
}
}
}
}
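Before submitting the job, it doesn’t hurt to let Nomad check and dry-run it first (both are standard Nomad CLI commands):

$ nomad job validate apache.nomad
$ nomad job plan apache.nomad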

Wrap-up

Let’s run some Nomad commands to start the first container and then scale up with an additional one:

nomad job run apache.nomad   ## this starts our first container
nomad job scale apache web 2 ## scale the group with another container
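To confirm what Nomad actually scheduled, the usual status command does the job (standard CLI, nothing specific to this setup):

nomad job status apache      ## allocations, their nodes and current health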

And we can see the debug output in HAProxy:

apache_ice_cream/apache1 changed its IP from (none) to 10.88.0.40 by DNS additional record.
apache_ice_cream/apache1 changed its FQDN from (null) to f45addea6709.node.haproxy.consul by 'SRV record'
[WARNING] (30077) : Server apache_ice_cream/apache1 ('f45addea6709.node.haproxy.consul') is UP/READY (resolves again).
Server apache_ice_cream/apache1 ('f45addea6709.node.haproxy.consul') is UP/READY (resolves again).
[WARNING] (30077) : apache_ice_cream/apache2 changed its IP from (none) to 10.88.0.41 by DNS additional record.
apache_ice_cream/apache2 changed its IP from (none) to 10.88.0.41 by DNS additional record.
apache_ice_cream/apache2 changed its FQDN from (null) to acea6eeb1aa9.node.haproxy.consul by 'SRV record'
[WARNING] (30077) : Server apache_ice_cream/apache2 ('acea6eeb1aa9.node.haproxy.consul') is UP/READY (resolves again).
Server apache_ice_cream/apache2 ('acea6eeb1aa9.node.haproxy.consul') is UP/READY (resolves again).

This Nomad example is a bit longer than the others, per aspera ad astra. It is a small demonstration that you don’t need Kubernetes-scale complications (pun intended) to create something relatively simple, flexible, scalable and manageable:

  • Consul, with a little help from its (agent) friends, keeps track of nodes and exposed services
  • HAProxy routes traffic to the registered services, using SRV records supplied by Consul. It doesn’t blindly trust Consul, but runs its own health checks against the services.
  • Nomad orchestrates the infra and can track resource usage and apply scaling actions as necessary.

The Service Discovery Ice cream essay is part of a series of "recipes" that explore ways of building reliable applications with (HA)Proxy.

PS. If you have any questions on any of the above outlined thoughts, feel free to share them in the comment section.

Click here to read Part 5: Belgian OLAP Cube — Turn the tables on failures with shards

Ministry of Programming is a supercharged startup studio specialized in building startups and new products 💡 We were voted into the top 1000 fastest-growing companies in Europe by the Financial Times. Twice.

We offer product management, design, development, and investment services to support entrepreneurs and startups towards product success.

Building your next startup? We would love to hear more. If you want to work with us on your startup feel free to reach out at — https://ministryofprogramming.com/contact/
