Kubernetes the hard way on bare metal/VMs — Configuring the load balancer

Part of the Kubernetes the hard way on bare metal/VMs series. This is designed for beginners.

Drew Viles
Dec 14, 2021

Introduction

This guide is part of the Kubernetes the hard way on bare metal/VMs series. On its own it may still be useful to you; however, since it’s tailored for the series, it may not be completely suited to your needs.

Load balancing!

Before continuing, let’s take a look at Kelsey Hightower’s setup for this section from kubernetes-the-hard-way and try to break it down. Don’t run these commands!

Get the k8s public IP address.

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')

Then set up an HTTP health check on the request path /healthz for the host “kubernetes.default.svc.cluster.local”:

gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"

Now allow specific GCE source ranges in via firewall rules.

I’ve looked into what this is: when you spin up the kube-controller-manager service, it automatically adds the following default flag (as well as many others):

--cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"

The command below relates to this flag and is required on GCE. See https://cloud.google.com/load-balancing/docs/https/#firewall_rules for more info. You don’t need this.

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp

Add the HTTP health check to a new target pool.

gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes

Add the controller instances to the target pool as the destination.

gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2

Finally, add a forwarding rule from the Kubernetes public IP to the target pool.

gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool

Looking at this, you could use something like the Kemp Free Load Balancer, HAProxy, Nginx or many others to achieve the same result. Just create a pool of ‘real’ servers (the controllers), then create a VIP (Virtual IP) which maps to the controller pool. You can set it up as round robin with L7 mode enabled, and add a health check that validates the “liveness” of each server using the /healthz path, as sketched below.
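As a rough illustration, here is a minimal sketch of how those pieces map onto an HAProxy config. Everything in it is a placeholder (IPs, cert path, names); the full config actually used in this guide comes later in this post.

# A VIP terminating TLS on :443, round-robin balancing across a
# pool of controllers, with an L7 health check against /healthz.
frontend k8s_vip
    bind *:443 ssl crt /path/to/cert.pem
    mode http
    default_backend controller_pool

backend controller_pool
    mode http
    balance roundrobin
    option httpchk GET /healthz
    http-check expect status 200
    server controller-0 10.0.0.10:80 check
    server controller-1 10.0.0.11:80 check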

Choosing a load balancer

In this tutorial you will be using HAProxy for this setup. Other load balancers are available, but this is the one I’ll be setting up.

You should have created an additional VM called k8s-controllers-lb which will be used to install HAProxy.

Install HAProxy

Inside the VM, install HAProxy with the following command:

sudo apt install haproxy -y
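You can confirm the install worked, and see which version you got, with:

haproxy -v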

Prepare HTTPS access

I won’t be going into too much depth on certificates here, but you can either generate one with Let’s Encrypt/Certbot, use a self-signed cert, or of course buy one if you don’t like free things.

A quick self-signed cert can be configured on the lab machine with the following command. It will create an api_key.pem file and an api_cert.pem file in /etc/haproxy:

openssl req -nodes -new -x509 -keyout /etc/haproxy/api_key.pem -out /etc/haproxy/api_cert.pem -days 365

Generating a 4096 bit RSA private key
................................................................................++
.........................................................++
writing new private key to '/etc/haproxy/api_key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:ENTER STATE
Locality Name (eg, city) []:ENTER CITY
Organization Name (eg, company) [Internet Widgits Pty Ltd]:YOUR_ORG
Organizational Unit Name (eg, section) []:Kubernetes
Common Name (e.g. server FQDN or YOUR name) []:k8s-controllers-lb.YOUR_DOMAIN
Email Address []:YOUR EMAIL
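If you’d rather skip the interactive prompts, the same cert can be generated non-interactively with -subj. The subject values below are placeholders to swap for your own:

sudo openssl req -nodes -new -x509 \
  -keyout /etc/haproxy/api_key.pem \
  -out /etc/haproxy/api_cert.pem \
  -days 365 \
  -subj "/C=GB/ST=STATE/L=CITY/O=YOUR_ORG/OU=Kubernetes/CN=k8s-controllers-lb.YOUR_DOMAIN"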

Now place the private key and the certificate into one file:

cat /etc/haproxy/api_key.pem /etc/haproxy/api_cert.pem > /etc/haproxy/k8s_api.pem
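Since this combined file contains the private key, it’s worth restricting its permissions. This is a suggested hardening step, not something the series strictly requires:

sudo chmod 600 /etc/haproxy/k8s_api.pem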

You need to either configure a domain for your public IP, if that is an option for you, or take the easier option: open your hosts file on the lab machine and add:

192.168.0.101 k8s-controllers-lb.YOUR_DOMAIN

Change the domain to whatever you entered into the SSL certificate.
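If you’d rather do that from the shell, appending the entry works too. This assumes the load balancer’s IP is 192.168.0.101, as used throughout this series:

echo "192.168.0.101 k8s-controllers-lb.YOUR_DOMAIN" | sudo tee -a /etc/hosts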

Create the frontend and backend for the API

Once installed, we can create a pool of controllers by setting up the HAProxy config. Edit /etc/haproxy/haproxy.cfg so it looks like the following:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

### Controllers ###
frontend apiservers
    bind *:80
    bind *:443 ssl crt /etc/haproxy/k8s_api.pem
    http-request redirect scheme https unless { ssl_fc }
    mode http
    option forwardfor
    default_backend k8s_apiservers

frontend kube_api
    bind *:6443
    mode tcp
    option tcplog
    default_backend k8s_apiservers_6443

backend k8s_apiservers
    mode http
    balance roundrobin
    option forwardfor
    option httpchk GET / HTTP/1.1\r\nHost:kubernetes.default.svc.cluster.local
    default-server inter 10s fall 2
    server k8s-controller-0 192.168.0.110:80 check
    server k8s-controller-1 192.168.0.111:80 check
    server k8s-controller-2 192.168.0.112:80 check

backend k8s_apiservers_6443
    mode tcp
    option ssl-hello-chk
    option log-health-checks
    default-server inter 10s fall 2
    server k8s-controller-0 192.168.0.110:6443 check
    server k8s-controller-1 192.168.0.111:6443 check
    server k8s-controller-2 192.168.0.112:6443 check
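Before restarting, you can ask HAProxy to validate the file; it will print “Configuration file is valid” or point you at the offending line:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg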

Restart the service and you should see that its status is up. If you check the logs, they will show the backend servers as up.

sudo systemctl restart haproxy
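To confirm, check the unit status and tail the logs. On Ubuntu, HAProxy logs via rsyslog to /var/log/haproxy.log by default, though that path can vary with your setup:

sudo systemctl status haproxy
sudo tail -f /var/log/haproxy.log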

Note that the kube_api frontend and k8s_apiservers_6443 backend in the config above are what provide the TCP pass-through on port 6443 that remote kubectl access relies on.

Testing

You’re done!

Simply browse to https://k8s-controllers-lb.YOUR_DOMAIN/healthz (substituting your own domain) and you should see the health check response.

If you get a 404, it’s likely that the default nginx site config is still in place on the controllers. Remove it, or adjust the headers on your load balancer, to resolve the 404.

The /healthz path should return “ok”

Now that you’ve got your load balancer set up, let’s test it from the lab machine:

curl -k https://k8s-controllers-lb.viles.uk/healthz
ok
curl --cacert pki/ca/ca.pem https://192.168.0.110:6443/version
{
"major": "1",
"minor": "23",
"gitVersion": "v1.23.0",
"gitCommit": "ab69524f795c42094a6630298ff53f3c3ebab7f4",
"gitTreeState": "clean",
"buildDate": "2021-12-07T18:09:57Z",
"goVersion": "go1.17.3",
"compiler": "gc",
"platform": "linux/amd64"
}
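You can also hit the API through the load balancer itself on port 6443. This assumes the API server certificate includes the load balancer’s address (192.168.0.101 here) in its SANs; if it doesn’t, curl will fail the hostname check:

curl --cacert pki/ca/ca.pem https://192.168.0.101:6443/version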

Conclusion

You’ve configured the load balancer for the controllers.

Next: Setting up the workers
