Kubernetes the hard way on bare metal/VMs — Configuring the load balancer
Part of the Kubernetes the hard way on bare metal/VMs series
--
Introduction
This guide is part of the Kubernetes the hard way on bare metal/VMs series. On its own it may still be useful to you; however, since it’s tailored for the series, it may not be completely suited to your needs.
Load balancing!
Before continuing, let’s take a look at Kelsey’s setup for this section from kubernetes-the-hard-way and try to break it down. Don’t run these commands!
Get the k8s public IP address.
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
Then set up an HTTP health check against the request path /healthz for the host “kubernetes.default.svc.cluster.local”:
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
Now allow, via firewall rules, traffic from specific GCE source ranges.
I’ve looked into what this is, as I don’t know GCE very well. In short: when you spin up the kube-controller-manager service, it automatically adds the following default flag (along with many others):
--cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
The section below relates to this flag and is required on GCE. See https://cloud.google.com/load-balancing/docs/https/#firewall_rules for more info. You won’t need this for a bare metal/VM setup.
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
Add the http-health-checks to a new target pool
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
Add the controller instances to the target pool as the destination
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
Finally, add a forwarding rule from the K8s public IP to the k8s target pool.
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
So looking at this, you could probably use something like the Kemp Free Load Balancer, HAProxy, Nginx or many others to achieve what is shown here.
Just create a pool of ‘real’ servers (the controllers), and then create a VIP (Virtual IP) which would map to the controller pool.
You can set this up as round robin and ensure L7 is enabled.
You can also set up the health check which will validate the “liveness” of the server using the /healthz path.
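If you went the HAProxy route instead of Kemp, a minimal sketch of the same idea might look like the following. This is an untested illustration, not part of this tutorial’s steps; the VIP (192.168.0.210) and controller IPs simply match the lab addresses used later in this guide.
# Sketch: an HAProxy equivalent of the pool/VIP/health-check setup described above
cat <<'EOF' | sudo tee /etc/haproxy/haproxy.cfg
frontend k8s-api
    bind 192.168.0.210:6443
    mode tcp
    default_backend k8s-controllers

backend k8s-controllers
    mode tcp
    balance roundrobin
    # The API server speaks HTTPS on 6443, so the /healthz check must use SSL
    option httpchk GET /healthz
    server controller-0 192.168.0.110:6443 check check-ssl verify none
    server controller-1 192.168.0.111:6443 check check-ssl verify none
    server controller-2 192.168.0.112:6443 check check-ssl verify none
EOF
sudo systemctl restart haproxy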
Choosing a load balancer
In this tutorial you will be using the Free Load Balancer supplied by Kemp (if you want to follow along, that is).
I’m not sponsored or paid by them, and other options can be used, but this one is quick and easy to set up. Yes, it requires signup, but it is 100% free; you’ll need the account credentials later for the free licensing.
Once you have the KVM disk downloaded (or whatever version you need if using something else, for example the OVF for VirtualBox), spin up a new VM with the disk attached and follow through the setup.
Boot the VM
Once booted, you’ll be greeted with the screen shown below, which advises you to go to the IP address configured for the load balancer.
Go to that address, skip past the SSL warning if you get one, and accept the license agreement. Now you’ll need the credentials you created earlier when you downloaded the disk file.
Once you’ve signed in, you’ll be presented with a license screen. Click on the “Free LoadMaster” option.
You’ll now be prompted to set a password and then log in… maybe two or three times! The username is “bal”.
You’re in!
See, easy. :-/
Create a pool
Go to “Virtual Services” and click on “Add New”
Here you can add a new server ‘pool’. Enter a VIP (Virtual IP) address for the virtual server pool; I’ll be using 192.168.0.210. This will be the IP used to access the controllers. Set the port to 80 and name it k8s-http.
On the next screen you’ll configure the pool.
Configure the pool
Go to the “Real Servers” section, click the “Add New…” button and fill out the details. Once you’ve added one, it’ll refresh the screen so you can add the other controllers.
Set “Real Server Address” to the IP of each controller.
In this example you’ll create three: 192.168.0.110, 192.168.0.111 and 192.168.0.112.
When you’re done, click back and you’ll see them listed under the “Real Servers” section.
Change “Real Server Check Method” under the “Real Servers” section to “HTTP Protocol” if it’s not already set.
Set “URL” to “/healthz” and click “Set URL”.
Change “HTTP Method” to “GET”.
Click “Show headers” and enter “Host : kubernetes.default.svc.cluster.local” then click “Set Header”.
Now spin up your controllers; you should see the status of the service change to up.
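You can also simulate the load balancer’s health check by hand from the lab machine. This assumes, as the health check configured above implies, that each controller answers /healthz on port 80 for the host kubernetes.default.svc.cluster.local:
# Simulate the health check against a single controller
curl -i -H "Host: kubernetes.default.svc.cluster.local" http://192.168.0.110/healthz
# Expect an HTTP 200 response once the controller is up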
Create HTTPS access
You now have a (basic) load balancer that can be used for the Kubernetes controllers later on. Note, however, that you’ve only set it up for port 80 so far; you’ll need to repeat the process above for other ports such as HTTPS (443) and 6443 (the K8s API). Don’t worry about setting up any certificates just yet, you’ll do that in a moment.
Technically, what you’ve set up for HTTPS above isn’t going to work properly because you haven’t configured any certs. I won’t go into too much depth here, but you can either generate one with Let’s Encrypt or use a self-signed cert (or, of course, buy one if you don’t like free things).
A quick self-signed cert can be generated on the lab machine with the following command. It will create a key.pem file and a cert.pem file:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
Generating a 4096 bit RSA private key
................................................................................++
.........................................................++
writing new private key to 'key.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:ENTER STATE
Locality Name (eg, city) []:ENTER CITY
Organization Name (eg, company) [Internet Widgits Pty Ltd]:DeeToTheVee
Organizational Unit Name (eg, section) []:Kubernetes
Common Name (e.g. server FQDN or YOUR name) []:k8s-controllers-lb.deetothevee.co.uk
Email Address []:YOUR EMAIL
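As an aside: if you’d rather skip the interactive prompts, OpenSSL 1.1.1 and newer can do the same thing in one shot and add a subjectAltName, which modern clients check instead of the CN. Swap in your own domain:
# Non-interactive self-signed cert with a SAN (requires OpenSSL 1.1.1+).
# -nodes leaves the key unencrypted so no passphrase is needed on import.
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 \
  -nodes \
  -subj "/CN=k8s-controllers-lb.deetothevee.co.uk" \
  -addext "subjectAltName=DNS:k8s-controllers-lb.deetothevee.co.uk"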
Add the cert to the load balancer
Click on “Certificates and Security”, click on “SSL Certificates” and then click on “Import Certificate”. Select the cert and key you created (or already have), enter the passphrase you set for the private key (if applicable) and a name for the cert.
You’re not going to use this as the administrative cert (if you want one for that, generate a new one with the correct FQDN). Instead, go back to the k8s-https service you created earlier and click “Modify”.
Here you’ll assign the cert. Expand “SSL Properties” and check the “SSL Acceleration” box. If a warning of any kind appears, just acknowledge it and more options will be shown.
Uncheck all “Supported Protocols” except “TLS 1.2”. Next, move the cert you added from “Available certificates” to “Assigned certificates” and click “Set Certificates”.
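If you want to confirm the VIP is now presenting your cert, a quick check from the lab machine might look like this:
# Print the subject and validity dates of the cert presented by the VIP
openssl s_client -connect 192.168.0.210:443 \
  -servername k8s-controllers-lb.deetothevee.co.uk </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates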
Now you need to either configure a domain for your public IP, if that’s an option for you, OR (the easier option) open your hosts file and add:
192.168.0.210 k8s-controllers-lb.deetothevee.co.uk
Change the domain to whatever you entered into the SSL certificate.
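On the lab machine (Linux/macOS) that’s a one-liner:
# Append the VIP-to-hostname mapping to /etc/hosts (use your own domain)
echo "192.168.0.210 k8s-controllers-lb.deetothevee.co.uk" | sudo tee -a /etc/hosts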
The final thing you need to do is set up one more pool, for port 6443, so remote kubectl will work; you could proxy this too if you wanted.
Configure it in exactly the same way as the HTTP setup, but listening on port 6443. You can do this quickly by modifying k8s-http and clicking “Duplicate VIP”, then editing the “New Port” to be “6443”. Make sure you change the real servers’ ports too.
Testing
You’re done!
Simply browse to your domain, in my case https://k8s-controllers-lb.deetothevee.co.uk/healthz, and you’ll see the following page.
If you get a 404, it’s likely you still have the default nginx site config in place on the controllers’ nginx health-check proxy. Remove it or adjust the headers on your load balancer to resolve the 404.
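The same check works from a terminal; -k skips verification since the cert is self-signed. The API server’s /healthz endpoint simply returns “ok”:
curl -k https://k8s-controllers-lb.deetothevee.co.uk/healthz
# ok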
Now that you’ve got your load balancer set up, let’s test from the lab machine:
curl -k https://k8s-controllers-lb.deetothevee.co.uk:6443/version

# Or per node
curl --cacert pki/ca/ca.pem https://192.168.0.110:6443/version
curl --cacert pki/ca/ca.pem https://192.168.0.111:6443/version
curl --cacert pki/ca/ca.pem https://192.168.0.112:6443/version

# Results
{
"major": "1",
"minor": "13",
"gitVersion": "v1.13.0",
"gitCommit": "ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
"gitTreeState": "clean",
"buildDate": "2018-12-03T20:56:12Z",
"goVersion": "go1.11.2",
"compiler": "gc",
"platform": "linux/amd64"
}
Conclusion
You’ve configured the load balancer for the controllers.
Next: Setting up the workers