Bare-metal OpenShift with MetalLB LoadBalancer
Jeganathan Swaminathan ( jegan@tektutor.org )
In a bare-metal OpenShift setup, a LoadBalancer Service will not work out of the box, unlike AWS ROSA or Azure Red Hat OpenShift, where the cloud provider supplies the load balancer.
This article assumes you already have a working Red Hat OpenShift v4.x cluster; I used Red Hat OpenShift v4.10.5.
My local Red Hat OpenShift cluster looks as shown below:
dispenser(jegan@tektutor.org)$ oc version
Client Version: 4.10.0-202203141248.p0.g6db43e2.assembly.stream-6db43e2
Server Version: 4.10.5
Kubernetes Version: v1.23.3+e419edf

(jegan@tektutor.org)$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-1.ocp.tektutor.org Ready master,worker 41m v1.23.3+e419edf
master-2.ocp.tektutor.org Ready master,worker 41m v1.23.3+e419edf
master-3.ocp.tektutor.org Ready master,worker 41m v1.23.3+e419edf
worker-1.ocp.tektutor.org Ready worker 24m v1.23.3+e419edf
worker-2.ocp.tektutor.org Ready worker 24m v1.23.3+e419edf
Let’s create a new project in OpenShift
oc new-project tektutor
Let’s create a simple deployment
oc create deploy nginx --image=bitnami/nginx:1.20
The expected output is
(jegan@tektutor.org)$ oc create deploy nginx --image=bitnami/nginx:1.20
deployment.apps/nginx created
Let’s list and check the nginx deployment status
(jegan@tektutor.org)$ oc get deploy,rs,po
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 30s

NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-6845cfdd6 1 1 1 30s

NAME READY STATUS RESTARTS AGE
pod/nginx-6845cfdd6-v8vk6 1/1 Running 0 30s
Let’s scale the nginx deployment
oc scale deploy/nginx --replicas=4
The expected output is
(jegan@tektutor.org)$ oc scale deploy/nginx --replicas=4
deployment.apps/nginx scaled
Let’s check the pods
oc get pods
The expected output is
(jegan@tektutor.org)$ oc get po
NAME READY STATUS RESTARTS AGE
nginx-679c8f9884-2jzsf 1/1 Running 0 28s
nginx-679c8f9884-5m6cn 1/1 Running 0 28s
nginx-679c8f9884-7c9nk 1/1 Running 0 58s
nginx-679c8f9884-lrq8r 1/1 Running 0 28s
Let’s create a LoadBalancer service for the nginx deployment as shown below
(jegan@tektutor.org)$ oc expose deploy/nginx --type=LoadBalancer --port=8080
service/nginx exposed
Let’s check the LoadBalancer service details
oc get svc
oc describe svc/nginx
The expected output is
(jegan@tektutor.org)$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 172.30.251.96 <pending> 8080:31591/TCP 4s
(jegan@tektutor.org)$ oc describe svc/nginx
Name: nginx
Namespace: tektutor
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.30.251.96
IPs: 172.30.251.96
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31591/TCP
Endpoints: 10.128.0.69:8080,10.128.2.8:8080,10.130.0.67:8080 + 1 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
In the output shown above, the External IP of the nginx LoadBalancer service is in the Pending state. It will remain Pending forever, because a bare-metal OpenShift cluster ships with no load-balancer implementation by default. Hence, we need to install one; we will use MetalLB.
We need to create a namespace called metallb-system.
oc create ns metallb-system
Let’s install the MetalLB Operator from OperatorHub, using the Red Hat OpenShift web console as an administrator (kubeadmin).
Once you select the MetalLB Operator, your screen will look similar to the screenshot below.
Make sure the metallb-system namespace is selected as the installation target, then click the Install button.
The MetalLB installation will take a few minutes to complete. Once the installation is complete, you will see a screen similar to the screenshot shown below.
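If you prefer the CLI over the web console, the same Operator can typically be installed with an OperatorGroup and a Subscription. The following is a sketch, not the console's exact equivalent; the channel and catalog source names are assumptions based on common Red Hat operator packaging, so verify them against OperatorHub on your cluster version.

```yaml
# Hypothetical CLI alternative to the web-console install.
# channel/source are assumptions -- verify with:
#   oc get packagemanifest metallb-operator -n openshift-marketplace
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply the manifest with oc apply -f, then wait for the Operator's CSV in the metallb-system namespace to reach the Succeeded phase before continuing.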
We need to create a MetalLB LoadBalancer instance and start it on our cluster.
Create a file named metallb.yml
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
Let’s create the MetalLB instance
oc apply -f metallb.yml
The expected output is
(jegan@tektutor.org)$ oc apply -f metallb.yml
metallb.metallb.io/metallb created
Let’s verify if the controller is running
oc get deployment -n metallb-system controller
The expected output is
(jegan@tektutor.org)$ oc get deployment -n metallb-system controller
NAME READY UP-TO-DATE AVAILABLE AGE
controller 1/1 1 1 27s
Let’s verify if the daemonset speaker pods are running in all nodes
oc get daemonset -n metallb-system speaker
The expected output is
(jegan@tektutor.org)$ oc get daemonset -n metallb-system speaker
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
speaker 5 5 5 5 5 kubernetes.io/os=linux 89s
Now check the IP addresses of your OpenShift cluster nodes
oc get nodes -o wide
The expected output is
(jegan@tektutor.org)$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1.ocp.tektutor.org Ready master,worker 83m v1.23.3+e419edf 192.168.122.76 <none> Red Hat Enterprise Linux CoreOS 410.84.202203141348-0 (Ootpa) 4.18.0-305.40.2.el8_4.x86_64 cri-o://1.23.1-12.rhaos4.10.git1607c6e.el8
master-2.ocp.tektutor.org Ready master,worker 83m v1.23.3+e419edf 192.168.122.164 <none> Red Hat Enterprise Linux CoreOS 410.84.202203141348-0 (Ootpa) 4.18.0-305.40.2.el8_4.x86_64 cri-o://1.23.1-12.rhaos4.10.git1607c6e.el8
master-3.ocp.tektutor.org Ready master,worker 83m v1.23.3+e419edf 192.168.122.16 <none> Red Hat Enterprise Linux CoreOS 410.84.202203141348-0 (Ootpa) 4.18.0-305.40.2.el8_4.x86_64 cri-o://1.23.1-12.rhaos4.10.git1607c6e.el8
worker-1.ocp.tektutor.org Ready worker 65m v1.23.3+e419edf 192.168.122.152 <none> Red Hat Enterprise Linux CoreOS 410.84.202203141348-0 (Ootpa) 4.18.0-305.40.2.el8_4.x86_64 cri-o://1.23.1-12.rhaos4.10.git1607c6e.el8
worker-2.ocp.tektutor.org Ready worker 65m v1.23.3+e419edf 192.168.122.141 <none> Red Hat Enterprise Linux CoreOS 410.84.202203141348-0 (Ootpa) 4.18.0-305.40.2.el8_4.x86_64 cri-o://1.23.1-12.rhaos4.10.git1607c6e.el8
Based on the above output, all the nodes are in the 192.168.122.0/24 subnet. Hence, pick an IP range in the same subnet that isn’t already in use; I chose 192.168.122.90 to 192.168.122.100.
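When you script around this later, it helps to double-check that a candidate address actually falls inside the pool you are about to hand to MetalLB. A minimal sketch in shell; the helper names here are mine, not part of MetalLB or the oc client:

```shell
# Hypothetical helpers to test whether an IPv4 address lies inside
# the planned MetalLB pool 192.168.122.90 - 192.168.122.100.

# Convert a dotted-quad IPv4 address to a single integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Return success (0) if $1 is inside the pool, failure otherwise.
in_metallb_range() {
  local ip lo hi
  ip=$(ip_to_int "$1")
  lo=$(ip_to_int 192.168.122.90)
  hi=$(ip_to_int 192.168.122.100)
  [ "$ip" -ge "$lo" ] && [ "$ip" -le "$hi" ]
}

in_metallb_range 192.168.122.95 && echo "inside pool" || echo "outside pool"
```

Note that this only checks range membership; whether an address is actually free on your network still needs a ping or ARP probe from a host on the same subnet.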
Let’s create an AddressPool that MetalLB can use
metallb-address-pool.yml
apiVersion: metallb.io/v1alpha1
kind: AddressPool
metadata:
  namespace: metallb-system
  name: tektutor-metallb-addresspool
spec:
  protocol: layer2
  addresses:
  - 192.168.122.90-192.168.122.100
Let’s create the AddressPool from the above manifest file.
oc apply -f metallb-address-pool.yml
The expected output is
(jegan@tektutor.org)$ oc apply -f metallb-address-pool.yml
addresspool.metallb.io/tektutor-metallb-addresspool created
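If you later define more than one AddressPool, a Service can request addresses from a specific pool via the metallb.universe.tf/address-pool annotation. Below is a sketch of what our nginx Service would look like with that annotation; the ports and selector are assumed to match the deployment created earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: tektutor
  annotations:
    # Ask MetalLB to allocate from this specific pool
    metallb.universe.tf/address-pool: tektutor-metallb-addresspool
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 8080
```

With a single pool, as in this article, the annotation is unnecessary; MetalLB simply allocates from the only pool available.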
Now let’s check the nginx LoadBalancer service that we created earlier
oc get svc
oc describe svc/nginx
The expected output is shown below. The nginx LoadBalancer service has now been assigned the external IP 192.168.122.90 by MetalLB.
(jegan@tektutor.org)$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metallb-operator-controller-manager-service ClusterIP 172.30.57.161 <none> 443/TCP 29m
nginx LoadBalancer 172.30.251.96 192.168.122.90 8080:31591/TCP 43m
webhook-service ClusterIP 172.30.103.207 <none> 443/TCP 29m
(jegan@tektutor.org)$ oc describe svc/nginx
Name: nginx
Namespace: tektutor
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.30.251.96
IPs: 172.30.251.96
LoadBalancer Ingress: 192.168.122.90
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31591/TCP
Endpoints: 10.128.0.69:8080,10.128.2.8:8080,10.130.0.67:8080 + 1 more...
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 2m16s metallb-controller Assigned IP ["192.168.122.90"]
Normal nodeAssigned 2m16s metallb-speaker announcing from node "master-3.ocp.tektutor.org"
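For scripting, you may want just the assigned external IP. A hedged sketch: get_external_ip is a hypothetical helper of mine that parses `oc get svc` table output; alternatively, `oc get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` returns the same value without any parsing.

```shell
# Hypothetical helper: pull the EXTERNAL-IP column for one service
# out of `oc get svc` output (column 4 in the default table view).
get_external_ip() {
  awk -v svc="$1" '$1 == svc { print $4 }'
}

# Against a live cluster you would pipe the real command:
#   oc get svc | get_external_ip nginx
```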
Now, let’s test whether the LoadBalancer service is accessible at the external IP 192.168.122.90, as shown below
curl http://192.168.122.90:8080
The expected output is
(jegan@tektutor.org)$ curl http://192.168.122.90:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>