Load Balancer Service type for Kubernetes

Kubernetes Advocate
AVM Consulting Blog
9 min read · Aug 8, 2020

Load Balancing in K8s

Load balancing means distributing a set of tasks over a set of resources: spreading incoming traffic across them so that the overall workload is handled efficiently.
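As a toy sketch of that idea (plain shell with made-up backend names, nothing Kubernetes-specific), round-robin distribution simply hands each incoming request to the next resource in turn:

```shell
# Round-robin in miniature: spread six requests across three hypothetical backends
backends=(pod-a pod-b pod-c)
n=${#backends[@]}
for i in 0 1 2 3 4 5; do
  target=${backends[$((i % n))]}   # pick the next backend in rotation
  echo "request $i -> $target"
done
```

Real load balancers add health checks and weighting on top of this, but the core is the same: no single resource sees all the traffic.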

Thanks to Ahmet Alp Balkan for the diagrams

Load Balancing is often perceived as a complex technology, yet changing application architectures and the growth of virtualization and cloud are driving requirements for power and flexibility without sacrificing ease-of-use.

LB in K8s

Once you have your application running in Kubernetes, its scheduler makes sure that the desired number of pods is always running. This means application pods can be created and deleted unexpectedly, so you should not depend on any particular pod. However, you should still be able to access your application in a predictable manner. For that, Kubernetes provides its simplest form of traffic load balancing: a Service.

A Service in Kubernetes is an abstraction that defines a logical set of pods and a policy for accessing them; this abstraction is what makes a microservices-style architecture practical.

External IPs

Let us take an example: you are running a backend application with 4 pods. Those 4 pods are interchangeable; the frontend does not care which backend pod serves it. So whenever the pods at the backend change, the frontend clients should neither notice nor have to keep track of it.
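That decoupling is exactly what a Service's label selector provides. As a sketch (the `app: backend` label, service name, and ports below are placeholders, not from this article), a Service that selects the label shared by all 4 interchangeable pods gives the frontend one stable address, no matter which pods currently exist:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service   # placeholder name
spec:
  selector:
    app: backend          # matches the label carried by all 4 interchangeable pods
  ports:
  - port: 80              # the port the frontend connects to
    targetPort: 8080      # the port the backend pods listen on
```

Pods come and go, but the Service endpoint list is updated automatically to track whichever pods currently match the selector.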


What is External IP Service?

The idea is straightforward. What matters most is being sure which IP is used to reach the Kubernetes cluster. To make a service reachable on a specific address, we can bind the service to that IP using the external IP service mechanism.

Architecture

You can see that in the above architecture both cluster nodes have their own IP. The address 10.240.0.2 on Node 1 is bound to the httpd service, whose actual pod resides on Node 2, and the address 10.240.0.3 is bound to the nginx service, whose actual pod resides on Node 1. The underlying overlay network makes this possible: when we curl 10.240.0.2 we should see a response from the httpd service, and when we curl 10.240.0.3 we should see a response from the nginx service.

Advantages & Disadvantages of External IP

The advantage of using External IP is:

  • You have full control of the IP that you use. You can use an IP that belongs to your ASN instead of the cloud provider’s ASN.

The disadvantage of External IP is:

  • The simple setup that we will go through right now is NOT highly available. That means if the node dies, the service is no longer reachable and you will need to remediate the issue manually.
  • There is manual work involved in managing the IPs: they are not dynamically provisioned for you, so human intervention is required.

How to use External IP service?

Setup

Again, we will use the same diagram as a reference for our cluster setup, except with different IPs and different hostnames. This is not a realistic example, but it makes it easy to distinguish which service is which when we verify the setup. In real use cases you might expose a database on one external IP and another application on a second external IP.

I have provisioned 2 VMs for this scenario. k3s-external-ip-master will be our Kubernetes master node and has the IP 10.240.0.2; k3s-external-ip-worker will be the Kubernetes worker and has the IP 10.240.0.3.

Exposing an External IP Address to Access an Application in a Cluster


Step 1: Setup Kubernetes cluster

Here you are going to install the Kubernetes cluster on the master node, and the worker node will join the cluster:

$ k3sup install --ip <master node ip> --user <username>
$ k3sup join --server-ip <master node ip> --ip <worker node ip> --user <username>

You should be seeing something like this now

$ kubectl get nodes
NAME                     STATUS   ROLES    AGE      VERSION
k3s-external-ip-master   Ready    master   18m24s   v1.17.2-k3s.2
k3s-external-ip-worker   Ready    <none>   12m21s   v1.17.2-k3s.2

Step 2: Create Kubernetes deployments

We will create Nginx deployment and httpd deployment.

$ kubectl create deployment nginx --image=nginx
$ kubectl create deployment httpd --image=httpd

You should be seeing this now

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx-86c57db5678-ksjha   1/1     Running   0          22s
httpd-7bddd41278-hdjka    1/1     Running   0          16s

Step 3: Expose the deployments as External IP type

Let’s expose the Nginx deployment

$ cat << EOF > nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 2.2.2.2   # any IP you choose
EOF

And expose httpd deployment

$ cat << EOF > httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 1.1.1.1   # any IP you choose
EOF

Now create both services with kubectl:

$ kubectl create -f nginx-service.yaml
$ kubectl create -f httpd-service.yaml

Now your Kubernetes services should look like this

$ kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP    PORT(S)   AGE
kubernetes      ClusterIP   10.0.1.5     <none>         443/TCP   18m
httpd-service   ClusterIP   10.240.0.2   10.240.1.123   80/TCP    32s
nginx-service   ClusterIP   10.240.0.3   10.240.1.124   80/TCP    26s

You might notice that the service type still says ClusterIP. That is expected: externalIPs is a field on the Service spec, not a service type, so the TYPE column continues to show the default, ClusterIP.
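To make that concrete, here is the nginx service from above with the default written out explicitly (the `type` line is the only addition; everything else matches the manifest used earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP        # the implicit default; setting externalIPs does not change it
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 2.2.2.2
```

Only setting `type: NodePort` or `type: LoadBalancer` would change what the TYPE column reports.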

Step 4

Now we can check the result with curl; we should get the Apache default page:

$ curl -i 1.2.4.120
HTTP/1.1 200 OK
Date: Fri, 11 Jun 2020 03:36:23 GMT
Server: Apache/2.4.41 (Unix) <------
Last-Modified: 11 Jun 2020 18:53:14 GMT
ETag: "2d-432a5e4fhjfhd3a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html

<html><body><h1>It works!</h1></body></html>

Next, let us curl Nginx service and you should see Nginx default page response.

$ curl -i 1.2.4.114
HTTP/1.1 200 OK
Server: nginx/1.17.6 <------
Date: 11 Jun 2020 03:36:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Jun 2020 12:50:08 GMT
Connection: keep-alive
ETag: "5ddbdbsk00-467"
Accept-Ranges: bytes

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
....

Creating a service for an application running in five pods

Before you begin

  • Install kubectl.
  • Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to create a Kubernetes cluster. This tutorial creates an external load balancer, which requires a cloud provider.
  • Configure kubectl to communicate with your Kubernetes API server. For instructions, see the documentation for your cloud provider.

Objectives

  • Run five instances of a Hello World application.
  • Create a Service object that exposes an external IP address.
  • Use the Service object to access the running application.


  1. Run a Hello World application in your cluster:

service/load-balancer-example.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: hello-world
        ports:
        - containerPort: 8080

Now run the command

kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml

The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods, each of which runs the Hello World application.

  2. Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world

3. Display information about your ReplicaSet objects:

kubectl get replicasets
kubectl describe replicasets

4. Create a Service object that exposes the deployment:

kubectl expose deployment hello-world --type=LoadBalancer --name=ex-service

5. Display information about the Service:

kubectl get services ex-service

The output is similar to this:

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
ex-service   LoadBalancer   10.192.12.1   10.88.55.7    8080/TCP

6. Display detailed information about the Service:

kubectl describe services ex-service

The output is similar to this:

Name:                   ex-service
Namespace:              default
Labels:                 app.kubernetes.io/name=load-balancer-example
Annotations:            <none>
Selector:               app.kubernetes.io/name=load-balancer-example
Type:                   LoadBalancer
IP:                     10.0.12.1
LoadBalancer Ingress:   10.0.55.7
Port:                   <unset>  8080/TCP
NodePort:               <unset>  32377/TCP
Endpoints:              10.0.0.4:8080,10.0.0.5:8080,10.0.0.6:8080 + 2 more...
Session Affinity:       None
Events:                 <none>

Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 10.88.55.7. Also note the values of Port and NodePort: here the port is 8080 and the NodePort is 32377.
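If you want to pull those values out in a script rather than by eye, a small text-processing sketch works on the describe output. The sample lines below are hard-coded from the output above so the snippet is self-contained:

```shell
# Sample lines from the `kubectl describe service` output, hard-coded for illustration
describe='Port:      <unset>  8080/TCP
NodePort:  <unset>  32377/TCP'

# The third field of the NodePort line is "32377/TCP"; strip the protocol suffix
node_port=$(echo "$describe" | awk '/NodePort/ {print $3}' | cut -d/ -f1)
echo "$node_port"
```

Against a live cluster you would pipe `kubectl describe services ex-service` into the same awk filter instead of the hard-coded string.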

In the preceding output, you can see that the service has several endpoints: 10.0.0.4:8080, 10.0.0.5:8080, 10.0.0.6:8080 + 2 more. These are the internal IP addresses of the pods running the Hello World application. To verify that these are pod addresses, enter this command:

kubectl get pods --output=wide

The output is similar to this:

NAME                           ...   IP         NODE
hello-world-2815656413-1jsh9   ...   10.0.0.4   gke-cluster-1-default-pool-e0b8f239-9ihs
hello-world-2815656413-2e6gh   ...   10.0.0.5   gke-cluster-1-default-pool-e0b8f239-ssjs
hello-world-2815656413-9dg67   ...   10.0.0.6   gke-cluster-1-default-pool-e0b8f239-9j5s
hello-world-2815656413-oyh55   ...   10.0.0.7   gke-cluster-1-default-pool-e0b8f239-9dha
hello-world-2815656413-dgh7s   ...   10.0.0.8   gke-cluster-1-default-pool-e0b8f239-hsux

Use the external IP address (LoadBalancer Ingress) to access the Hello World application:

curl http://<external-ip>:<port>

The response to a successful request is a hello message:

Hello Kubernetes!

Note

If you are stuck on the “Kubernetes service external IP pending” problem:
  • Run a Hello World application in your cluster:
# kubectl run hello-world --replicas=2 --labels="run=LoadBalancer" --image=gcr.io/google-samples/node-hello:1.0  --port=8080
deployment.apps/hello-world created
  • Create a Service object that exposes the deployment:
# kubectl expose deployment hello-world --type=LoadBalancer --name=lb-service
service/lb-service exposed
  • Display information about the Service:
# kubectl get services lb-service
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
lb-service   LoadBalancer   10.102.7.76   <pending>     8080:31031/TCP   1m

After you create the service, it takes some time for the cloud infrastructure to create the load balancer and populate the IP address in the service.
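You can watch for that transition by checking the EXTERNAL-IP column. The parsing below is a self-contained sketch run against a hard-coded copy of the listing above; against a live cluster you would feed it `kubectl get services lb-service` instead:

```shell
# Hard-coded sample of `kubectl get services lb-service` while the LB is provisioning
svc='NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
lb-service   LoadBalancer   10.102.7.76   <pending>     8080:31031/TCP   1m'

# EXTERNAL-IP is the 4th column of the data row (row 2, after the header)
external_ip=$(echo "$svc" | awk 'NR==2 {print $4}')
if [ "$external_ip" = "<pending>" ]; then
  echo "load balancer not provisioned yet"
fi
```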

Sometimes, though, the cluster is running but the external IP remains in the pending state.

Here is the twist with a LoadBalancer: if Kubernetes is running in a cluster that does not support the LoadBalancer service type, the load balancer will not be provisioned, and the service will continue to behave like a NodePort service.
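That fallback means a `<pending>` service is still reachable on any node's IP at its NodePort. A sketch using the values from the listing above (the node IP is the example node used in this article, and 31031 comes from the `8080:31031/TCP` column):

```shell
# NodePort fallback: a <pending> LoadBalancer service still answers on <node-ip>:<node-port>
node_ip="172.10.2.10"   # example node IP from this article
node_port="31031"       # the NodePort half of 8080:31031/TCP
url="http://$node_ip:$node_port"
echo "$url"             # curl this URL to reach the service while the LB is pending
```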

In that case, if you manage to attach an EIP or VIP to one of your nodes, you can set that address as the EXTERNAL-IP of your type=LoadBalancer service in the cluster. For example, with the EIP/VIP address 172.10.2.10 attached to the node:

root@kube-master:/home/ansible# kubectl patch svc lb-service  -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.10.2.10"]}}'
service/lb-service patched
  • Display information about the Service:
root@kube-master:/home/ansible# kubectl get services lb-service
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
lb-service   LoadBalancer   10.102.7.76   172.10.2.10   8080:31031/TCP   10m
  • Now you can access the service at that IP address:
root@kube-master:/home/ansible# curl 172.10.2.10:8080
Hello Kubernetes!

Well done!


Vineet Sharma, Founder and CEO of Kubernetes Advocate. Tech author, cloud-native architect, and startup advisor. https://in.linkedin.com/in/vineet-sharma-0164