Kubernetes and ELBs, The Hard Way

Hagai Barel
Oct 8, 2017


This story was inspired by Kelsey Hightower’s GitHub project “kubernetes the hard way”. Go check it out, it’s a great exercise that will deepen your understanding of how some of the magic around bootstrapping a Kubernetes cluster happens.

Kubernetes is a wonderful piece of software, and while you gain major benefits running it on bare-metal machines (or VMs), it really shines when running on a cloud provider such as AWS or Azure.

One of those benefits is exposing services to the outside world. In an almost auto-magical way, one can define a service as type=LoadBalancer, and Kubernetes provisions a load balancer from the underlying cloud provider and configures it properly. But what really happens behind the scenes? How do all of the pieces fit together?
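
For reference, this is all it takes on the Kubernetes side. A minimal sketch (the service name and selector here are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: LoadBalancer   # Kubernetes asks the cloud provider for a load balancer
      selector:
        app: my-app        # route traffic to pods carrying this label
      ports:
      - port: 80           # the port the load balancer listens on
        targetPort: 80     # the container port on the pods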

We’ll try to demystify the magic behind type=LoadBalancer by using AWS’ ELB as a reference: we’ll provision our own load balancer and configure it manually. Along the way, we’ll point out the actions needed to glue all the parts together.

Why do it? First of all, it’s a great exercise that will help us gain a better understanding of how things really work. Second, there are times when you might want to expose multiple services through the same ELB, say from a cost perspective. When you define type=LoadBalancer on a service, Kubernetes provisions a separate ELB for each one, meaning that if you have 5 services with type=LoadBalancer, you get 5 ELBs. Note that a better approach to this problem is probably an ingress controller, but that requires additional components, configuration, and hassle, and is out of the scope of this story.

Why not? Well, we are going to do it manually, which means it isn’t managed by Kubernetes, which means that should our service get deleted or modified in the future, we’ll need to manually re-configure the ELB.

The Setup

I’ll be using a cluster on AWS that I brought up using kops. Kops makes it really easy to bring up a cluster on AWS and to manage it later on. The process is actually pretty simple and straightforward; go check out the docs for a detailed overview of its capabilities.
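
For reference, bringing up a cluster with kops looks roughly like this (a sketch; the cluster name, state bucket, and zone are placeholders):

    # kops keeps cluster state in an S3 bucket
    export KOPS_STATE_STORE=s3://my-kops-state-bucket

    # generate the cluster configuration, then apply it
    kops create cluster --name=k8s.example.com --zones=us-east-1a
    kops update cluster k8s.example.com --yes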

Once the cluster is up and running, we can start creating our resources. For this demonstration, we’ll use an nginx deployment and expose it using a service that we’ll manually configure an ELB for.

The deployment manifest looks something like this (a minimal version, assuming the stock nginx:alpine image and an app=nginx label):
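
    apiVersion: apps/v1    # older clusters may need apps/v1beta1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:alpine    # alpine-based nginx image
            ports:
            - containerPort: 80    # expose container port 80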

Nothing too fancy here: we’re defining a deployment with nginx (an alpine-based image) as the container and exposing container port 80. Notice that the deployment defines 3 replicas, but this isn’t really necessary and we could have done it with just one.

Now comes the interesting bit: we define a Kubernetes service with type=NodePort (again, a minimal version of the manifest):
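
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      type: NodePort      # expose the service on a port on every node
      selector:
        app: nginx        # match the pods from our deployment
      ports:
      - port: 80          # the service port
        targetPort: 80    # the container port; Kubernetes allocates the node port itself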

The spec.selector field defines the labels on our pods that the service will use as endpoints to route traffic to (it’s actually a bit more complex than that, but let’s stick with the simple version). The use of type=NodePort means that the Kubernetes master will allocate a port from a flag-configured range (default: 30000–32767), and each node will proxy that port (the same port number on every node) into our service.

This gives developers the freedom to set up their own load balancers, to configure environments that are not fully supported by Kubernetes, or even to just expose one or more nodes’ IPs directly. (from the kubernetes docs)

Once the deployment and service are created in the cluster, we can run kubectl get pods,svc -l app=nginx to list our resources. The output will look roughly like this (pod hashes, the cluster IP, and the allocated node port will differ):
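
    NAME                        READY     STATUS    RESTARTS   AGE
    po/nginx-7c5ddbdf54-9zxl2   1/1       Running   0          1m
    po/nginx-7c5ddbdf54-kq8tw   1/1       Running   0          1m
    po/nginx-7c5ddbdf54-vt6cn   1/1       Running   0          1m

    NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    svc/nginx   NodePort   100.68.123.45   <none>        80:31634/TCP   1m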

We can see we have 3 nginx pods running and a service of type=NodePort. Note the ports the service has: 80:31634. This means that external traffic arriving at a node on port 31634 will be routed to the service’s port 80 and on to our nginx pods.
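
We can already verify this against a node directly, before any ELB is involved (assuming the node’s security group allows it; the node IP is a placeholder):

    # read the node port Kubernetes allocated for the service
    kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'

    # hit any node on that port; nginx should answer
    curl http://<node-public-ip>:31634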

Configuring an ELB on AWS

So now that we have a cluster running, nginx pods, and a service with type=NodePort, we can go to the AWS console and configure the ELB.

Log on to the AWS console and navigate to EC2 -> Load Balancers. From the top of the screen, select the “Create Load Balancer” button. The wizard is pretty straightforward, but there are a couple of things to note:

  1. The load balancer type should be Classic Load Balancer.
  2. On the listener configuration, set the load balancer port to 80 and the instance port to the service’s node port (31634 in our case).
  3. Select the VPC that hosts the cluster and the relevant subnets. Note that if you used kops’ private topology to bring up the cluster, you’ll want to target the public subnets (which kops names utility-<zone> by default).
  4. Next, select a security group that allows inbound traffic on the load balancer port (80). This is a pretty permissive security group as it allows all inbound traffic on port 80, but it should be good enough for our purpose.
  5. Skip the security settings for now; we’re using port 80, which isn’t secured by default. Again, not production settings, but good enough for our needs.
  6. Use the default health check settings; they should be good for now.
  7. In the add EC2 instances screen, select all of your cluster’s worker instances.
  8. Add tags if you wish to, review, and create.

The last step is to add a rule to your nodes’ security group allowing traffic from the load balancer’s security group (which we defined while creating the load balancer).
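
The same rule can be added from the CLI (a sketch; both security group IDs are placeholders):

    # let the ELB's security group reach the service's node port on the nodes
    aws ec2 authorize-security-group-ingress \
      --group-id sg-nodes1234 \
      --protocol tcp \
      --port 31634 \
      --source-group sg-elb5678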

Save the new rule and head back to the ELB description -> instances tab.

It shows us only 3 instances in service, which makes sense as we are running just 3 replicas of nginx.

To access nginx, find the DNS name of the ELB (on the description tab) and navigate to it.
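
The DNS name can also be fetched from the CLI and tested directly (the load balancer name is a placeholder):

    # look up the ELB's public DNS name
    aws elb describe-load-balancers \
      --load-balancer-names my-nginx-elb \
      --query 'LoadBalancerDescriptions[0].DNSName' --output text

    # nginx should answer with its welcome page
    curl http://<elb-dns-name>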

Voilà! That’s our nginx service routing traffic to our pods, running in the cluster behind our manually created ELB!

Clean up

Before we re-cap, let’s delete the resources used for this demo:

  1. From the AWS console, delete the ELB and the security group. You’ll need to remove the ELB security group reference from the node’s security group before you can delete it.
  2. Using kubectl, delete the nginx resources (kubectl delete all -l app=nginx)

Re-cap

Whew, that was a lot of work. Think of all the magic that happens when you define a Kubernetes service with type=LoadBalancer:

  • A load balancer is provisioned from the underlying provider (AWS in our case)
  • The relevant security groups are created and configured
  • The cluster instances are registered with the load balancer
  • Port mappings are handled according to the service definition

And of course, when you delete that service, everything is deleted in the proper order and cleaned up. We could have done this much more easily by using the AWS CLI and typing in the commands to create and configure all of our resources, but doing it through the console illustrates the process better.
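
For the curious, the CLI version would look roughly like this (a sketch; the load balancer name, subnet, security group, and instance IDs are placeholders):

    # create a classic ELB mapping port 80 to the service's node port
    aws elb create-load-balancer \
      --load-balancer-name my-nginx-elb \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=31634" \
      --subnets subnet-abcd1234 \
      --security-groups sg-elb5678

    # register the cluster's worker instances with it
    aws elb register-instances-with-load-balancer \
      --load-balancer-name my-nginx-elb \
      --instances i-0123456789abcdef0 i-0fedcba9876543210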

Summary

So, when should you create a load balancer by hand? If you don’t really need to, never. It might come in handy if you’d like to share a load balancer between multiple services, but a better solution will probably be an ingress controller.

Kubernetes is a great piece of software that can handle a lot of the heavy lifting that comes with running on a cloud provider, configuring assets dynamically as the application’s needs change and evolve. Let it do its magic.
