Container Load Balancing on Google Kubernetes Engine (GKE)

Get Cooking in Cloud

Priyanka Vergadia
Google Cloud - Community
5 min read · May 14, 2020


Authors: Priyanka Vergadia, Stephanie Wong

Introduction

“Get Cooking in Cloud” is a blog and video series to help enterprises and developers build business solutions on Google Cloud. In this series, we identify specific topics that developers want to architect on Google Cloud, and then create a miniseries on each one.

In this miniseries, we will go over Google Cloud load balancing.

  1. Choosing the right load balancer
  2. Application Capacity Optimizations with Global Load Balancing
  3. Capacity Management with load balancing
  4. Load Balancing to GKE network endpoint groups (this article)

In this article we will cover how load balancing works with Google Kubernetes clusters running as backends.

Check out the video: Load balancing Google Kubernetes Container clusters

What you’ll learn

  • Why we need container load balancing
  • What a Network Endpoint Group (NEG) is
  • How to set up container-native load balancing in GKE
  • Benefits of container-native load balancing

Review

In this series we are working with Beyond Treat, a one-stop shop for vegan dog treats! Their online business has been booming, and after upgrading to Google’s global load balancer, they also decided to move their web backend to a containerized microservices environment on Google Kubernetes Engine.

History of load balancing and some challenges

Load balancers were initially built to support resource allocation for virtual machines (VMs), which helps keep workloads highly available.

When containers and container orchestrators started to take off, users adapted these VM-focused load balancers for their use, even though the performance was suboptimal.

Traditionally, HTTP(S) load balancers targeting Kubernetes clusters were actually targeting their nodes, because the load balancer had no way to recognize individual pods. In the absence of a way to define a group of pods as backends, the load balancer used instance groups to group VMs as backends. Ingress support in GKE used those instance groups, so the HTTP(S) load balancer performed load balancing to the nodes in the cluster.

iptables routing traffic to the pods

iptables rules programmed on the nodes route requests to the pods serving as backends for the load-balanced application. Load balancing to the nodes was the only option, since the load balancer didn’t recognize pods or containers as backends. The result was imbalanced load and a suboptimal data path with unnecessary extra hops between nodes.

Network Endpoint Group (NEG) — container native load balancing

Google introduced the Network Endpoint Group (NEG) abstraction layer, which enables container-native load balancing. The load balancer gains visibility into a Kubernetes cluster’s pods because NEGs are integrated with the Kubernetes Ingress controller running on GCP.

With Network Endpoint Group (NEG), the load balancer has visibility into a Kubernetes cluster’s pods.
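As a sketch of how this looks in practice, the snippet below annotates a Kubernetes Service so that GKE creates a NEG whose endpoints are the Service’s pods. The Service name, selector, and ports are illustrative, not from Beyond Treat’s actual deployment:

```shell
# Illustrative Service; name, selector, and ports are assumptions.
# The cloud.google.com/neg annotation asks GKE to create a Network
# Endpoint Group tracking this Service's pods, so an Ingress-provisioned
# load balancer can target the pods directly.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: treats-frontend
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: treats-frontend
  ports:
  - port: 80
    targetPort: 8080
EOF
```

On newer VPC-native clusters, GKE applies this annotation to Ingress-exposed Services by default, so you may not need to add it by hand.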

How Network Endpoint Groups (NEGs) help route traffic to the pods

In the case of Beyond Treat, they have a multi-tiered e-commerce deployment and want to expose one service to the internet using GKE. With NEGs they can now provision an HTTP(S) load balancer, allowing them to configure path-based or host-based routing to their backend pods.
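A minimal Ingress manifest with path-based routing to two backend Services might look like the sketch below. The Service names and paths are hypothetical stand-ins for Beyond Treat’s tiers:

```shell
# Hypothetical Ingress: GKE's Ingress controller provisions an external
# HTTP(S) load balancer and, with NEG-annotated Services, uses the pods
# themselves as backends. Paths and Service names are illustrative.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: beyond-treat-ingress
spec:
  rules:
  - http:
      paths:
      - path: /store
        pathType: Prefix
        backend:
          service:
            name: treats-frontend
            port:
              number: 80
      - path: /checkout
        pathType: Prefix
        backend:
          service:
            name: checkout
            port:
              number: 80
EOF
```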

Set up container native load balancing in GKE

Create a VPC-native GKE cluster. By default, Kubernetes uses static routes for pod networking, which requires the Kubernetes control plane to maintain a route to each node. This comes at a scaling cost.

In GKE you have the option to create clusters in VPC-native mode, which provides container native load balancing that uses the NEG data model.
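As a rough sketch, a VPC-native cluster can be created by enabling alias IPs. The cluster name and zone below are placeholders:

```shell
# Placeholder cluster name and zone; substitute your own.
# --enable-ip-alias makes the cluster VPC-native: pod IPs come from a
# secondary range of the VPC subnet, so load balancers can reach pods
# directly through NEGs instead of per-node static routes.
gcloud container clusters create beyond-treat-cluster \
    --zone us-central1-a \
    --enable-ip-alias
```

Note that recent GKE versions create clusters in VPC-native mode by default.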

Set up a VPC-native GKE cluster

VPC-native mode means you have connectivity between all pods in your VPC without the overhead of route scaling, and traffic is evenly distributed among the available healthy backends in an endpoint group.

Benefits of container native load balancing

The benefits of NEGs and container-native load balancing include even traffic distribution, health checks, graceful termination, and an optimal data path.

  • With container-native load balancing, traffic is distributed evenly among the available healthy backends in an endpoint group, following the defined load balancing algorithm.
  • Container-native load balancing supports health checking, including TCP, HTTP(S), and HTTP/2 checks. With NEGs, the load balancer checks the pods directly, rather than sending checks to a node that forwards them to a random pod. As a result, health checks more accurately mirror the health of the backends.
  • You also benefit from graceful termination: when a pod is removed, the load balancer automatically drains connections to that endpoint based on the connection draining period configured for it.
  • Traffic hits an optimal data path — With the ability to load balance directly to containers, the traffic hop from the load balancer to the nodes disappears, since load balancing is performed in a single step rather than two.
  • Increased visibility and security — Container-native load balancing helps you troubleshoot services at the pod level. It preserves the source IP in the HTTP header, which makes it easier to trace traffic back to its source. And because the container sees packets arrive from the load balancer rather than through a source NAT from another node, you can create firewall rules using node-level network policies.
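Several of these behaviors can be tuned through GKE’s BackendConfig resource. The sketch below sets a direct-to-pod health check and a connection draining period, then attaches the config to a Service; the resource names, health-check path, and timeout are assumptions for illustration:

```shell
# Hypothetical BackendConfig tuning the load balancer's health check
# and connection draining for NEG backends; all values are illustrative.
kubectl apply -f - <<EOF
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: treats-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 8080
  connectionDraining:
    drainingTimeoutSec: 60
EOF
# Attach the BackendConfig to the Service ("default" applies to all ports).
kubectl annotate service treats-frontend \
    cloud.google.com/backend-config='{"default": "treats-backendconfig"}'
```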

Conclusion

Well, there you have it: load balancing, deconstructed all the way! In the first three articles we learned about the load balancing options on Google Cloud and saw that global load balancing is more performant and reliable than regional HTTP(S) load balancing. Finally, we learned that even with GKE clusters, container-native load balancing can target backend pods directly, for optimal traffic distribution and better scalability.

Stay tuned for more articles in the Get Cooking in Cloud series, and check out the references below for more details.

Next steps and references:
