Huawei Cloud CCE Kubernetes Ingress — 3: Sticky Sessions

Burak Ovalı · Huawei Developers · May 8, 2023

Intro

In this last article of the series, we will implement sticky sessions in CCE, the service that provides a managed Kubernetes environment on Huawei Cloud. ELB Ingress and Nginx Ingress were covered in the previous two articles.

1- Huawei Cloud CCE Kubernetes Ingress — 1: ELB Ingress Service with SSL

2- Huawei Cloud CCE Kubernetes Ingress — 2: Nginx Ingress

3- Huawei Cloud CCE Kubernetes Ingress — 3: Sticky Sessions

I suggest you read those articles first. Prerequisites have been skipped as this article focuses directly on Sticky Sessions. In this article, three different sticky sessions will be discussed.

1 — Sticky Session Through Load Balancing Service (Layer 4)

2 — Sticky Session ELB Ingress (Layer 7)

3 — Sticky Session Nginx Ingress

Sticky Session Through Load Balancing Service (L4)

In layer-4 load balancing, a source-IP-based sticky session (hash routing based on the client IP address) can be enabled on Services. To do this, we use the Service's external traffic policy.

Let’s first look at the definition of externalTrafficPolicy:

externalTrafficPolicy denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. “Local” preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading. “Cluster” obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading.

As can be understood from the definition, the externalTrafficPolicy parameter can take two values; Local and Cluster. The following diagram summarizes the above definition:

externalTrafficPolicy

External Traffic Policy is such a broad topic that it could be the subject of another article, so we won't do a deep dive here.

Let’s deploy 2 replica Pods with the Deployment manifest below. Let’s distribute Pods to two different Nodes with PodAntiAffinity.

Deployment Resource Manifest
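The original manifest is embedded as a gist; a minimal sketch of such a Deployment could look like the following. The names, labels, and SWR image path are assumptions for illustration, not the article's exact values.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      affinity:
        # Spread the two replicas across different Nodes.
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: flask-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: flask-app
          # Hypothetical SWR image path; replace with your own repository.
          image: swr.tr-west-1.myhuaweicloud.com/demo/flask-app:v1
          ports:
            - containerPort: 8080
```

The required `podAntiAffinity` rule guarantees that the scheduler never places both replicas on the same Node, which the later externalTrafficPolicy tests depend on.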

We deploy a simple Flask application to understand from which Source IP the traffic comes to the Pod. For those wondering, the Source Code is as follows:

Python/Flask Source Code for Article
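The original gist isn't embedded here; a minimal sketch of such a Flask app, echoing the Pod's own IP and the request's source IP, is shown below. The field names IP and IP_ADDR match the response discussed later; the port and exact structure are assumptions.

```python
import socket

from flask import Flask, request

app = Flask(__name__)


def pod_ip() -> str:
    """Resolve this Pod's own IP address (the request's destination)."""
    try:
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        return "unknown"


@app.route("/")
def index():
    # IP: the Pod's address; IP_ADDR: the address the request came from.
    return {"IP": pod_ip(), "IP_ADDR": request.remote_addr}


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```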

You can take a look at the article here to containerize the above application and store it in SWR, the Image Repository of Huawei Cloud.

Let’s deploy it by running the following command line in the directory where the Deployment yaml file is located.

kubectl apply -f deployment.yaml

Let’s check the status of the Pods by running the command line below.

kubectl get deployments -n default
Deployment Resources

Both Pods are in ready status and each Pod is running on different Nodes.

A little reminder: externalTrafficPolicy only applies to NodePort and LoadBalancer Services. That's why we need a Load Balancer first. Let's quickly create a Load Balancer instance using Huawei Cloud's Elastic Load Balance service.

Creating a Dedicated Load Balancer

Elastic Load Balance (ELB) distributes incoming traffic across multiple backend servers based on listening rules. This expands service capabilities of applications and improves their fault tolerance. For more on the ELB, see here.

Let’s go to the Elastic Load Balancer service under all services. Then click on the Buy Elastic Load Balancer button.

Buy Elastic Load Balancer on HC

The Istanbul region currently supports only the Dedicated type. If you are in another region, still select Dedicated as the type.

Elastic Load Balancer Configurations

In the next step, choose New EIP for Public IP and Traffic as Billed By; we choose Traffic to avoid extra charges. You can pick any value for Bandwidth, as the fee does not change. Finally, choose Network Load Balancing and complete the process by selecting a Specification of your choice.

Elastic Load Balancer Configurations

Let’s note the Instance ID of the created Load Balancer.

Elastic Load Balancer Instance ID

Let’s create a Load Balancer Service in front of the Pods. Load Balancer Service will be created with the Service manifest below.

The value of the spec.externalTrafficPolicy parameter in the yaml file is Cluster, which is the default; it is included for clarity. Several parameters are also used under metadata.annotations. Let's quickly explain them:

  • kubernetes.io/elb.class: The load balancer type; union for a shared load balancer, performance for a dedicated load balancer.
  • kubernetes.io/elb.id: This parameter indicates the ID of a load balancer.
  • kubernetes.io/elb.lb-algorithm: This parameter indicates the load balancing algorithm of the backend server group.
  • kubernetes.io/elb.session-affinity-mode: Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server.
Load Balancer Service
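The Service manifest is embedded as a gist; a sketch of it, based on the annotations explained above, could look like this. The Service name, ports, and the placeholder ELB ID are assumptions; substitute the instance ID you noted earlier.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-app-lb
  namespace: default
  annotations:
    # "performance" selects the dedicated load balancer created above.
    kubernetes.io/elb.class: performance
    # Hypothetical placeholder; use your own ELB instance ID.
    kubernetes.io/elb.id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    # Hash requests to backends based on the source IP address.
    kubernetes.io/elb.lb-algorithm: SOURCE_IP
    # Listener-level stickiness keyed on the client IP address.
    kubernetes.io/elb.session-affinity-mode: SOURCE_IP
spec:
  type: LoadBalancer
  # Default value; shown explicitly. Change to Local to preserve client IP.
  externalTrafficPolicy: Cluster
  selector:
    app: flask-app
  ports:
    - port: 80
      targetPort: 8080
```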

Let’s deploy it by running the following command line in the directory where the Service yaml file is located.

kubectl apply -f svc.yaml

Let’s check the service resource by running the following command line.

kubectl get svc -n default

As can be seen from the output, we can send requests to the application through the public IP 101.44.33.49. Let's examine the response when we send a request to this IP address via the browser. There are two points to note here: IP (the destination IP) and IP_ADDR (the source IP).

Traffic is illustrated with the following diagram:

externalTrafficPolicy=Cluster

With Cluster, it does not matter which Node receives the request or which Node the Pod is running on. The disadvantage of this setup is that Service access incurs a performance loss due to route redirection, and the client's source IP address cannot be obtained.

Now let's change the spec.externalTrafficPolicy parameter in the service yaml file to Local. Then, let's apply the new changes by running the following command line:

kubectl apply -f svc.yaml
Load Balancer Service

Let’s send a request to the same IP address again and examine the returned response.

We see that the IP and IP_ADDR values are different in the response after the new request. The flow is illustrated by the diagram below.

externalTrafficPolicy=Local

By setting externalTrafficPolicy=Local, Nodes only route traffic to Pods that are on the same node, which then preserves Client IP. It’s important to recognize that externalTrafficPolicy is not a way to preserve source IP; it’s a change in networking policy that happens to preserve Source IP.

Recall the service manifest above: we set SOURCE_IP as the load balancing algorithm of the ELB via annotations. In other words, end users get a sticky session at the Node level.

Sticky Session ELB Ingress (Layer 7)

In layer-7 Load Balancing, Sticky Session based on HTTP cookies and APP cookies can be enabled. To enable such Sticky Session, the following conditions must be met:

  1. The application (workload) corresponding to the ingress is enabled with workload anti-affinity.
  2. Node affinity is enabled for the Service corresponding to the ingress.

Let’s deploy 2 replica Pods with the Deployment manifest below. Let’s distribute Pods to two different Nodes with PodAntiAffinity.

Deployment Resource Manifest

Configure the Sticky Session in a Service. An Ingress can connect to multiple Services, and each Service can have different Sticky Sessions. Let’s create a NodePort service with the following manifest.

NodePort Service
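The NodePort Service is embedded as a gist; a sketch of an HTTP-cookie-based version, based on the CCE session-affinity annotations, could look like the following. The Service name, ports, and timeout value are assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-app-nodeport
  namespace: default
  annotations:
    kubernetes.io/elb.lb-algorithm: ROUND_ROBIN
    # The ELB listener issues an HTTP cookie and pins the client to
    # the same backend for subsequent requests.
    kubernetes.io/elb.session-affinity-mode: HTTP_COOKIE
    # Stickiness duration in minutes (assumed value).
    kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "1440"}'
spec:
  type: NodePort
  selector:
    app: flask-app
  ports:
    - port: 80
      targetPort: 8080
```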

You can also select APP_COOKIE.

NodePort Service with APP_COOKIE
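For the APP_COOKIE variant, only the annotations change: the ELB keys stickiness on a cookie set by the application itself rather than one it generates. The cookie name below is an assumption.

```yaml
  annotations:
    kubernetes.io/elb.lb-algorithm: ROUND_ROBIN
    kubernetes.io/elb.session-affinity-mode: APP_COOKIE
    # Name of the cookie your application sets (hypothetical name).
    kubernetes.io/elb.session-affinity-option: '{"app_cookie_name": "JSESSIONID"}'
```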

Create an Ingress and associate it with a Service. The following example describes how to automatically create a Shared Load Balancer.

ELB Ingress Manifest
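The Ingress manifest is embedded as a gist; a sketch that auto-creates a shared load balancer via the CCE annotations could look like this. The resource names, bandwidth values, and EIP type are assumptions drawn from typical CCE examples, not the article's exact manifest.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elb-ingress
  namespace: default
  annotations:
    # "union" requests a shared load balancer.
    kubernetes.io/elb.class: union
    kubernetes.io/elb.port: '80'
    # Auto-create the load balancer with a traffic-billed public EIP.
    kubernetes.io/elb.autocreate: >-
      {"type": "public", "bandwidth_name": "cce-bandwidth",
       "bandwidth_chargemode": "traffic", "bandwidth_size": 5,
       "bandwidth_sharetype": "PER", "eip_type": "5_bgp"}
spec:
  ingressClassName: cce
  rules:
    - http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: flask-app-nodeport
                port:
                  number: 80
```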

Let's apply the manifests we created above to the Kubernetes environment by running the following command line.

kubectl apply -f .

With the second step we implemented a Sticky Session in Layer 7 using ELB Ingress. ELB Ingress is explained in detail in the first article of this series.

Sticky Session Nginx Ingress

For a Cloud Agnostic application, we can choose Nginx Ingress, which is one of the first alternatives that comes to mind in today’s projects. Nginx Ingress is explained in detail in the second article of this series. That’s why I’m skipping the Nginx Ingress installation step and other basic steps. For more information on this topic, you can check out this article.

In the first two steps, we used the externalTrafficPolicy parameter for the sticky session. With that parameter, we could reach the Pods under a given Node, and by using SOURCE_IP on a per-Node basis, we ensured that traffic from the same source IP stayed on the same Node. With Nginx Ingress, however, we will cover a different sticky session use case, one the first approach does not support.

Sticky Session with Nginx Ingress is illustrated with the following diagram:

Nginx Ingress Sticky Session

As can be seen from the diagram, we can apply sticky sessions for the Pods behind the SVC. It doesn’t matter which Node these Pods are on.

Let’s deploy Deployment and ClusterIP Service.

Deployment and ClusterIP Resources Manifest
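The combined manifest is embedded as a gist; a sketch of the Deployment plus ClusterIP Service pair could look like the following. The names, image path, and ports are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          # Hypothetical SWR image path; replace with your own repository.
          image: swr.tr-west-1.myhuaweicloud.com/demo/flask-app:v1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-svc
  namespace: default
spec:
  # Cluster-internal Service; Nginx Ingress routes to its endpoints.
  type: ClusterIP
  selector:
    app: flask-app
  ports:
    - port: 80
      targetPort: 8080
```

Note that no anti-affinity is needed here: Nginx Ingress applies stickiness per Pod, regardless of which Node each Pod runs on.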

Let’s deploy the resources by running the following command line in the directory where the yaml file is located.

kubectl apply -f deployment-svc.yaml

Let's apply the following yaml file for Nginx Ingress and explain the annotations that are new compared to the previous article.

Nginx Ingress Manifest
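The Ingress manifest is embedded as a gist; a sketch using the session-affinity annotations explained below could look like this. The cookie name, durations, and backend Service name are assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-sticky
  namespace: default
  annotations:
    # Enable cookie-based session affinity in the Nginx Ingress controller.
    nginx.ingress.kubernetes.io/affinity: cookie
    # Name of the cookie the controller creates (assumed name).
    nginx.ingress.kubernetes.io/session-cookie-name: STICKY
    # Cookie lifetime in seconds (Max-Age and legacy Expires directives).
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-app-svc
                port:
                  number: 80
```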

Let’s complete the nginx ingress installation by running the following command line in the directory where the ingress yaml file is located.

kubectl apply -f nginx-ingress.yaml

Unlike the previous manifest, there are some annotations here. For detailed information about these, you can check the official documentation of Nginx.

  • nginx.ingress.kubernetes.io/affinity: Type of the affinity, set this to cookie to enable session affinity.
  • nginx.ingress.kubernetes.io/session-cookie-name: Name of the cookie that will be created
  • nginx.ingress.kubernetes.io/session-cookie-max-age: Time until the cookie expires, corresponds to the Max-Age cookie directive
  • nginx.ingress.kubernetes.io/session-cookie-expires: Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date

Conclusion

The first two articles explained two different Ingress options in the Huawei Cloud environment: Huawei Cloud's native ELB Ingress and the third-party Nginx Ingress.

Sticky sessions, including with Nginx Ingress, were discussed in this last article of the series. The first two approaches are built with Huawei Cloud services, while the third uses the third-party Nginx Ingress. There are different sticky session setups for different use cases, and each can be added to our Kubernetes environment in a few simple steps.

References

Create an Nginx Ingress

Creating and Pushing Container Image to SWR

Docker Container App, Pull and Push to Huawei Cloud SWR

Create an ELB Ingress
