Cluster Node Autoscaling with Oracle Container Engine for Kubernetes

Mickey Boxell
Oracle Developers

--

Oracle Container Engine for Kubernetes (OKE) reduces the operational burden of creating and operating enterprise-grade Kubernetes clusters used to deploy containerized applications to the cloud. Oracle manages your cluster resources and automates recurring Kubernetes administration and scaling tasks to simplify Kubernetes operations. Until now, OKE customers have used pod-level autoscaling to meet their changing resource requirements. This approach is suitable for many use cases, but the need to manually scale the underlying node resources can put an unnecessary burden on cluster administrators.

To address this need, we released cluster node autoscaling for OKE in March 2021. You can now use the Cluster Autoscaler to dynamically scale node pools based on workload demand.

Autoscale your Kubernetes clusters

Applications face dynamic demands. For example, a company’s e-commerce application might see an increase in site visits because of a holiday or the sudden popularity of one of their products. Their Kubernetes cluster has to adapt to these fluctuating demands, or they risk availability issues from underprovisioning resources or the extra cost of overprovisioning them.

OKE allows you to right-size your application by horizontally or vertically scaling pods in your cluster using data from the Kubernetes Metrics Server or other open source metrics servers. The Kubernetes Horizontal Pod Autoscaler (HPA) adjusts the number of pods in a deployment, and the Kubernetes Vertical Pod Autoscaler (VPA) adjusts the resource requests and limits for the containers running in a deployment’s pods. Both solutions make adjustments based on CPU utilization, and HPA also supports custom metrics. They address pod-level autoscaling but not the scaling of the underlying resources. OKE currently allows you to manually scale those underlying resources: the shape and number of worker nodes in a cluster’s node pools. Manual scaling works for predictable workloads, but it might not be suitable for unpredictable ones and puts an unnecessary burden on cluster administrators.
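
For illustration, here is a minimal HPA manifest of the kind described above; the Deployment name, replica bounds, and utilization target are hypothetical values chosen for the example, not taken from the original post.

```yaml
# Hypothetical example: scale the "web-frontend" Deployment between 2 and 20
# replicas, targeting 70% average CPU utilization as reported by the Kubernetes
# Metrics Server. (On Kubernetes 1.23 and later the GA API group is autoscaling/v2.)
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```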

You can solve this challenge with the Kubernetes Cluster Autoscaler, which OKE now supports. Cluster Autoscaler automatically adjusts the size of a Kubernetes cluster’s node pools based on workload demand. It helps you optimize the use and cost of your OCI Compute resources. When demand increases, the number of nodes is scaled up to meet it. When demand decreases, instead of leaving an excess of resources assigned to your cluster, the number of nodes is scaled down.
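
As a sketch of what that looks like in practice, the abridged Deployment below shows the style of configuration the Cluster Autoscaler takes: each managed node pool is registered with a minimum and maximum node count, and scale-down behavior is tuned with additional flags. The image reference and node pool OCID are placeholders, and the flag values are illustrative rather than prescriptive; the OKE documentation provides the exact manifest for each supported Kubernetes version.

```yaml
# Sketch of the Cluster Autoscaler Deployment (abridged). The container image
# and node pool OCID are placeholders; the node count bounds and scale-down
# timings are illustrative values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - name: cluster-autoscaler
          image: <cluster-autoscaler-image-for-your-kubernetes-version>
          command:
            - ./cluster-autoscaler
            - --cloud-provider=oci
            # Scale this node pool between 1 and 10 worker nodes.
            - --nodes=1:10:<node-pool-ocid>
            # Wait after a scale-up before considering scale-down, and only
            # remove nodes that have been unneeded for a sustained period.
            - --scale-down-delay-after-add=10m
            - --scale-down-unneeded-time=10m
```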

Tying it all together

Let’s return to the e-commerce application we mentioned earlier. The cluster administrator responsible for the application deploys the Kubernetes Metrics Server to their cluster and configures HPA to create replicas of their Kubernetes deployments based on CPU and memory utilization. They go one step further and use the recommender component of VPA to determine the ideal CPU and memory request values for their containers.
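
A VPA object used purely for recommendations might look like the following sketch; the target Deployment name is hypothetical, and updateMode: "Off" keeps the autoscaler from actually modifying running pods.

```yaml
# Hypothetical example: run the VPA recommender against the "web-frontend"
# Deployment without letting it change running pods. The recommended CPU and
# memory requests appear in the object's status.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-frontend-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  updatePolicy:
    updateMode: "Off"
```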

Because they’re dealing with a mission-critical application, the administrator specifies pod disruption budgets, which limit the total number of pods that can be simultaneously unavailable because of voluntary disruptions, along with other Kubernetes features to maintain application availability. Until now, the automation ended here: the administrator had to manually scale the number and size of nodes in their node pools.
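
A pod disruption budget for the storefront pods could look like this sketch; the label selector and the minAvailable threshold are example values (policy/v1beta1 applies to clusters older than Kubernetes 1.21, policy/v1 to newer ones).

```yaml
# Hypothetical example: during voluntary disruptions, such as a node being
# drained before scale-down, keep at least 90% of the "web-frontend" pods running.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-frontend-pdb
spec:
  minAvailable: 90%
  selector:
    matchLabels:
      app: web-frontend
```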

With the introduction of Cluster Autoscaler, node scaling can be automated too. The cluster administrator deploys Cluster Autoscaler and configures it to scale the node pools running stateless workloads. To maintain control over how nodes are added to and removed from certain pools, they choose not to include the node pools running StatefulSets and kube-system pods.
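
In terms of the Deployment sketched earlier, that choice comes down to which node pools are passed to the autoscaler through its --nodes arguments; a pool that isn’t listed is never resized. The OCIDs below are placeholders.

```yaml
# Excerpt of the Cluster Autoscaler container args from the earlier sketch.
# Only the node pools running stateless workloads are listed; the pools hosting
# StatefulSets and kube-system pods are omitted, so the autoscaler leaves them alone.
          command:
            - ./cluster-autoscaler
            - --cloud-provider=oci
            - --nodes=1:10:<stateless-node-pool-1-ocid>
            - --nodes=1:10:<stateless-node-pool-2-ocid>
```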

After adopting Cluster Autoscaler, the next time one of those high-traffic periods comes around, HPA triggers scaling at the pod level. When the number of replicas increases beyond what the existing nodes can handle and pods become unschedulable, Cluster Autoscaler kicks in and creates new nodes for those pods to be scheduled onto. When the high-traffic period subsides, HPA scales the number of replicas back down to the appropriate level, and Cluster Autoscaler scales down the number of nodes.

Cluster Autoscaler is supported for clusters running Kubernetes version 1.17 and higher. Cluster Autoscaler runs as a deployment in your cluster and scales worker nodes in specified node pools.

Want to know more?

To learn more, use the following resources:

Originally published on blogs.oracle.com: https://blogs.oracle.com/cloud-infrastructure/cluster-node-autoscaling-with-oci-container-engine-for-kubernetes

--

Mickey Boxell
Oracle Developers

Product Manager — OCI Container Engine for Kubernetes (OKE)