Deploying Cluster Autoscaler on EKS using ArgoCD and Helm
Introduction
Cluster Autoscaler is a tool that automatically adjusts the number of nodes in a Kubernetes cluster: it scales up when pods fail to schedule because of insufficient capacity, and scales down when nodes are underutilized.
On AWS, the Autoscaler adjusts the number of nodes by changing the desired capacity of an AWS Auto Scaling group.
Note: The Autoscaler won’t increase the number of nodes beyond the maximum capacity that was specified during cluster creation (for example, in your Terraform configuration).
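If your cluster is managed with Terraform, that ceiling comes from the node group’s scaling configuration. Here is an illustrative sketch (resource and cluster names are placeholders, and required arguments such as the node role and subnets are elided):

```hcl
resource "aws_eks_node_group" "workers" {
  cluster_name    = "your_cluster_name"
  node_group_name = "workers"
  # node_role_arn, subnet_ids, etc. omitted for brevity

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 5 # Cluster Autoscaler will never scale above this
  }
}
```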
Prerequisites
Before deploying Cluster Autoscaler on an Amazon EKS cluster using ArgoCD and Helm, make sure you have the following prerequisites in place:
1. An Amazon EKS cluster whose node IAM role (or a dedicated role for the autoscaler) has the necessary permissions for AWS Auto Scaling groups. These permissions should allow actions such as DescribeAutoScalingGroups, SetDesiredCapacity, and TerminateInstanceInAutoScalingGroup. Here is an example IAM policy you can use:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
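As a quick sanity check before attaching the policy, you can verify that a policy document actually allows every action Cluster Autoscaler needs. The snippet below is an illustrative sketch (the `REQUIRED_ACTIONS` set mirrors the example policy above; it is not an official AWS validator):

```python
import json

# Actions Cluster Autoscaler needs, mirroring the example policy above.
REQUIRED_ACTIONS = {
    "autoscaling:DescribeAutoScalingGroups",
    "autoscaling:DescribeAutoScalingInstances",
    "autoscaling:DescribeLaunchConfigurations",
    "autoscaling:DescribeTags",
    "autoscaling:SetDesiredCapacity",
    "autoscaling:TerminateInstanceInAutoScalingGroup",
}

def missing_actions(policy_json: str) -> set:
    """Return the required actions that the policy does not explicitly allow."""
    policy = json.loads(policy_json)
    allowed = set()
    for statement in policy.get("Statement", []):
        if statement.get("Effect") == "Allow":
            actions = statement.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            allowed.update(actions)
    return REQUIRED_ACTIONS - allowed

policy = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
"""
print(missing_actions(policy))  # -> set(), nothing missing
```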
2. ArgoCD installed on the cluster. If you haven’t set up ArgoCD yet, you can follow the official documentation.
3. A GitHub repository connected to ArgoCD, where we will store the deployment manifest for our Cluster Autoscaler.
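If ArgoCD is not installed yet, the standard installation from the ArgoCD getting-started guide looks roughly like this (run against your EKS cluster’s kubeconfig):

```shell
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```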
Now let’s write a deployment manifest for our Cluster Autoscaler
This manifest will be used by ArgoCD to automatically deploy the Cluster Autoscaler to your EKS cluster.
clusterautoscaler.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: clusterautoscaler
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  sources:
    - repoURL: https://github.com/username/repo
      targetRevision: HEAD
      ref: here
    - repoURL: https://kubernetes.github.io/autoscaler
      chart: cluster-autoscaler
      targetRevision: 9.29.1
      helm:
        values: |
          autoDiscovery:
            # cloudProviders `aws`, `gce`, `azure`, `magnum`, `clusterapi` and `oci-oke` are supported by auto-discovery at this time
            # AWS: set tags as described in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup
            clusterName: your_cluster_name
            # autoDiscovery.tags -- ASG tags to match, run through `tpl`
            tags:
              - k8s.io/cluster-autoscaler/enabled
              - k8s.io/cluster-autoscaler/{{ .Values.autoDiscovery.clusterName }}
              - kubernetes.io/cluster/{{ .Values.autoDiscovery.clusterName }}
            # autoDiscovery.roles -- Magnum node group roles to match
            roles:
              - worker
          # AWS_ACCESS_KEY_ID
          awsAccessKeyID: ""
          # AWS_REGION
          awsRegion: your_region
          # AWS_SECRET_ACCESS_KEY
          awsSecretAccessKey: ""
          # CLOUD_PROVIDER
          cloudProvider: aws
          sslCertPath: /etc/kubernetes/pki/ca.crt
          rbac:
            # If true, create & use RBAC resources
            create: true
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Please note that this clusterautoscaler.yaml file must live in the path of the repository that ArgoCD is connected to.
Once the clusterautoscaler.yaml file is in your connected GitHub repository and ArgoCD is set up on your cluster, ArgoCD will automatically deploy the Cluster Autoscaler to your EKS cluster based on the configuration in the manifest. Be sure to replace placeholder values such as your_cluster_name, your_region, the AWS credentials, and other relevant settings with your actual configuration before deploying.
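After ArgoCD has synced, you can verify the deployment with commands along these lines. Note that the label selector is an assumption based on how the cluster-autoscaler chart typically names its resources on AWS; adjust it to match your release:

```shell
# Check that the ArgoCD Application is Synced and Healthy
kubectl get application clusterautoscaler -n argocd

# Confirm the autoscaler pod is running (label selector may vary by release)
kubectl get pods -n argocd -l app.kubernetes.io/name=aws-cluster-autoscaler

# Inspect the autoscaler logs for scaling decisions
kubectl logs -n argocd -l app.kubernetes.io/name=aws-cluster-autoscaler --tail=50
```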
Conclusion
In this article, we explored the process of setting up Cluster Autoscaler on an Amazon Elastic Kubernetes Service (EKS) cluster using ArgoCD and Helm.
With the Cluster Autoscaler up and running, your EKS cluster can now dynamically adjust its node capacity in response to varying workloads, effectively eliminating the manual effort of scaling and enhancing the cluster’s overall reliability and resource utilization.
Embrace this powerful tool to build scalable and resilient Kubernetes environments effortlessly.
SCALE HAPPILY!