Unlocking EKS Cost Efficiency with Karpenter

Ahmd Belhoula · Published in Edixos
5 min read · Oct 8, 2023

In the dynamic landscape of cloud computing, where agility and scalability reign supreme, managing costs has emerged as a crucial undertaking for enterprises of all sizes. This is especially true for non-production clusters, startups, and small businesses that often operate with limited resources. In the realm of Kubernetes orchestration, Amazon’s Elastic Kubernetes Service (EKS) provides an excellent platform for businesses to deploy, manage, and scale containerized applications seamlessly. However, as these organizations set their sights on technological innovation, one aspect demands unyielding attention: the cost incurred by running these Kubernetes clusters.

What is Karpenter?

Karpenter is an open-source node provisioning tool that deploys the right Kubernetes infrastructure, with the right nodes, at the right time. It significantly improves efficiency and reduces the cost of running workloads on a cluster by automatically provisioning new nodes in response to unschedulable pods.

How is Karpenter different from Cluster Autoscaler?

  • Upscaling and Downscaling: Karpenter offers granular upscaling and downscaling control, adjusting resources based on workload requirements to optimize efficiency and reduce costs. In contrast, Cluster Autoscaler primarily focuses on node-level upscaling and may be less effective in downscaling specific resources.
  • Group-less node provisioning: Karpenter manages each instance directly, without additional orchestration mechanisms such as node groups, whereas Cluster Autoscaler works through node groups.
  • Right-sizing: In the case of Karpenter, we don’t have to worry about right-sizing the compute resources beforehand. It gives us the flexibility to define multiple resource types which minimizes the operational overhead and optimizes the cost. Cluster Autoscaler requires you to define compute resources beforehand.

How does Karpenter work?

Because Karpenter is a Kubernetes controller, it observes events within an EKS cluster and then sends commands to the cloud provider. When new pods are detected, Karpenter evaluates their scheduling constraints, provisions nodes that satisfy those constraints, schedules the pods onto the newly created nodes, and removes nodes when they are no longer needed, minimizing scheduling latencies and infrastructure costs.
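
To make this concrete, here is a minimal, illustrative sketch (the Deployment name inflate, the pause image, and the request sizes are placeholders, not taken from any repository in this article): scaling this Deployment beyond what the existing nodes can absorb leaves pods Pending, and Karpenter reacts by launching a node sized for the outstanding requests.

# Illustrative workload (names and sizes are placeholders): scaling it beyond
# the cluster's current capacity leaves pods Pending, which Karpenter detects
# and resolves by provisioning a suitably sized node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0   # scale up later, e.g. kubectl scale deployment inflate --replicas=10
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
      - name: inflate
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: "1"   # explicit requests are what Karpenter uses to size new nodes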

The key concept is a custom resource named Provisioner, which Karpenter uses to define the provisioning configuration. Provisioners contain constraints that affect the nodes that can be provisioned and the attributes of those nodes (for example, timers for removing nodes).

apiVersion: v1
items:
- apiVersion: karpenter.sh/v1alpha5
  kind: Provisioner
  metadata:
    generation: 1
    name: default
  spec:
    consolidation:
      enabled: true
    kubeletConfiguration:
      containerRuntime: containerd
      maxPods: 110
    limits:
      resources:
        cpu: "8"
        memory: 10Gi
    providerRef:
      name: default
    requirements:
    - key: karpenter.k8s.aws/instance-category
      operator: In
      values:
      - t
      - m
    - key: karpenter.k8s.aws/instance-cpu
      operator: In
      values:
      - "1"
      - "2"
      - "4"
    - key: karpenter.k8s.aws/instance-hypervisor
      operator: In
      values:
      - nitro
    - key: topology.kubernetes.io/zone
      operator: In
      values:
      - us-east-1a
      - us-east-1b
      - us-east-1c
      - us-east-1d
    - key: kubernetes.io/arch
      operator: In
      values:
      - amd64
    - key: karpenter.sh/capacity-type
      operator: In
      values:
      - spot
    - key: kubernetes.io/os
      operator: In
      values:
      - linux
    ttlSecondsUntilExpired: 604800
kind: List
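
The providerRef: name: default above points to the AWS-specific half of the configuration, an AWSNodeTemplate, which tells Karpenter which subnets and security groups to launch instances into. A minimal sketch, assuming the relevant subnets and security groups carry a karpenter.sh/discovery tag (the cluster name my-eks-cluster is a placeholder):

apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default                          # matched by providerRef.name in the Provisioner
spec:
  subnetSelector:
    karpenter.sh/discovery: my-eks-cluster        # placeholder cluster name
  securityGroupSelector:
    karpenter.sh/discovery: my-eks-cluster        # placeholder cluster name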

The Provisioner can be set to do things like:

  • Define taints to limit the pods that can run on nodes Karpenter creates (see the sketch after this list).
  • Limit the creation of nodes to certain zones, instance types, operating systems, and CPU architectures (e.g., amd64).
  • Set defaults for node expiration (timers).
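
As referenced in the list above, here is a hedged sketch of what such a Provisioner might look like; the name batch, the taint key, and the TTL values are illustrative, not taken from the example earlier in this article.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: batch                            # hypothetical dedicated pool
spec:
  taints:
  - key: workload-type                   # only pods tolerating this taint land here
    value: batch
    effect: NoSchedule
  requirements:
  - key: topology.kubernetes.io/zone
    operator: In
    values: ["us-east-1a", "us-east-1b"]
  - key: kubernetes.io/arch
    operator: In
    values: ["amd64"]
  providerRef:
    name: default
  ttlSecondsAfterEmpty: 300              # remove a node 5 minutes after it becomes empty
  ttlSecondsUntilExpired: 2592000        # recycle nodes after 30 days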

Key Features & Benefits of Karpenter

  • Consolidation: When enabled, Karpenter will actively reduce cluster cost by identifying when a node can be removed as its workload can be handled on other cluster nodes and when a node can be replaced with a cheaper variant due to a change in workload.
  • Cost Savings: Through proper node provisioning and decommissioning, Karpenter helps lower infrastructure costs while maintaining application availability.
  • Customization: Karpenter allows for fine-grained control over autoscaling policies, enabling you to tailor node scaling behavior to your specific application requirements.
  • Rapid Node Launch: Karpenter launches nodes quickly and responds efficiently to dynamic resource requests.
  • Efficient Resource Usage: It minimizes the wastage of compute resources by automatically removing unnecessary nodes when they are no longer needed.

Deploying an EKS cluster + Karpenter using Terraform

Looking for an easy way to deploy Karpenter for Amazon EKS? We’ve got a GitHub repository ready to streamline the process:

Access the GitHub Repository: https://github.com/abelhoula/eks-karpenter/tree/main

In this repository, we’ve automated the setup of networking infrastructure, EKS clusters, and Karpenter. For step-by-step instructions and customization details, please check out the README within the repository itself.

├── README.md
├── backend.tf
├── main.tf
├── modules
│   ├── karpenter
│   │   ├── autoscaler.tf
│   │   ├── outputs.tf
│   │   ├── provider.tf
│   │   ├── terraform.md
│   │   └── variables.tf
│   └── network
│       ├── outputs.tf
│       ├── subnets.tf
│       ├── terraform.md
│       └── variables.tf
├── providers.tf
├── terraform-dev.tfvars
└── variables.tf

Cost Reduction — Using Spot and On-Demand Flexibility

The real-world problem that Karpenter can help solve is managing workload fluctuations in a cost-effective manner. Traditionally, worker nodes had to be scaled manually to handle increased traffic, which is time-consuming and costly. Karpenter's efficient response to dynamic resource requests lets users absorb traffic spikes without downtime. To reduce costs, Spot instances can be used with On-Demand fallbacks. Additionally, Karpenter can provision GPU nodes, which can be combined with GPU time-slicing to run high-performance computing workloads. These features help users optimize their resources and save costs while ensuring their workloads run efficiently.
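
A short, illustrative fragment of how this is usually expressed in the Provisioner requirements: when both capacity types are allowed, Karpenter generally favours Spot capacity and falls back to On-Demand when Spot is unavailable (the full Provisioner shown earlier pins Spot only).

# Illustrative Provisioner fragment: with both values listed, Karpenter
# favours Spot capacity when available and falls back to On-Demand otherwise.
requirements:
- key: karpenter.sh/capacity-type
  operator: In
  values:
  - spot
  - on-demand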

Limitations of Karpenter

  • Currently, it’s tied to Amazon Web Services only.
  • Karpenter’s controller pods still need to run on capacity that Karpenter itself does not manage, typically a small managed node group; with more recent releases, they can also run on Fargate.

Some Learnings

  • Do not delete a Provisioner casually, as doing so deletes all worker nodes it provisioned. If you want to keep a worker node provisioned by a Provisioner, remove the karpenter.sh/provisioner-name tag from the EC2 instance; once the tag is removed, the node is no longer managed by Karpenter, and you will need to drain and delete it manually.
  • It is always better to set resource limits on all Provisioners so that runaway pods or batch jobs cannot generate unexpected bills.
  • Before enabling the consolidation feature, make sure appropriate CPU/memory requests and limits are assigned to all pods. If they are not set correctly, you will see out-of-memory errors, readiness and liveness probe timeouts, and pod crashes or latency issues. This happens because Karpenter sizes nodes based on requests and limits: a pod with none defined can consume most of a worker node’s resources, so when Karpenter schedules a new pod onto that node, the new pod will not get the resources it requested.
  • Each EC2 instance type has a maximum number of Pods it can support, determined by the Elastic Network Interfaces (ENIs) available per instance type. To optimize resource usage, avoid waste, and avoid scaling limits imposed by pod IP address constraints, it is highly recommended to enable the ENABLE_PREFIX_DELEGATION feature on the VPC CNI (see the snippet below). This allows for better pod density, ensuring that your clusters can efficiently accommodate a larger number of Pods without running into IP address limitations.
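
As mentioned in the last point, here is a minimal sketch of enabling prefix delegation; depending on how the cluster is managed, this is typically set on the aws-node DaemonSet directly or through the VPC CNI managed add-on configuration.

# One common way to enable prefix delegation is to set the variable on the
# aws-node DaemonSet, for example:
#   kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
# The equivalent fragment of the aws-node container spec looks like this:
env:
- name: ENABLE_PREFIX_DELEGATION
  value: "true"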
