Karpenter — Deploy Step By Step with Terraform and Helm

Amir Lev
Israeli Tech Radar
Published in
5 min read · Oct 30, 2022


We are going to deploy Karpenter using Terraform and Helm, and I will guide you through it step by step.

What is Karpenter?

Karpenter is a node autoscaler, developed by Amazon (AWS) and deployed on the cluster itself. Since this article is hands-on and about getting to work, I’m not going to explain why or how it is better than other autoscalers, or go deep into how it works behind the scenes. Karpenter provisions nodes much faster than the cloud autoscalers and has become much more reliable. AWS recently announced this project as production-ready, so it is time to get going with this new technology.

Before we start:

We are going to use the Terraform and Helm CLIs for this one.

I have used those versions:
Terraform v1.2.2
Helm v3.9.0
Karpenter 0.16.3 (chart version)
Kubernetes version 1.21+

Note! This article does not cover the creation of a cluster or a VPC/subnet/security group.

Instance profile and IAM Roles

We will first create the roles and the policy so we don’t need to deal with them later.

We are going to create two roles:

One is for the Karpenter controller service account, so it has permission to create instances and control them from the cluster.

The second role is for the instance profile. It will be attached to the nodes so they can connect to the cluster and have the permissions needed to launch successfully.

Create a file named iam.tf and adapt the code below to your environment:
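The original gist is not reproduced here, so below is a minimal sketch of what iam.tf could look like. The names (Karpenter-controller-role, Karpenter-instance-profile), the variables (var.oidc_provider, var.oidc_provider_arn), and the exact policy actions are assumptions — adjust them to your cluster and tighten the permissions as needed.

```hcl
# iam.tf -- hypothetical sketch, adapt to your environment.

# Role for the Karpenter controller service account (IRSA).
# Assumes var.oidc_provider / var.oidc_provider_arn point at your
# cluster's OIDC provider.
data "aws_iam_policy_document" "karpenter_controller_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [var.oidc_provider_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${var.oidc_provider}:sub"
      values   = ["system:serviceaccount:karpenter:karpenter"]
    }
  }
}

resource "aws_iam_role" "karpenter_controller" {
  name               = "Karpenter-controller-role"
  assume_role_policy = data.aws_iam_policy_document.karpenter_controller_assume.json
}

# Permissions the controller needs to launch and manage instances.
resource "aws_iam_role_policy" "karpenter_controller" {
  name = "karpenter-controller-policy"
  role = aws_iam_role.karpenter_controller.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ec2:CreateLaunchTemplate", "ec2:CreateFleet", "ec2:RunInstances",
        "ec2:CreateTags", "ec2:TerminateInstances",
        "ec2:DescribeLaunchTemplates", "ec2:DescribeInstances",
        "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets",
        "ec2:DescribeInstanceTypes", "ec2:DescribeInstanceTypeOfferings",
        "ec2:DescribeAvailabilityZones",
        "ssm:GetParameter", "iam:PassRole",
      ]
      Resource = "*"
    }]
  })
}

# Role + instance profile for the nodes Karpenter launches.
resource "aws_iam_role" "karpenter_node" {
  name = "Karpenter-instance-profile-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "karpenter_node" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
  ])
  role       = aws_iam_role.karpenter_node.name
  policy_arn = each.value
}

resource "aws_iam_instance_profile" "karpenter" {
  name = "Karpenter-instance-profile"
  role = aws_iam_role.karpenter_node.name
}
```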

Nice. We can now run terraform apply and we will have two roles: one named Karpenter(controller)-role and one named Karpenter-instance-profile. You can continue to the next step.

Don’t forget to run terraform init first if you haven’t already.

Create values for Helm:

Now we need to install Karpenter with Helm. But before we do, we need to create values that match our own needs.

For that, I’m going to use Terraform’s local_file resource. Some would rather create the YAML file manually, and you can, but then we lose the concept of keeping everything in one place. This is where Terraform works to our advantage: we can feed in the output values without passing them around manually. Just remember that we are creating a values YAML file through Terraform, so the filename must end with values.yaml!

Example: /path/to/values.yaml

An important note! I do say that keeping everything in one place is important when it is handy and useful, but I don’t use helm_release here, because I do not think Terraform should handle and maintain Helm! If you want, you can add it instead of installing with the CLI.

So let’s do that. I’m going to create a new Terraform file and call it karpenter-values.tf. Adapt the code below to your needs:
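The original gist is not reproduced here, so here is a minimal sketch of what karpenter-values.tf could look like, assuming the resource names from an iam.tf like the one above (aws_iam_role.karpenter_controller, aws_iam_instance_profile.karpenter) and variables var.cluster_name / var.cluster_endpoint. The value keys shown match the karpenter chart around version 0.16 — check your chart version’s values before relying on them.

```hcl
# karpenter-values.tf -- hypothetical sketch.
# Renders the four values Karpenter needs into a values.yaml file.
resource "local_file" "karpenter_values" {
  # The filename must end with values.yaml.
  filename = "${path.module}/karpenter/values.yaml"
  content  = <<-EOT
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: ${aws_iam_role.karpenter_controller.arn}
    clusterName: ${var.cluster_name}
    clusterEndpoint: ${var.cluster_endpoint}
    aws:
      defaultInstanceProfile: ${aws_iam_instance_profile.karpenter.name}
  EOT
}
```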

Karpenter needs 4 important values to get going.

A. The Karpenter controller role-arn

B. The cluster name

C. The cluster endpoint.

D. The instance profile.

I can pass all four of those with Terraform outputs! This is the big advantage of creating the values.yaml file with Terraform. Great! Now we can run terraform apply, and it will create the file at the path we gave in the filename field.

Helm Install:

Now that we covered the values and the roles, I think it is a good time to install Karpenter on the cluster.

Make sure you have helm installed.

Let’s first add the Karpenter helm repo.

helm repo add karpenter https://charts.karpenter.sh/
helm repo update

Now that we have the Helm repo, we can run helm install so the cluster will start using our Karpenter.

# let's go to the karpenter values.yaml that we created before
# with terraform
cd /path/to/

helm upgrade --install karpenter karpenter/karpenter \
  --namespace karpenter \
  --create-namespace \
  -f /path/to/values.yaml \
  --version v0.16.3

Great! We can run kubectl logs -f <karpenter-pod> -n karpenter and see that Karpenter is installed on the cluster. But we are not finished yet! We need to add the Provisioners.

Provisioners:

Karpenter Provisioners are Kubernetes CRDs that act a bit like node groups from the AWS autoscaler… but that is not quite right; as a matter of fact, they are somewhat different.

A Provisioner tells Karpenter which subnet and security group to use, which instance types, which labels to put on the nodes, the maximum capacity (here it works through limits), and gives you much more control over the autoscaling! We can have several of them in one cluster, which is much more effective when it comes to choosing more specific settings.

So… let’s create one of those Provisioners I’m talking about. We are going to do it the same way we created the values.yaml file. Let’s create a Terraform file and name it provisioner.tf:
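The original gist is not reproduced here, so below is a minimal sketch of what provisioner.tf could look like for the v1alpha5 Provisioner CRD that ships with Karpenter around 0.16. The output path, provisioner name, labels, and the karpenter.sh/discovery selector tags are assumptions — use whatever tags actually mark your subnets and security groups.

```hcl
# provisioner.tf -- hypothetical sketch.
# Renders a Provisioner manifest to a file we can kubectl apply.
resource "local_file" "provisioner_one" {
  filename = "${path.module}/k8s/charts/karpenter/provisioner-one.yaml"
  content  = <<-EOT
    apiVersion: karpenter.sh/v1alpha5
    kind: Provisioner
    metadata:
      name: provisioner-one
    spec:
      labels:
        team: platform            # example label, put on every node
      requirements:
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["t3.medium", "t3.large", "t3.xlarge"]
      limits:
        resources:
          memory: 160Gi           # caps total capacity, see below
      provider:
        subnetSelector:
          karpenter.sh/discovery: ${var.cluster_name}
        securityGroupSelector:
          karpenter.sh/discovery: ${var.cluster_name}
      ttlSecondsAfterEmpty: 30    # scale empty nodes down after 30s
  EOT
}
```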

Now we can go to where we placed the provisioner file and apply it.

cd /k8s/charts/karpenter/
kubectl apply -f provisioner-one.yaml

So let me explain…

Provider:

First, let’s take a look at the provider. The provider requires two important parameters: one is the security group the launched nodes are going to use, and the second is the subnet. Simple as that.

Note! The security group and the subnet must be on the same network!

Labels:

The labels are the ones that are going to be on every node created by this provisioner.

Requirements:

Requirements are how we choose which pod and node go where. Think of a node as having a personality, and these are its attributes. Using well-known labels, or labels you’ve added yourself, Karpenter matches nodes and pods so they land where we told them to.

For example: let’s say Karpenter finds out it needs to create a new node. Great. It runs through the provisioner and sees the following entry in the requirements:

- key: "node.kubernetes.io/instance-type"
  operator: In
  values: ["t3.medium", "t3.large", "t3.xlarge"]

Okay, it starts to work out what is most compatible with the situation. It also sees that the pod that needs a new node has the following lines:

Note! All of this is just an example for demonstration!

nodeSelector:
  "node.kubernetes.io/instance-type": t3.large

Karpenter will do all the calculations for the current situation and will launch a t3.large.

If we do this instead:

nodeSelector:
  "node.kubernetes.io/instance-type": m5.large

Karpenter will fail to provision a node, because m5.large is not in the provisioner’s allowed values. This is one example of how requirements determine the scaling.

limits:

Limits determine the maximum capacity Karpenter will generate. Let’s say every node has 8Gi of memory, and we give the provisioner a memory limit of 160Gi. Do the math: 160/8 = 20. We will have at most 20 nodes (if all of them are 8Gi, of course).
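As a minimal sketch, the limits block matching this arithmetic might look like the following (the cpu line is an extra assumption to show that other resources can be capped the same way):

```yaml
limits:
  resources:
    memory: 160Gi   # with ~8Gi nodes, caps this provisioner at ~20 nodes
    cpu: "80"       # optional: cap the total vCPUs as well
```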

Summary

Karpenter is still in its infancy, and yet(!) it already delivers great performance. Even if you’re not looking to switch your autoscaler to Karpenter anytime soon, I still think it’s a great chance to add another tool to your DevOps inventory.

Thanks for reading. :) See you in the next one!


Amir Lev
Israeli Tech Radar

A computer freak who works as a DevOps engineer and likes to get his hands dirty with new technologies.