Setting up AWS EKS cluster entirely with Terraform

AV · Apr 9 · 5 min read

This article will explain how to create an EKS cluster entirely with Terraform. No other tools required. That’s right: no kubectl. This is not a comprehensive step-by-step guide that you can simply copy-paste into your work files and be done with it, but it will link to such guides.


Kubernetes is a very popular orchestration solution these days. Amazon Web Services (AWS) has been a super popular cloud services platform for the last few years. And Terraform is an extremely popular infrastructure provisioning tool among DevOps engineers and the like. Put these three together and you have an army of DevOps engineers looking to provision managed Kubernetes clusters (simply put, EKS) on AWS with Terraform. No surprise the topic quickly became well covered on the internet.

When I started looking into the subject, I needed a way to finish the entire setup with Terraform, for reasons I will explain later. And sadly, I found that the majority of those articles do not stay strictly within Terraform and resort to kubectl for the finishing touches. This post is my attempt to add that missing piece of the puzzle and hopefully assist someone else in their quest for a better solution.

What does it take to bring up an EKS cluster?

Roughly speaking, creating an EKS cluster can be broken down into these parts:

  • Define networking: VPC, subnets and security groups
  • Create an IAM role for the Kubernetes control plane to access resources
  • Create a cluster that uses the previously defined networking and IAM role (see the sketch after this list)
  • Define a pool of worker nodes: create an Auto Scaling group (ASG) with a launch configuration and provision nodes that attempt to join the cluster
  • Configure the cluster to allow worker nodes to join
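
To make the cluster item above concrete, the resource itself is essentially a single block that ties the IAM role and the networking together. This is only a sketch; the role, subnet and security group names are placeholders for whatever you define in your own configuration:

resource "aws_eks_cluster" "my_cluster" {
  name     = "my-cluster"
  role_arn = "${aws_iam_role.eks_cluster.arn}" # IAM role for the control plane

  vpc_config {
    # Subnets and security groups from the networking step
    subnet_ids         = ["${aws_subnet.eks.*.id}"]
    security_group_ids = ["${aws_security_group.eks.id}"]
  }
}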

These steps are typically presented as sufficient to complete the EKS setup, and usually they are. In my case, I had to add an extra step: adding an IAM role to access the cluster from kubectl, as described here.

The reason: our Terraform scripts bring up tons of infrastructure that is not limited to EKS alone. So, when we run terraform apply, it assumes a role that has a wide range of privileges. We don’t want to assume this role whenever we use kubectl to deploy services on EKS.

Another reason for a separate role is a bit more subtle: the role we use for Terraform has an sts:ExternalId condition set for extra security. Unfortunately, the current versions of the tools (aws-iam-authenticator and aws eks) do not support specifying an “external_id” attribute, so there is no way to generate a proper kubeconfig that would assume this kind of role. All of this forced me to create another role with limited privileges and register it with Kubernetes for the sake of being able to access the cluster from kubectl.
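
For reference, this is roughly what the role assumption looks like on the Terraform side. It is only a sketch; the role name, external id and region are placeholders:

provider "aws" {
  region = "us-east-1"

  assume_role {
    # Terraform itself can pass an external id; aws-iam-authenticator and aws eks cannot
    role_arn    = "arn:aws:iam::<YOUR ACCOUNT #>:role/<YOUR TERRAFORM ROLE>"
    external_id = "<YOUR EXTERNAL ID>"
  }
}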

It resulted in a couple of extra steps that you may want to have as well:

  • Create an IAM role with EKS access permissions that can be assumed by your account
  • Register the role in the Kubernetes cluster

I am not going to cover how to set up EKS with Terraform, because it is already nicely done by the HashiCorp folks themselves here. I advise you to read that how-to. It is very easy to follow, and you will end up with a fully set up EKS cluster. I do suggest, however, replacing their “HEREDOC” definitions of assume role policies with a proper aws_iam_policy_document data source (it’s just nicer this way).
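
To illustrate that last suggestion, the cluster role from the guide could look like this with a data source instead of a HEREDOC. This is a sketch; the resource names are mine:

data "aws_iam_policy_document" "eks_cluster_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      # The EKS control plane is the one assuming this role
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "eks_cluster" {
  name               = "eks-cluster"
  assume_role_policy = "${data.aws_iam_policy_document.eks_cluster_assume_role.json}"
}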

Creating a role in Terraform is trivial. To allow your account to assume the role, define a policy document that says so:

# The data source name is illustrative; the role defined below refers to it
data "aws_iam_policy_document" "eks_kubectl_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::<YOUR ACCOUNT #>:root"]
    }
  }
}

Then define a “role” resource and attach required policies:
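
The role itself is a short block. A sketch, assuming the assume-role policy document defined above (the role name is just an example):

resource "aws_iam_role" "eks_kubectl_role" {
  name               = "eks-kubectl-access" # example name
  assume_role_policy = "${data.aws_iam_policy_document.eks_kubectl_assume_role.json}"
}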

resource "aws_iam_role_policy_attachment" "eks_kubectl-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.eks_kubectl_role.name}"
}
resource "aws_iam_role_policy_attachment" "eks_kubectl-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.eks_kubectl_role.name}"
}
resource "aws_iam_role_policy_attachment" "eks_kubectl-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = "${aws_iam_role.eks_kubectl_role.name}"
}

And that concludes the role setup.

Now we have reached the part of the EKS setup process that is usually handled by summoning kubectl and using it to inject the ARNs of the created roles into the aws-auth ConfigMap, which is responsible for binding Kubernetes authentication together with AWS IAM.

The official AWS documentation says you should modify the ConfigMap and apply it using kubectl apply -f aws-auth-cm.yaml, which makes sense. Even the official Terraform guide says you should use kubectl apply, which probably makes sense too, but we can do better.

If you are like me and don’t want to leave the comfortable world of Terraform to finish your EKS setup (which you are otherwise forced to do in order to register at least the IAM role of your worker nodes, the EC2 instances, with Kubernetes), then follow along.

How do I stay inside?

The idea is simple: we will create the aforementioned ConfigMap from within Terraform. For this, we will resort to the kubernetes provider. The documentation can be found here.

We will create a kubernetes_config_map resource using the kubernetes Terraform provider, with a bit of help from the aws_eks_cluster_auth data source to let our provider authenticate with the EKS cluster.

This is what the Terraform documentation says about aws_eks_cluster_auth:

Get an authentication token to communicate with an EKS cluster.

Uses IAM credentials from the AWS provider to generate a temporary token that is compatible with AWS IAM Authenticator authentication. This can be used to authenticate to an EKS cluster or to a cluster that has the AWS IAM Authenticator server configured.

First, let’s define our cluster auth data source. It is important that the name attribute matches your cluster name:
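
Something along these lines, assuming your cluster resource is called aws_eks_cluster.my_cluster:

data "aws_eks_cluster_auth" "my_cluster" {
  # Must match the name of the EKS cluster
  name = "${aws_eks_cluster.my_cluster.name}"
}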

Now we can use it when defining our kubernetes provider configuration:
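
A sketch of the provider block, again assuming the aws_eks_cluster.my_cluster and data.aws_eks_cluster_auth.my_cluster names from above (load_config_file is a 1.x kubernetes provider setting that stops it from picking up your local kubeconfig):

provider "kubernetes" {
  host                   = "${aws_eks_cluster.my_cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.my_cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.my_cluster.token}"
  load_config_file       = false
}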

It will supply authentication tokens based on the approach defined in your AWS provider config. In my case it is an assumed role with an external id, but it can be anything that works for you.

And the last step will create the aws-auth ConfigMap in our EKS cluster.

# The ConfigMap must be called aws-auth and live in the kube-system namespace
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data {
    mapRoles = <<YAML
- rolearn: ${<worker node ARN>}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
- rolearn: ${<kubectl access role ARN>}
  username: kubectl-access-user
  groups:
    - system:masters
YAML
  }
}

And we are done!

If you follow the Terraform guide, you just need to add in these last steps and skip the part that advises you to run kubectl apply. And, as promised at the beginning of this article, you will get a fully set up and running EKS cluster strictly from Terraform.

Now it is your call whether to follow this approach or stick to kubectl apply. For me, there is no choice until the tools support the external_id attribute.

Kudos

The idea of how to get the entire EKS setup done in Terraform is not mine. I found it mentioned in this comment in one of the feature requests for the Kubernetes Terraform provider.
