EKS Cluster: A Deployment Walkthrough

Žygimantas Čijunskis
Sep 15, 2023

To create the necessary EKS-related resources, we will rely on the widely used terraform-aws-modules EKS module. As a result, the process will be pretty straightforward; we just need to feed the correct configuration parameters into the module itself.

In case you find yourself uncertain, all Terraform configuration files related to this article are available on my GitHub account:

Before delving further, I assume that you have already configured the Terraform CLI and set up all the necessary AWS VPC-related resources.

The Terraform version I was using:

$ terraform -v
Terraform v1.3.6
on darwin_arm64

We will be building the required Terraform files one by one. Certain details will be explained within the configuration comments, while others will be covered in the article. This is the directory structure that we will have at the end of this article:

.
└── terraform/
    ├── eks.tf
    ├── iam_roles.tf
    ├── kms.tf
    ├── locals.tf
    ├── main.tf
    ├── providers.tf
    ├── variables.tf
    └── version.tf

Apply the configuration only after all the files in the directory structure outlined above have been defined.

Before proceeding with the creation of resources, we need to define the required providers in a version.tf file.

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.47"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.10"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0"
    }
  }
}

Now we will define some variables in the variables.tf file. These list variables will be used when building the aws-auth ConfigMap.

The aws-auth ConfigMap is used to manage the mapping between IAM roles/users and Kubernetes RBAC roles within an EKS cluster.

variable "map_roles" {
description = "Additional IAM roles to add to the aws-auth configmap."
type = list(object({
rolearn = string
username = string
groups = list(string)
}))

default = []
}

variable "map_users" {
description = "Additional IAM users to add to the aws-auth configmap."
type = list(object({
userarn = string
username = string
groups = list(string)
}))

default = []
}
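
For illustration, here is how these variables could be populated, for example in a terraform.tfvars file. The role and user ARNs below are placeholders, not resources created in this article:

# terraform.tfvars (a sketch; the ARNs are placeholders)
map_roles = [
  {
    rolearn  = "arn:aws:iam::111122223333:role/eks-admin" # placeholder role ARN
    username = "eks-admin"
    groups   = ["system:masters"]
  }
]

map_users = [
  {
    userarn  = "arn:aws:iam::111122223333:user/jane.doe" # placeholder user ARN
    username = "jane.doe"
    groups   = ["system:masters"]
  }
]

Anything you add here ends up in the mapRoles/mapUsers sections of the ConfigMap shown below.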

Note: Once all the resources have been created, the aws-auth ConfigMap will look similar to this:

$ kubectl describe configmap aws-auth -n kube-system
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>

Data
====
mapRoles:
----
- "groups":
- "system:bootstrappers"
- "system:nodes"
"rolearn": "arn:aws:iam::800000000:role/initial-eks-node-group-2023081"
"username": "system:node:{{EC2PrivateDNSName}}"

mapUsers:
----
[]

mapAccounts:
----
[]


BinaryData
====

Events: <none>

Note: Here is the list of policies attached to the role “initial-eks-node-group-20230823125137625400000005”, which is referenced in the aws-auth ConfigMap: AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy.
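
If you want to double-check which policies ended up attached to the node group role, one way is the AWS CLI (substitute the role name that was generated in your account):

$ aws iam list-attached-role-policies \
    --role-name initial-eks-node-group-20230823125137625400000005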

Given that our EKS module will be creating some resources inside the Kubernetes cluster (such as the aws-auth ConfigMap), we need to define the authentication data that the Kubernetes provider will use when creating resources through the Kubernetes API.

Add this block to the providers.tf file:

provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.this.token
}
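
A token obtained this way is short-lived. If you run into authentication errors during long applies, a commonly used alternative (a sketch, assuming the AWS CLI is installed and on your PATH) is to let the provider fetch a fresh token through an exec plugin instead of the token argument:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  # Fetch a fresh authentication token on every Terraform run
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  }
}

Use either the token argument or the exec block, not both.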

After dealing with the providers.tf file, we will create an aws_kms_key resource, which gives us the ability to encrypt the secrets stored by the EKS cluster.

Add this resource to the kms.tf file:

resource "aws_kms_key" "eks_kms_key" {
description = "KMS Key generated to encrypt cluster secrets"
deletion_window_in_days = 10
enable_key_rotation = true
}

Note: “Enabling secrets encryption allows you to use AWS Key Management Service (KMS) keys to provide envelope encryption of Kubernetes secrets stored in etcd for your cluster.”

Note: You can also let the EKS module create the KMS key itself, without defining a resource outside of the module scope. Creating the KMS resource separately, however, lets you specify any custom parameters you want, e.g. a specific key spec.
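
For reference, a rough sketch of the module-managed variant would replace the reference to our own key with arguments along these lines (the names are taken from the 19.x module inputs; verify them against the module documentation before relying on them):

  # Inside the module "eks_cluster" block, instead of referencing aws_kms_key.eks_kms_key:
  create_kms_key                  = true
  kms_key_description             = "KMS Key generated to encrypt cluster secrets"
  kms_key_deletion_window_in_days = 10
  enable_kms_key_rotation         = true

  cluster_encryption_config = {
    resources = ["secrets"]
  }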

Next, we will declare local variables: the merged data for the aws-auth ConfigMap, and an autogenerated kubeconfig that can be used to update that ConfigMap.

Add these locals to the locals.tf file:

locals {

  base_auth_configmap = yamldecode(module.eks_cluster.aws_auth_configmap_yaml)

  updated_auth_configmap_data = {
    data = {
      mapRoles = yamlencode(
        distinct(concat(
          yamldecode(local.base_auth_configmap.data.mapRoles), var.map_roles,
        ))
      )
      mapUsers = yamlencode(var.map_users)
    }
  }

  // We need to autogenerate a valid kubeconfig to be used by the null_resource to update the aws-auth configmap
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "terraform"
    clusters = [{
      name = module.eks_cluster.cluster_name
      cluster = {
        certificate-authority-data = module.eks_cluster.cluster_certificate_authority_data
        server                     = module.eks_cluster.cluster_endpoint
      }
    }]

    contexts = [{
      name = "terraform"
      context = {
        cluster = module.eks_cluster.cluster_name
        user    = "terraform"
      }
    }]
    users = [{
      name = "terraform"
      user = {
        token = data.aws_eks_cluster_auth.this.token
      }
    }]
  })

}
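
The null provider we required in version.tf and the kubeconfig generated above are typically wired together with a null_resource that shells out to kubectl. In this walkthrough the module's manage_aws_auth_configmap option already manages the ConfigMap for us, so this is not strictly required; still, as an illustrative sketch (assuming kubectl and bash are available locally), here is how the generated kubeconfig could be used to run kubectl against the cluster, for example to inspect the aws-auth ConfigMap:

resource "null_resource" "verify_cluster_access" {
  # Re-run whenever the generated kubeconfig changes
  triggers = {
    kubeconfig = base64encode(local.kubeconfig)
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = self.triggers.kubeconfig
    }
    # Decode the kubeconfig on the fly and read the aws-auth configmap
    command = "kubectl get configmap aws-auth -n kube-system --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
  }
}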

In main.tf we configure the AWS provider. Optionally, if you prefer to store your state file in an S3 bucket, you can also include the backend block:

provider "aws" {
region = "eu-central-1"
}

terraform {
backend "s3" {
bucket = "terraform-state-eu-central-1"
key = "aws/opsbridge-cluster/terraform.tfstate"
region = "eu-central-1"
}
}

Lastly, we will add the EKS module invocation block with the required parameters in the eks.tf file. You’ll notice that the various sections are explained in the configuration comments themselves.

// Instead of passing the AWS Region as a parameter, we infer it from the provider configuration
data "aws_region" "current" {}

data "aws_availability_zones" "available" {}

data "aws_eks_cluster" "cluster" {
  name       = module.eks_cluster.cluster_name
  depends_on = [module.eks_cluster.cluster_name]
}

data "aws_eks_cluster_auth" "this" {
  name       = module.eks_cluster.cluster_name
  depends_on = [module.eks_cluster.cluster_name]
}

module "eks_cluster" {
  source = "terraform-aws-modules/eks/aws"
  # The latest version can be found at https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest
  version = "19.16.0"

  # Name your EKS cluster whatever you like
  cluster_name = "opsbridge-cluster"
  # The latest Kubernetes version can be found at https://kubernetes.io/releases/
  cluster_version = "1.27"

  # Enable this if you want the control plane API to be reachable from a public address rather than only a private one
  cluster_endpoint_public_access = true

  # A list of the desired control plane logs to enable. https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
  cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  # Replace with the VPC ID into which the EKS control plane and nodes will be provisioned
  vpc_id = "<vpc_id>"
  # subnet_ids - A list of subnet IDs where the nodes/node groups will be provisioned.
  subnet_ids = ["<subnet1>", "<subnet2>", "<subnet3>"]
  # control_plane_subnet_ids - A list of subnet IDs where the EKS cluster control plane (ENIs) will be provisioned.
  # If control_plane_subnet_ids is not provided, the EKS cluster control plane (ENIs) will be provisioned in the subnets given in subnet_ids.
  # control_plane_subnet_ids = ["<subnet1>", "<subnet2>", "<subnet3>"]

  # aws-auth manages a configmap which maps IAM users and roles
  manage_aws_auth_configmap = true
  # Create the aws-auth configmap. Only needed if you are using self-managed node groups
  create_aws_auth_configmap = false

  # List of role maps to add to the aws-auth configmap
  aws_auth_roles = var.map_roles
  # List of user maps to add to the aws-auth configmap
  aws_auth_users = var.map_users

  # Enable encryption of EKS secrets using the customer-managed key created above
  cluster_encryption_config = {
    provider_key_arn = aws_kms_key.eks_kms_key.arn
    resources        = ["secrets"]
  }

  # Additional EKS-provided addons https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
  cluster_addons = {
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  # Enable IRSA to create an OpenID Connect trust between our cluster and IAM, in order to map AWS roles to Kubernetes ServiceAccounts
  enable_irsa = true

  # Configuration of the managed node group that will be provisioned. These nodes will always exist,
  # even if Karpenter autoscaling is configured later.
  eks_managed_node_groups = {
    initial = {
      instance_types = ["t3.large"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }
  }

  node_security_group_additional_rules = {

    node_to_node_ig = {
      description = "Node to node ingress traffic"
      from_port   = 1
      to_port     = 65535
      protocol    = "all"
      type        = "ingress"
      self        = true
    }

  }

}
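
The directory structure above also lists an iam_roles.tf file. Its contents depend entirely on who should get access to your cluster, but as a minimal sketch (the eks-admin role name and its trust policy are assumptions for illustration), it could hold a role that you then pass in through var.map_roles:

data "aws_caller_identity" "current" {}

resource "aws_iam_role" "eks_admin" {
  name = "eks-admin"

  # Allow IAM principals in this AWS account to assume the role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
    }]
  })
}

Mapping such a role into the cluster is then just a matter of adding its ARN to map_roles, as in the terraform.tfvars example earlier.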

With these steps completed, you can now initialize the working directory and apply the configuration.
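
Assuming you run Terraform from the terraform/ directory, the sequence looks like this (terraform plan is optional, but worth reviewing before applying):

$ terraform init
$ terraform plan
$ terraform apply

During terraform apply you should observe the resources being created successfully: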

<...Truncated...>
module.eks_cluster.module.eks_managed_node_group["initial"].aws_eks_node_group.this[0]: Still creating... [1m50s elapsed]
module.eks_cluster.module.eks_managed_node_group["initial"].aws_eks_node_group.this[0]: Still creating... [2m0s elapsed]
module.eks_cluster.module.eks_managed_node_group["initial"].aws_eks_node_group.this[0]: Creation complete after 2m9s [id=opsbridge-cluster:initial-20230821e]
module.eks_cluster.aws_eks_addon.this["vpc-cni"]: Creating...
module.eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Creating...
module.eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Creation complete after 1s [id=kube-system/aws-auth]
module.eks_cluster.aws_eks_addon.this["vpc-cni"]: Still creating... [10s elapsed]
module.eks_cluster.aws_eks_addon.this["vpc-cni"]: Still creating... [20s elapsed]
module.eks_cluster.aws_eks_addon.this["vpc-cni"]: Still creating... [30s elapsed]
module.eks_cluster.aws_eks_addon.this["vpc-cni"]: Creation complete after 35s [id=opsbridge-cluster:vpc-cni]


Apply complete! Resources: 35 added, 0 changed, 0 destroyed.

You can verify the status of your cluster’s activity either through the AWS Console or by using the eksctl command. When using eksctl, you should see that the cluster you've created is now marked as "Active."

$ eksctl get cluster opsbridge-cluster
NAME VERSION STATUS CREATED VPC SUBNETS SECURITYGROUPS PROVIDER
opsbridge-cluster 1.27 ACTIVE 2023-08-26T08:51:50Z vpc-1 subnet-1,subnet-2,subnet-3 sg-1 EKS

At this point, you can generate the kubeconfig on your local shell using the AWS CLI:

aws eks update-kubeconfig --region eu-central-1 --name opsbridge-cluster

Once you’ve configured the kubeconfig, you can use the kubectl commands to examine the resources within your cluster:

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-31-91-129.eu-central-1.compute.internal Ready <none> 9m14s v1.27.3-eks-a5565ad

$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/aws-node-gmw5b 1/1 Running 0 10m
pod/coredns-7bc655f56f-9mhmw 1/1 Running 0 17m
pod/coredns-7bc655f56f-htg4x 1/1 Running 0 17m
pod/kube-proxy-28hd7 1/1 Running 0 11m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 17m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/aws-node 1 1 1 1 1 <none> 17m
daemonset.apps/kube-proxy 1 1 1 1 1 <none> 17m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2/2 2 2 17m

NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-7bc655f56f 2 2 2 17m

That’s it! You’re now all set to write your own manifests and create the resources you need on the cluster!

Thank you! 🙏
