Hands-on: Deploying EKS Cluster using Terraform in AWS

Keyurvaidya
6 min read · Jul 8, 2023


Skill Level — Intermediate

Prerequisites: a basic understanding of AWS and Terraform, with the AWS CLI and Terraform configured on your machine.

Terraform and EKS

In this article, we will create an EKS cluster using Terraform. You can download the source code from my Git repo — EKS Terraform Code.

What is Terraform?

  • Terraform is an Infrastructure as Code (IaC) tool used to define and provision infrastructure through declarative configuration files.

We are going to create a VPC, subnets, an Internet Gateway, a NAT Gateway, route tables, IAM roles, and an OIDC provider.

So, let’s get our hands dirty and begin.
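Note that the snippets below assume a Terraform provider configuration already exists. A minimal sketch (the region and version constraint here are assumptions — adjust them to your setup):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # the aws_eip argument "vpc = true" used below was removed in provider v5
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```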

  1. Creating a VPC
  • A VPC is a logically isolated virtual network that we define.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  # instance_tenancy = "default"

  tags = {
    Name = "main"
  }
}

2. Creating Internet Gateway

  • To provide internet access to services, we create an Internet Gateway in the VPC.

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}

3. Creating Subnets

  • EKS requires subnets in at least two Availability Zones, so we create two public and two private subnets.

# First private subnet

resource "aws_subnet" "private-us-east-1a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.0.0/19"
  availability_zone = "us-east-1a"

  tags = {
    Name                              = "private-us-east-1a"
    "kubernetes.io/role/internal-elb" = "1" # Required by Kubernetes to discover subnets where private load balancers will be created
    "kubernetes.io/cluster/demo"      = "owned"
  }
}

# Second private subnet, in a different AZ

resource "aws_subnet" "private-us-east-1b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.32.0/19" # The first subnet's /19 block ends at 10.0.31.255, so the next block starts at 10.0.32.0
  availability_zone = "us-east-1b"

  tags = {
    Name                              = "private-us-east-1b"
    "kubernetes.io/role/internal-elb" = "1"
    "kubernetes.io/cluster/demo"      = "owned"
  }
}

resource "aws_subnet" "public-us-east-1a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.64.0/19"
  availability_zone       = "us-east-1a" # Same AZ as the first private subnet
  map_public_ip_on_launch = true         # Only needed if you require a public node group

  tags = {
    Name                         = "public-us-east-1a"
    "kubernetes.io/role/elb"     = "1"
    "kubernetes.io/cluster/demo" = "owned"
  }
}

resource "aws_subnet" "public-us-east-1b" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.96.0/19"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    Name                         = "public-us-east-1b"
    "kubernetes.io/role/elb"     = "1"
    "kubernetes.io/cluster/demo" = "owned"
  }
}
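A quick way to sanity-check this CIDR carving: the four /19 subnets above are exactly the first four /19 blocks of the 10.0.0.0/16 VPC range. A small Python sketch using the standard ipaddress module:

```python
import ipaddress

# Split the VPC CIDR into /19 blocks, matching the four subnets above
vpc = ipaddress.ip_network("10.0.0.0/16")
for net in list(vpc.subnets(new_prefix=19))[:4]:
    print(net, "spans", net[0], "to", net[-1])
```

This prints 10.0.0.0/19, 10.0.32.0/19, 10.0.64.0/19, and 10.0.96.0/19 — the same non-overlapping blocks used in the subnet resources.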

4. Creating NAT Gateway

  • A NAT Gateway sits in a public subnet and allows instances in the private subnets to initiate outbound connections to the internet.

# For the NAT Gateway, we need to allocate a public Elastic IP address

resource "aws_eip" "nat" {
  vpc = true # Note: AWS provider v5+ replaces this argument with domain = "vpc"

  tags = {
    Name = "nat"
  }
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public-us-east-1a.id # Must be a public subnet with the IGW as its default route

  tags = {
    Name = "nat"
  }

  depends_on = [aws_internet_gateway.igw]
}

5. Creating Route Tables

  • Route tables control how traffic from each subnet is routed.

# Private route table with a default route to the NAT Gateway

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }

  tags = {
    Name = "private"
  }
}

# Public route table with a default route to the Internet Gateway

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "public"
  }
}

resource "aws_route_table_association" "private-us-east-1a" {
  subnet_id      = aws_subnet.private-us-east-1a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private-us-east-1b" {
  subnet_id      = aws_subnet.private-us-east-1b.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "public-us-east-1a" {
  subnet_id      = aws_subnet.public-us-east-1a.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public-us-east-1b" {
  subnet_id      = aws_subnet.public-us-east-1b.id
  route_table_id = aws_route_table.public.id
}

6. Creating EKS Cluster

  • EKS Configuration

# First, create an IAM role that EKS can assume, with the Amazon EKS cluster policy
resource "aws_iam_role" "demo" {
  name = "eks-cluster-demo"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# Attach the required AmazonEKSClusterPolicy to the IAM role created above
resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.demo.name
}

# EKS cluster configuration
resource "aws_eks_cluster" "demo" {
  name     = "demo"
  role_arn = aws_iam_role.demo.arn

  vpc_config {
    subnet_ids = [
      aws_subnet.private-us-east-1a.id,
      aws_subnet.private-us-east-1b.id,
      aws_subnet.public-us-east-1a.id,
      aws_subnet.public-us-east-1b.id
    ]
  }

  depends_on = [aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy]
}

7. Creating Private Nodes for EKS

  • A single managed node group for the Kubernetes worker nodes

# IAM role for the worker nodes
resource "aws_iam_role" "nodes" {
  name = "eks-node-group-nodes"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy" # Allows worker nodes to connect to the EKS cluster
  role       = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.nodes.name
}

resource "aws_eks_node_group" "private-nodes" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "private-nodes"
  node_role_arn   = aws_iam_role.nodes.arn

  subnet_ids = [
    aws_subnet.private-us-east-1a.id,
    aws_subnet.private-us-east-1b.id
  ]

  capacity_type  = "ON_DEMAND"
  instance_types = ["t3.small"]

  scaling_config { # By default EKS will not autoscale your nodes; you need to deploy an additional cluster autoscaler
    desired_size = 1
    max_size     = 5
    min_size     = 0
  }

  update_config {
    max_unavailable = 1
  }

  labels = {
    role = "general"
  }

  # taint {
  #   key    = "team"
  #   value  = "devops"
  #   effect = "NO_SCHEDULE"
  # }

  # launch_template {
  #   name    = aws_launch_template.eks-with-disks.name
  #   version = aws_launch_template.eks-with-disks.latest_version
  # }

  depends_on = [
    aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly,
  ]
}

8. Creating OIDC Connector

  • We create an OIDC identity provider so that applications deployed on EKS can be granted AWS permissions through IAM Roles for Service Accounts (IRSA).

data "tls_certificate" "eks" {
  url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
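As a sketch of how this provider is typically consumed (the namespace and service account name "default:my-app" are hypothetical, purely for illustration), an IAM role assumable by a Kubernetes service account via IRSA might look like:

```hcl
data "aws_iam_policy_document" "example_oidc" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    # Restrict the role to one service account (hypothetical namespace/name)
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:my-app"]
    }
  }
}

resource "aws_iam_role" "example_app" {
  name               = "eks-example-app"
  assume_role_policy = data.aws_iam_policy_document.example_oidc.json
}
```

Pods using that service account can then call AWS APIs with this role's permissions, without node-level credentials.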

After you have created the above files, simply run the following commands:

terraform init

terraform plan

terraform apply -auto-approve

Now, navigate to your AWS Console and search for EKS.

As shown in the image below, you will see an EKS cluster created with the name that we defined in the EKS.tf file.
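You can also verify from the terminal (assuming your AWS CLI is configured for the same account and the us-east-1 region used above):

```
aws eks update-kubeconfig --region us-east-1 --name demo
kubectl get nodes
```

The first command writes the cluster's credentials into your kubeconfig; the second should list the worker node(s) from the node group once they have joined.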

NOTE: Please do not forget to destroy the resources after the hands-on, to avoid incurring costs.

terraform destroy -auto-approve

I hope this article helped you understand the basics of deploying an EKS cluster on AWS using Terraform.

Thank you for your time.
