Deploy a Kubernetes cluster in AWS EKS using Terraform

Shak
6 min read · May 8, 2024


In my previous blog I discussed how much fun Kubernetes can be once you get the hang of it. Deploying a Kubernetes cluster in AWS EKS is very convenient thanks to the availability of Terraform modules. Terraform modules are pre-built configurations for different resources and infrastructure that you can use instead of writing the code from scratch every time you want to build something. You can customize them however you want: all you have to do is reference the module, and your IDE can work directly with the code it contains. Let's get into this!

Prerequisites:

  • AWS account
  • kubectl installed (the Kubernetes CLI)
  • AWS CLI installed
  • Terraform installed
  • Basic Terraform knowledge and experience
  • Basic Kubernetes knowledge and experience
  • Visual Studio Code or similar IDE

Objectives:

  • Set up AWS credentials
  • Install kubectl (if not already done)
  • Configure working directory
  • Edit .tf files to create Kubernetes cluster infrastructure
  • Deploy the Kubernetes cluster
  • Clean up

We will start by setting up Terraform Cloud as our backend. This is a safe way to pass your credentials to Terraform so it can make the API calls to create resources in AWS. If you don't know how to do this, please refer to my previous post here.

Once you have set up Terraform Cloud as your backend with your AWS credentials, install kubectl if you don't already have it. Kubectl is the command-line tool used to interact with Kubernetes clusters: it lets you deploy and manage applications, inspect cluster resources, and troubleshoot issues. Run the commands below to install it. Once installation is complete, run kubectl version --client and you should see the installed client version.
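This is a minimal sketch for Linux on amd64, following the official Kubernetes install docs; if you are on macOS or Windows, download the matching binary for your platform instead.

# Download the latest stable kubectl release
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Move it into your PATH with the right permissions
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify the installed client version
kubectl version --client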

Create a new directory and add the following files:

  • terraform.tf
  • variables.tf
  • vpc.tf
  • outputs.tf
  • eks-cluster.tf
  • main.tf

Your directory should look like this:
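.
├── eks-cluster.tf
├── main.tf
├── outputs.tf
├── terraform.tf
├── variables.tf
└── vpc.tf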

Now it's time to add our Terraform code. Open the terraform.tf file and paste the code below. AWSEKS is the name of the Terraform Cloud organization I am using for this project; this is where you should put the name of your own Terraform Cloud organization. The provider versions pinned below were the current releases at the time of this project. Once completed, you can save and close.

terraform {
  cloud {
    organization = "AWSEKS"

    workspaces {
      name = "AWSEKS"
    }
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.48.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "~> 3.6.1"
    }

    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0.5"
    }

    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~> 2.3.4"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.30.0"
    }
  }

  required_version = "~> 1.3"
}

The code above configures Terraform Cloud as the backend and pins the provider versions this project depends on. The availability zone and cluster name configuration lives in main.tf: a data block makes an API call to pull the availability zones in your specified AWS region, and a locals block builds a unique EKS cluster name from a random string defined in a resource block.
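Since main.tf itself is not reproduced in this walkthrough, here is a minimal sketch of what it could contain; the resource names and the cluster name prefix are assumptions inferred from the references in vpc.tf and eks-cluster.tf.

provider "aws" {
  region = var.region
}

# Pulls the availability zones available in the configured region
data "aws_availability_zones" "available" {}

# Random suffix so the cluster name is unique (name is an assumption)
resource "random_string" "suffix" {
  length  = 8
  special = false
}

locals {
  cluster_name = "eks-${random_string.suffix.result}"
}

Once main.tf is saved, open the vpc.tf file and paste the following code. Make sure you use the current module version available when you are working on this project; you can find all the updated version numbers in the Terraform Registry.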

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.14.2"

name = "vpc"

cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)

private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true

public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}

private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}

The module block at the top tells Terraform to call the VPC module listed on the registry. We define our CIDR block and pick the availability zones using the data block we defined earlier. We also define the public and private subnets, and set enable_nat_gateway, single_nat_gateway, and enable_dns_hostnames to true. Finally, we tag the private and public subnets with the local cluster name variable we created earlier. Once completed, you can save and close. Now edit the eks-cluster.tf file with the code outlined below.

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "20.9.0"

cluster_name = local.cluster_name
cluster_version = "1.24"

vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true

eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"

}

eks_managed_node_groups = {
one = {
name = "node-group-1"

instance_types = ["t3.small"]

min_size = 1
max_size = 3
desired_size = 2
}

two = {
name = "node-group-2"

instance_types = ["t3.small"]

min_size = 1
max_size = 2
desired_size = 1
}
}
}

The code above calls the Terraform EKS module from the registry. We specify which VPC the EKS cluster will connect to by referencing the VPC ID and private subnet IDs from the VPC module. We also define other parameters, like public endpoint access, the AMI type, and the managed node groups. If you are not familiar with nodes, you can read more about Kubernetes nodes and other resources here. Now let's edit the variables.tf file. Paste the following code to define your region.

variable "region" {
description = "AWS region"
type = string
default = "us-east-1"
}

I am using us-east-1 as my region; you can use any region you want. Now let's edit the outputs.tf file and paste the following code, which defines the values you want printed to the CLI.

output "cluster_endpoint" {
description = "Endpoint for EKS control plane"
value = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
description = "Security group ids attached to the cluster control plane"
value = module.eks.cluster_security_group_id
}

output "region" {
description = "AWS region"
value = var.region
}

output "cluster_name" {
description = "Kubernetes Cluster Name"
value = module.eks.cluster_name
}

The code is complete, and now we have to deploy it to AWS. Initialize Terraform with the terraform init command, then validate the configuration using the terraform validate command. Once that passes, use the terraform plan command to see what resources will be added; you should see 59 resources to add in your CLI.
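For reference, the full sequence looks like this; the plan summary line is standard Terraform output.

terraform init       # connect to the Terraform Cloud backend and download providers and modules
terraform validate   # check the configuration for errors
terraform plan       # preview the changes; expect a plan of 59 to add, 0 to change, 0 to destroy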

Now run the terraform apply -auto-approve command to execute this plan. It may take a while (mine took 12 minutes), but you should see the apply-complete summary and the outputs we defined.
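With placeholder values in place of my real identifiers, the tail of the output looks roughly like this:

Apply complete! Resources: 59 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = "https://<unique-id>.<region>.eks.amazonaws.com"
cluster_name = "eks-<random-suffix>"
cluster_security_group_id = "sg-<id>"
region = "us-east-1"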

Our resources are successfully created. All we have to do now is configure kubectl, and then we will be able to communicate with the cluster. Use the command below to configure kubectl:

aws eks update-kubeconfig --region <region> --name <EKS_cluster_name>

Replace the region and the cluster name with your own, as shown in the outputs printed to your CLI when Terraform was applied. You can also go to the EKS dashboard in your AWS console to look at your cluster, VPC, and subnet information.
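To confirm the connection works, here are a couple of standard kubectl checks; with the node groups defined above, you should see three worker nodes (desired_size of 2 plus 1).

kubectl cluster-info   # prints the control plane endpoint
kubectl get nodes      # lists the worker nodes from both node groups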

We have completed our project! Don't forget to run terraform destroy to delete all your resources. Thank you for following.
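When you are finished exploring, tear everything down from the same working directory:

terraform destroy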


Written by Shak

Financial Analyst turned Cloud Engineer | Tech Enthusiast | DevOps ♾ | Cloud Engineer ☁️ | Linux 🐧 | AWS 🖥️ | Python 🐍 | Docker 🐳 | Terraform 🏗