Set up a resilient Kubernetes Cluster in AWS EKS using Terraform

Hari Manikkothu · Published in Kubernetes · Oct 1, 2019 · 4 min read

This article explains how to set up a Kubernetes cluster in AWS EKS using Terraform.

AWS EKS is a managed Kubernetes service. It lets users focus on setting up the worker nodes and managing the application layer, while AWS EKS takes care of the control plane in a fully secure, reliable and fault-tolerant manner. The control plane consists of at least two API server nodes and three etcd nodes that run across three availability zones in a region.

[Figure: EKS Deployment Architecture]

Terraform is a popular infrastructure provisioning and management platform. Terraform helps achieve fully automated infrastructure provisioning, which can then be integrated into a CI/CD pipeline.

Prerequisite

  1. AWS account — https://portal.aws.amazon.com/billing/signup
  2. Ubuntu 18.04 LTS — Though the demo code is tested on this specific OS, it should work on other Linux-based distributions without changes.

Setup Terraform

Terraform is distributed as a single binary. Download the appropriate binary from https://www.terraform.io/downloads.html to your local machine, unzip it and set the path accordingly.

$ wget https://releases.hashicorp.com/terraform/<version>/terraform_<version>_linux_amd64.zip
$ unzip terraform_<version>_linux_amd64.zip
$ chmod +x terraform
$ sudo mv terraform /usr/local/bin

Setup aws_iam_authenticator

Amazon EKS uses IAM to provide authentication to the Kubernetes cluster. aws_iam_authenticator is used by kubectl to connect to the EKS cluster. Download and set up aws_iam_authenticator as follows.

$ curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/<version>/<date>/bin/linux/amd64/aws-iam-authenticator
$ chmod +x ./aws-iam-authenticator
$ sudo mv aws-iam-authenticator /usr/local/bin

Setup aws CLI

aws CLI is needed to update/create the kubectl configuration.

# https://docs.aws.amazon.com/cli/latest/userguide/install-linux-al2017.html
# Install pip for python3
$ sudo apt install python3-pip
# Install aws CLI using pip3
$ pip3 install --upgrade --user awscli
# Add install location to the PATH
$ export PATH=$HOME/.local/bin:$PATH

Setup kubectl

# https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/

Preparation

Variables

The EKS cluster name and the workstation IPs are configured in the variables.tf file.
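The file itself is not reproduced in the article; a minimal sketch of what variables.tf could contain (the variable names and defaults below are illustrative, not the author's exact code):

variable "cluster-name" {
  type    = string
  default = "terraform-eks-demo"
}

# CIDR block of the workstation allowed to reach the cluster API,
# e.g. "203.0.113.10/32" -- replace with your own public IP
variable "workstation-external-cidr" {
  type    = string
  default = "0.0.0.0/0"
}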

VPC Networking

This demo creates an EKS control plane and sets up subnets across three availability zones for better resiliency; the high-level architecture is depicted in the diagram above. As implemented in the vpc.tf file, it creates a 10.0.0.0/16 VPC, three 10.0.*.0/24 subnets, an internet gateway, and subnet routing that sends external traffic through the internet gateway.
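A condensed sketch of what vpc.tf implements, assuming illustrative resource names (the kubernetes.io/cluster/* tags are required so EKS can discover the subnets):

# Availability zones of the current region
data "aws_availability_zones" "available" {}

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"

  tags = {
    "kubernetes.io/cluster/${var.cluster-name}" = "shared"
  }
}

# One /24 subnet per availability zone
resource "aws_subnet" "demo" {
  count                   = 3
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  cidr_block              = "10.0.${count.index}.0/24"
  map_public_ip_on_launch = true
  vpc_id                  = aws_vpc.demo.id

  tags = {
    "kubernetes.io/cluster/${var.cluster-name}" = "shared"
  }
}

resource "aws_internet_gateway" "demo" {
  vpc_id = aws_vpc.demo.id
}

# Route all external traffic through the internet gateway
resource "aws_route_table" "demo" {
  vpc_id = aws_vpc.demo.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.demo.id
  }
}

resource "aws_route_table_association" "demo" {
  count          = 3
  subnet_id      = aws_subnet.demo[count.index].id
  route_table_id = aws_route_table.demo.id
}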

EKS Cluster

An EKS control plane is created as per the code in eks-cluster.tf. EKS decides where to place the control-plane nodes and ensures their high availability and resilience. During cluster creation, the user only needs to provide the subnets that will host the worker nodes.
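A condensed sketch of the control-plane resource (the IAM role and security group it references are assumed to be defined alongside it in eks-cluster.tf; names are illustrative):

resource "aws_eks_cluster" "demo" {
  name     = var.cluster-name
  # Service role that allows EKS to manage the control plane on your behalf
  role_arn = aws_iam_role.demo-cluster.arn

  vpc_config {
    security_group_ids = [aws_security_group.demo-cluster.id]
    subnet_ids         = aws_subnet.demo[*].id
  }
}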

EKS Worker Nodes

eks-worker-node.tf creates the following (a condensed sketch follows the list):

  • IAM role allowing Kubernetes actions to access other AWS services
  • EC2 Security Group to allow networking traffic
  • Data source to fetch latest EKS worker AMI
  • AutoScaling Launch Configuration to configure worker instances
  • AutoScaling Group to launch worker instances
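A sketch of the AMI lookup, launch configuration and autoscaling group, assuming the node IAM instance profile, node security group and bootstrap user data are defined elsewhere in the same file (resource names are illustrative):

# Latest EKS-optimized worker AMI published by Amazon
data "aws_ami" "eks-worker" {
  most_recent = true
  owners      = ["602401143452"] # Amazon EKS AMI account ID

  filter {
    name   = "name"
    values = ["amazon-eks-node-${aws_eks_cluster.demo.version}-v*"]
  }
}

resource "aws_launch_configuration" "demo" {
  name_prefix          = "terraform-eks-demo"
  image_id             = data.aws_ami.eks-worker.id
  instance_type        = "m4.large"
  iam_instance_profile = aws_iam_instance_profile.demo-node.name
  security_groups      = [aws_security_group.demo-node.id]
  # user_data that calls /etc/eks/bootstrap.sh is omitted here for brevity

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "demo" {
  name                 = "terraform-eks-demo"
  launch_configuration = aws_launch_configuration.demo.id
  desired_capacity     = 2
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = aws_subnet.demo[*].id

  # Tag required so EKS can identify the worker nodes of this cluster
  tag {
    key                 = "kubernetes.io/cluster/${var.cluster-name}"
    value               = "owned"
    propagate_at_launch = true
  }
}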

Terraform plan and apply

The next step is to execute the Terraform code. Use ‘terraform plan’ to see the execution plan before any resources are actually created. The output lists the steps needed to create the EKS cluster and nodes, or reports errors if there are any.

Run the ‘terraform apply’ command to start the execution.
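For a fresh working directory the sequence looks like this (terraform init downloads the AWS provider before the first plan):

$ terraform init    # download provider plugins
$ terraform plan    # review the execution plan
$ terraform apply   # create the resources; confirm the prompt with 'yes'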

Configure kubectl

Run the command below to configure the aws CLI:

$ aws configure
AWS Access Key ID [None]: <access_key>
AWS Secret Access Key [None]: <secret_key>
Default region name [None]: us-east-1
Default output format [None]: json

Run the command below to create the kubeconfig:

$ aws eks --region <region> update-kubeconfig --name <cluster_name>

Connect to the EKS cluster using kubectl

A ConfigMap is required for the worker nodes to join the underlying Kubernetes cluster. It can be generated by running the command ‘terraform output config_map_aws_auth’ and saving the output to configmap.yml.
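For example, redirecting the output straight into the file:

$ terraform output config_map_aws_auth > configmap.yml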

# Sample configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<acc-num>:role/<role-name>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Apply and verify the nodes with the following commands.

$ kubectl apply -f configmap.yml
$ kubectl get nodes

The output will list all of the worker nodes that have successfully joined the cluster.
