An In-depth Guide to Creating an AWS EKS Cluster

Ahmed Salem
6 min read · Jul 19, 2023


As more organizations embrace container orchestration, Kubernetes has quickly emerged as a popular solution. In this article, we’ll explore how to set up an Amazon Elastic Kubernetes Service (EKS) cluster, deploy a simple ‘Hello World’ web application, and expose it publicly using a LoadBalancer service.

Setting the Stage

Let’s start by understanding what we aim to achieve. We will:

  1. Set up an EKS cluster via the eksctl utility and IAM roles via the AWS Console.
  2. Deploy a Kubernetes (K8s) pod using a YAML file, utilizing the Docker image gcr.io/google-samples/hello-app:2.0.
  3. Create a K8s Service of type LoadBalancer using a YAML file for the deployed pods, exposing the application publicly on port 80 and forwarding traffic to container port 8080.

Our acceptance criterion is straightforward: as a user, you should be able to access the ‘Hello World’ page deployed on the EKS cluster via a web browser.

Prerequisites

Ensure you have the following:

  1. An AWS account with admin privileges.
  2. The AWS CLI, eksctl (for creating the cluster), and kubectl (for creating pods, deployments, and services from YAML files, and for retrieving the service’s domain name and port) installed.
  3. An EC2 instance configured to manage the cluster via kubectl.

The architecture we’ll implement is straightforward: an EKS cluster running in a dedicated VPC, with worker nodes in private subnets and the application exposed through a public LoadBalancer.

Step-by-Step Guide

Step 1: Create an IAM Role for the EKS Cluster

The first step involves creating an IAM role (let’s call it EKS-ClusterRole) for our EKS cluster via the AWS Management Console. This role allows Amazon EKS to call other AWS services on your behalf.

Learn more about the Amazon EKS cluster IAM role here.
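
If you’d rather script this step, a rough CLI equivalent looks like this (the inline trust policy allows the EKS service to assume the role):

$ aws iam create-role --role-name EKS-ClusterRole \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name EKS-ClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy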

Step 2: Set Up a Dedicated VPC for the EKS Cluster

To provide a secure environment for our EKS cluster, we’ll create a dedicated Virtual Private Cloud (VPC) using the CloudFormation template shown below.

AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS Sample VPC - Private and Public subnets'

Parameters:

  VpcBlock:
    Type: String
    Default: 192.168.0.0/16
    Description: The CIDR range for the VPC. This should be a valid private (RFC 1918) CIDR range.

  PublicSubnet01Block:
    Type: String
    Default: 192.168.0.0/18
    Description: CidrBlock for public subnet 01 within the VPC

  PublicSubnet02Block:
    Type: String
    Default: 192.168.64.0/18
    Description: CidrBlock for public subnet 02 within the VPC

  PrivateSubnet01Block:
    Type: String
    Default: 192.168.128.0/18
    Description: CidrBlock for private subnet 01 within the VPC

  PrivateSubnet02Block:
    Type: String
    Default: 192.168.192.0/18
    Description: CidrBlock for private subnet 02 within the VPC

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      -
        Label:
          default: "Worker Network Configuration"
        Parameters:
          - VpcBlock
          - PublicSubnet01Block
          - PublicSubnet02Block
          - PrivateSubnet01Block
          - PrivateSubnet02Block

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcBlock
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-VPC'

  InternetGateway:
    Type: "AWS::EC2::InternetGateway"

  VPCGatewayAttachment:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Public Subnets
        - Key: Network
          Value: Public

  PrivateRouteTable01:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Private Subnet AZ1
        - Key: Network
          Value: Private01

  PrivateRouteTable02:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Private Subnet AZ2
        - Key: Network
          Value: Private02

  PublicRoute:
    DependsOn: VPCGatewayAttachment
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PrivateRoute01:
    DependsOn:
      - VPCGatewayAttachment
      - NatGateway01
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable01
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway01

  PrivateRoute02:
    DependsOn:
      - VPCGatewayAttachment
      - NatGateway02
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable02
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway02

  NatGateway01:
    DependsOn:
      - NatGatewayEIP1
      - PublicSubnet01
      - VPCGatewayAttachment
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt 'NatGatewayEIP1.AllocationId'
      SubnetId: !Ref PublicSubnet01
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-NatGatewayAZ1'

  NatGateway02:
    DependsOn:
      - NatGatewayEIP2
      - PublicSubnet02
      - VPCGatewayAttachment
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt 'NatGatewayEIP2.AllocationId'
      SubnetId: !Ref PublicSubnet02
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-NatGatewayAZ2'

  NatGatewayEIP1:
    DependsOn:
      - VPCGatewayAttachment
    Type: 'AWS::EC2::EIP'
    Properties:
      Domain: vpc

  NatGatewayEIP2:
    DependsOn:
      - VPCGatewayAttachment
    Type: 'AWS::EC2::EIP'
    Properties:
      Domain: vpc

  PublicSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 01
    Properties:
      MapPublicIpOnLaunch: true
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet01Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet01"
        - Key: kubernetes.io/role/elb
          Value: 1

  PublicSubnet02:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 02
    Properties:
      MapPublicIpOnLaunch: true
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet02Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet02"
        - Key: kubernetes.io/role/elb
          Value: 1

  PrivateSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 03
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet01Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet01"
        - Key: kubernetes.io/role/internal-elb
          Value: 1

  PrivateSubnet02:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Private Subnet 02
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet02Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet02"
        - Key: kubernetes.io/role/internal-elb
          Value: 1

  PublicSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet01
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet02
      RouteTableId: !Ref PublicRouteTable

  PrivateSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet01
      RouteTableId: !Ref PrivateRouteTable01

  PrivateSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet02
      RouteTableId: !Ref PrivateRouteTable02

  ControlPlaneSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster communication with worker nodes
      VpcId: !Ref VPC

Outputs:

  SubnetIds:
    Description: Subnets IDs in the VPC
    Value: !Join [ ",", [ !Ref PublicSubnet01, !Ref PublicSubnet02, !Ref PrivateSubnet01, !Ref PrivateSubnet02 ] ]

  SecurityGroups:
    Description: Security group for the cluster control plane communication with worker nodes
    Value: !Join [ ",", [ !Ref ControlPlaneSecurityGroup ] ]

  VpcId:
    Description: The VPC Id
    Value: !Ref VPC
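
Assuming the template is saved as eks-vpc.yaml (the file and stack names are illustrative), you can deploy it and wait for completion with:

$ aws cloudformation create-stack --stack-name eks-vpc \
    --template-body file://eks-vpc.yaml
$ aws cloudformation wait stack-create-complete --stack-name eks-vpc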

Step 3: Create the EKS Cluster

Next, we’ll leverage the eksctl tool to create our EKS cluster, providing a configuration file to specify the cluster's settings as shown below:

apiVersion: eksctl.io/v1alpha5
cloudWatch:
  clusterLogging:
    enableTypes:
      - api
      - audit
      - authenticator
      - controllerManager
      - scheduler
    logRetentionInDays: 14
iam:
  vpcResourceControllerPolicy: true
  withOIDC: true
kind: ClusterConfig
kubernetesNetworkConfig:
  ipFamily: IPv4
managedNodeGroups:
  - amiFamily: AmazonLinux2
    desiredCapacity: 3
    disableIMDSv1: false
    disablePodIMDS: false
    iam:
      withAddonPolicies:
        albIngress: true
        appMesh: false
        appMeshPreview: false
        autoScaler: true
        awsLoadBalancerController: true
        certManager: true
        cloudWatch: true
        ebs: true
        efs: true
        externalDNS: true
        fsx: false
        imageBuilder: false
        xRay: false
    instanceSelector: {}
    instanceTypes:
      - c5.xlarge
    labels:
      alpha.eksctl.io/cluster-name: Cluster-Demo
      alpha.eksctl.io/nodegroup-name: App-NG
    maxSize: 3
    minSize: 3
    name: App-NG
    privateNetworking: true
    releaseVersion: ""
    ssh:
      allow: false
      publicKeyPath: ""
    tags:
      alpha.eksctl.io/nodegroup-name: App-NG
      alpha.eksctl.io/nodegroup-type: managed
    volumeIOPS: 3000
    volumeSize: 158
    volumeThroughput: 125
    volumeType: gp3
metadata:
  name: Cluster-Demo
  region: me-south-1
  version: "1.24"
privateCluster:
  enabled: true
  skipEndpointCreation: false
vpc:
  autoAllocateIPv6: false
  cidr: 10.0.0.0/16
  id: vpc-054c9ed32c6b642e9
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Disable
  subnets:
    private:
      me-south-1a:
        az: me-south-1a
        cidr: 10.0.128.0/20
        id: subnet-034b73b7f3984c274
      me-south-1b:
        az: me-south-1b
        cidr: 10.0.144.0/20
        id: subnet-0ae1327a462d69f31
      me-south-1c:
        az: me-south-1c
        cidr: 10.0.160.0/20
        id: subnet-0ca6880fdba103ca3
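
With the configuration saved as cluster-config.yaml (an illustrative file name), create the cluster with the command below; provisioning typically takes 15–20 minutes:

$ eksctl create cluster -f cluster-config.yaml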

Step 4: Launch and Configure an EC2 Instance

We’ll now launch an EC2 instance that can communicate with our EKS cluster and install the necessary tools on it: the latest versions of the AWS CLI, the AWS IAM Authenticator, and kubectl.

After installing these, configure the AWS CLI by providing your access key, secret access key, default region name, and default output format. To confirm that the AWS CLI is configured correctly, try calling any AWS service.
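
For example (aws sts get-caller-identity is a cheap way to confirm your credentials work):

$ aws configure
$ aws sts get-caller-identity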

Next, we’ll install the AWS IAM Authenticator, which Amazon EKS uses to authenticate requests to your Kubernetes cluster. You can verify the installation by invoking the help command.
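
For example, assuming the binary is on your PATH:

$ aws-iam-authenticator help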

Finally, we’ll install kubectl, the command-line utility for interacting with the Kubernetes cluster. After installation, we’ll update the kubeconfig file, which kubectl uses to connect to the cluster. Running kubectl get svc verifies that the kubeconfig is set up correctly.
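
A minimal sketch, using the cluster name and region from the configuration in step 3:

$ aws eks update-kubeconfig --region me-south-1 --name Cluster-Demo
$ kubectl get svc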

Step 5: Create an IAM Role for the EKS Worker Nodes

Similar to step 1, create an IAM role for the EKS worker nodes (EKS-WorkerNodeRole). This role allows worker nodes to join the cluster. More details on creating an Amazon EKS node IAM role can be found here.
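
A rough CLI equivalent (the inline trust policy lets EC2 instances assume the role; the attached policies are the standard AWS managed policies for EKS worker nodes):

$ aws iam create-role --role-name EKS-WorkerNodeRole \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name EKS-WorkerNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
$ aws iam attach-role-policy --role-name EKS-WorkerNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
$ aws iam attach-role-policy --role-name EKS-WorkerNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly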

Step 6: Create Worker Nodes

Using the AWS Management Console, we’ll create worker nodes using a Managed Node Group. You can find more details about creating a managed node group here.
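
If you’d rather script this step too, a sketch along these lines should work; <account-id> is a placeholder, and the subnet IDs are the private subnets from the step 3 configuration:

$ aws eks create-nodegroup --cluster-name Cluster-Demo \
    --nodegroup-name App-NG \
    --node-role arn:aws:iam::<account-id>:role/EKS-WorkerNodeRole \
    --subnets subnet-034b73b7f3984c274 subnet-0ae1327a462d69f31 subnet-0ca6880fdba103ca3 \
    --instance-types c5.xlarge \
    --scaling-config minSize=3,maxSize=3,desiredSize=3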

Step 7: Deploy the Demo Application

SSH into the configured EC2 instance. We’ll create a deployment using a YAML file as shown below and apply it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: metrics
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: sales
    spec:
      containers:
        - name: hello
          image: "gcr.io/google-samples/hello-app:2.0"

Similarly, we’ll create a service using another YAML file and apply it. This service exposes the application to the internet.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: metrics
    department: sales
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
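
Again assuming an illustrative file name of service.yaml:

$ kubectl apply -f service.yaml
$ kubectl get svc my-service

Once provisioning finishes, the EXTERNAL-IP column shows the DNS name of the LoadBalancer.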

Finally, confirm that the service is running by making a curl request to the DNS name of the AWS LoadBalancer or by pasting that DNS name in the browser.

$ curl --silent a705b459c7045418087f45dcdd920176-789082253.us-east-2.elb.amazonaws.com:80

http://a705b459c7045418087f45dcdd920176-789082253.us-east-2.elb.amazonaws.com/

You should see the ‘Hello World’ page as a result, indicating that the service is running correctly.

Conclusion

This step-by-step guide provides a detailed approach to setting up an EKS cluster, deploying a demo application, and exposing it publicly. As Kubernetes continues to be an integral part of modern cloud architectures, understanding its setup and operation becomes increasingly important. Happy exploring!


Ahmed Salem

Sr. Cloud DevOps Engineer | AWS Community Builder | 3x AWS Certified | 1x Azure Certified | CKA | CKAD | Terraform Certified