EKS Cluster with eksctl

Lubomir Tobek
9 min read · Jan 5, 2024


“Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS and on-premises. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.”

Image source: distro.eks.amazonaws.com

Today, I will look at creating a Kubernetes cluster using the AWS CLI and the eksctl tool in AWS.

Prerequisites

This tutorial is a hands-on demonstration. To follow along, you only need a computer and an AWS account. If you don’t have an AWS account yet, you can sign up for a Free Tier account.

Creating an Admin User

Before you create an EKS cluster, create an administrator user. This user lets you log in to the AWS console and configure your EKS cluster. Start the tutorial by creating a user with administrative privileges through the AWS console.

AWS Console -> IAM -> IAM Dashboard -> Users -> Create user

Next, provide a username in the User name field; here “eks-admin” is used.

Click the Attach existing policies directly option, check the AdministratorAccess policy, and click Next.

The AdministratorAccess policy gives the user (“eks-admin”) full access to AWS, which among other things:

  • Allows the user to use CloudFormation
  • Allows creating EC2 instances and CloudWatch logs
  • Allows configuring Elastic Load Balancers

Finally, review the user details and click Create user to finalize creating the admin user.

Click on the newly created user again and create an access key for it.

Optionally add a description tag value, then create the access key. Note the Access Key ID and Secret Access Key; we will need them later.

Launching an EC2 Instance

Now that you have created “eks-admin”, you can launch your first EC2 instance. You will use this instance as a jump box from which you run the commands that create the cluster.

EC2 -> Instances -> Launch an instance

We will give the instance a name; in my case, “jumpbox”.

For the AMI, we will choose one of the predefined images, for example the Amazon Linux 2 AMI.

Keep the default instance type (t3.micro) and continue configuring the instance details.

In the “Create new key pair” section, we will create a new key for accessing the instance.

Enable the Auto-assign public IP option. This ensures the instance receives a public IP address so you can reach it over the internet.

In the Configure storage section, I add some extra disk space to the instance.

In the next step, I launch the instance.

Then I wait for the instance to initialize and connect to it from my local machine.

Configuring the AWS CLI Tool

Now that your instance is running, it’s time to configure the command-line (CLI) tools. Configuring the CLI tools against your AWS account is essential for creating the Kubernetes cluster.

From your EC2 dashboard, check the box to select the instance, as shown below. Click on Connect to initialize connecting to the instance.

Once you’ve connected, your browser opens the interactive terminal shown below, which acts as a temporary shell session with your EC2 instance.

Now we can run the aws command below to check the CLI version.

[ec2-user@ip-172-31-6-155 ~]$ aws --version
aws-cli/1.18.147 Python/2.7.18 Linux/5.10.201-191.748.amzn2.x86_64 botocore/1.18.6

The preinstalled AWS CLI is version 1, so we download and install AWS CLI version 2 to make sure we have access to all current EKS features.

[ec2-user@ip-172-31-6-155 ~]$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
[ec2-user@ip-172-31-6-155 ~]$ unzip awscliv2.zip

[ec2-user@ip-172-31-6-155 ~]$ sudo ./aws/install
You can now run: /usr/local/bin/aws --version

The aws command should now report version 2 (exact version numbers will vary):

[ec2-user@ip-172-31-6-155 ~]$ /usr/local/bin/aws --version
aws-cli/2.x.x Python/3.x.x Linux/5.10.201-191.748.amzn2.x86_64 exe/x86_64.amzn.2 prompt/off

Next, run the aws configure command to configure your instance with the new AWS CLI tools.

Enter the appropriate values at the prompts, as follows:

  • AWS Access Key ID [None] — Enter the Access Key ID you noted in the previous “Creating an Admin User” section.
  • AWS Secret Access Key [None] — Enter the Secret Access Key you noted in the previous “Creating an Admin User” section.
  • Default region name [None] — Enter a supported region, such as us-east-1.
  • Default output format [None] — Enter json; JSON is the default output format and the easiest to work with here.

[ec2-user@ip-172-31-6-155 ~]$ aws configure
AWS Access Key ID [None]: **************************
AWS Secret Access Key [None]: *********************************************
Default region name [None]: us-east-1
Default output format [None]: json
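
To confirm that the CLI is using the new credentials, you can ask AWS who you are; the returned Arn should point at the eks-admin user (account ID and user ID are masked here):

[ec2-user@ip-172-31-6-155 ~]$ aws sts get-caller-identity
{
    "UserId": "****************",
    "Account": "************",
    "Arn": "arn:aws:iam::************:user/eks-admin"
}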

Configuring Amazon EKS Command-Line Tool (eksctl)

Since we aim to create the Kubernetes cluster from the command line, we will also install the Amazon EKS command-line tool (eksctl). This tool allows us to create and manage Kubernetes clusters on Amazon EKS.

[ec2-user@ip-172-31-6-155 ~]$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
[ec2-user@ip-172-31-6-155 ~]$ sudo mv /tmp/eksctl /usr/bin

[ec2-user@ip-172-31-6-155 ~]$ eksctl version
0.167.0

You can run the command below to get a list of all of the supported eksctl commands and their usage.

[ec2-user@ip-172-31-6-155 ~]$ eksctl --help
The official CLI for Amazon EKS

Usage: eksctl [command] [flags]

Commands:
  eksctl anywhere        EKS anywhere
  eksctl associate       Associate resources with a cluster
  eksctl completion      Generates shell completion scripts for bash, zsh or fish
  eksctl create          Create resource(s)
  eksctl delete          Delete resource(s)
  eksctl deregister      Deregister a non-EKS cluster
  eksctl disassociate    Disassociate resources from a cluster
  eksctl drain           Drain resource(s)
  eksctl enable          Enable features in a cluster
  eksctl get             Get resource(s)
  eksctl help            Help about any command
  eksctl info            Output the version of eksctl, kubectl and OS info
  eksctl register        Register a non-EKS cluster
  eksctl scale           Scale resources(s)
  eksctl set             Set values
  eksctl unset           Unset values
  eksctl update          Update resource(s)
  eksctl upgrade         Upgrade resource(s)
  eksctl utils           Various utils
  eksctl version         Output the version of eksctl

Common flags:
  -C, --color string   toggle colorized logs (valid options: true, false, fabulous) (default "true")
  -d, --dumpLogs       dump logs to disk on failure if set to true
  -h, --help           help for this command
  -v, --verbose int    set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)

Use 'eksctl [command] --help' for more information about a command.


For detailed docs go to https://eksctl.io/

Provisioning your EKS Cluster

Now that we have configured eksctl, we can issue the command that creates our first EKS cluster.

[ec2-user@ip-172-31-6-155 ~]$ eksctl create cluster --name dev --version 1.28 --region us-east-1 --nodegroup-name standard-workers --node-type t3.micro --nodes 3 --nodes-min 1 --nodes-max 4 --managed
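
As a sketch of the declarative alternative, the same cluster could be described in an eksctl config file (the file name cluster.yaml is my own choice) and created with eksctl create cluster -f:

# cluster.yaml: equivalent of the flags used above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dev
  region: us-east-1
  version: "1.28"
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.micro
    desiredCapacity: 3
    minSize: 1
    maxSize: 4

[ec2-user@ip-172-31-6-155 ~]$ eksctl create cluster -f cluster.yaml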

Let’s go to our CloudFormation dashboard and see the actions performed by the command. The eksctl create cluster command uses CloudFormation to provision the infrastructure in your AWS account.

As we can see below, the eksctl-dev-cluster CloudFormation stack is created. This process may take 15–20 minutes or more to complete.

Let’s take a look at the CloudFormation stack.

Now let’s go to the EKS dashboard, where we will see a cluster named dev being provisioned.

This is what the console view looks like while creating a Node group.

Let’s take a look at AWS Console -> EKS -> Clusters -> dev

Below, you can see the dev EKS cluster’s details, such as node name, instance type, node group, and status.

Let’s switch to the EC2 dashboard and see that new instances are running: three t3.micro worker nodes, in addition to our jumpbox.
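
The same information is also available from the command line; eksctl can list the cluster and its node groups (the flags mirror the ones used at creation time):

[ec2-user@ip-172-31-6-155 ~]$ eksctl get cluster --region us-east-1
[ec2-user@ip-172-31-6-155 ~]$ eksctl get nodegroup --cluster dev --region us-east-1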

Let’s go back to looking at the CloudFormation stack.

Let’s run the command below to update our kubectl configuration (update-kubeconfig) with the cluster endpoint, certificate, and credentials.

[ec2-user@ip-172-31-6-155 ~]$ aws eks update-kubeconfig --name dev --region us-east-1
Added new context arn:aws:eks:us-east-1:************:cluster/dev to /home/ec2-user/.kube/config

Let’s also look at the networking: eksctl created a new VPC with subnets, route tables, an internet gateway, a NAT gateway, and other network components.
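
If you prefer the CLI, eksctl tags the resources it creates; assuming the usual alpha.eksctl.io/cluster-name tag, the new VPC can be looked up with a filter like this:

[ec2-user@ip-172-31-6-155 ~]$ aws ec2 describe-vpcs --region us-east-1 --filters Name=tag:alpha.eksctl.io/cluster-name,Values=dev --query "Vpcs[].VpcId"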

Deploying an Application on EKS Cluster

Before deploying anything, we will install kubectl; the warning highlighted in red earlier pointed out that it is not installed yet.

[ec2-user@ip-172-31-6-155 ~]$ curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.3/2023-11-14/bin/linux/amd64/kubectl
[ec2-user@ip-172-31-6-155 ~]$ curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.3/2023-11-14/bin/linux/amd64/kubectl.sha256
[ec2-user@ip-172-31-6-155 ~]$ sha256sum -c kubectl.sha256
kubectl: OK

[ec2-user@ip-172-31-6-155 ~]$ chmod +x ./kubectl
[ec2-user@ip-172-31-6-155 ~]$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
[ec2-user@ip-172-31-6-155 ~]$ echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc

[ec2-user@ip-172-31-6-155 ~]$ kubectl version --client
Client Version: v1.28.3-eks-e71965b
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
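
Since the kubeconfig was already updated with aws eks update-kubeconfig, a quick sanity check is to list the worker nodes; the three t3.micro nodes should report a Ready status:

[ec2-user@ip-172-31-6-155 ~]$ kubectl get nodes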

Let’s use the yelb application as an example. We will create a namespace and deploy the application with a LoadBalancer service.

[ec2-user@ip-172-31-6-155 ~]$ kubectl create ns yelb
namespace/yelb created

[ec2-user@ip-172-31-6-155 ~]$ kubectl -n yelb apply -f https://raw.githubusercontent.com/lamw/yelb/master/deployments/platformdeployment/Kubernetes/yaml/yelb-k8s-loadbalancer.yaml
service/redis-server created
service/yelb-db created
service/yelb-appserver created
service/yelb-ui created
deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created

Verify that all Yelb Pods are running.

[ec2-user@ip-172-31-6-155 ~]$ kubectl -n yelb get pods

NAME                              READY   STATUS    RESTARTS   AGE
redis-server-78d46b7f7b-qjp58     1/1     Running   0          70s
yelb-appserver-5ff999c576-zk8mt   1/1     Running   0          69s
yelb-db-599794658c-drgfs          1/1     Running   0          70s
yelb-ui-7f8ccf4cdf-mmljx          1/1     Running   0          70s

Verify that the Yelb UI service has been allocated an external address.

[ec2-user@ip-172-31-6-155 ~]$ kubectl -n yelb get svc

NAME             TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)        AGE
redis-server     ClusterIP      10.100.226.176   <none>                                                                     6379/TCP       2m1s
yelb-appserver   ClusterIP      10.100.230.220   <none>                                                                     4567/TCP       2m
yelb-db          ClusterIP      10.100.97.240    <none>                                                                     5432/TCP       2m
yelb-ui          LoadBalancer   10.100.98.73     a3bd1e9f9a5c64342bcdaa3451fe564f-1347224159.us-east-1.elb.amazonaws.com   80:31185/TCP   2m

Now I can see that the application is available through an AWS load balancer with a DNS name.
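
You can also check from the terminal with curl, using the EXTERNAL-IP hostname from the kubectl get svc output above (it can take a minute or two for the new ELB DNS name to start resolving):

[ec2-user@ip-172-31-6-155 ~]$ curl -I http://a3bd1e9f9a5c64342bcdaa3451fe564f-1347224159.us-east-1.elb.amazonaws.com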

At this point, you should have a good understanding of how to create EKS clusters in your AWS environment.
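
When you are done experimenting, remember to clean up so the worker nodes and load balancer stop accruing charges. Deleting the yelb namespace first removes the ELB created by the LoadBalancer service, and eksctl then tears down the CloudFormation stacks it created:

[ec2-user@ip-172-31-6-155 ~]$ kubectl delete ns yelb
[ec2-user@ip-172-31-6-155 ~]$ eksctl delete cluster --name dev --region us-east-1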
