Creating a Kubernetes Cluster with AWS EKS

AWS provides a comprehensive guide to getting started with EKS.

The next steps generally follow that guide; the difference is that the provisioning play is (or can be) implemented purely with Terraform, without CloudFormation templates.

This has advantages for the way we do deployments, but in your scenario CloudFormation templates might have advantages too.

The first step is creating a VPC with public and private subnets for your Amazon EKS cluster.

The simplest VPC would have all subnets public, as seen in the CloudFormation template ( )

But if your goal is to reuse this in production, the reasonable choice is a VPC with both private and public parts.

At the moment there are limitations:

Let’s proceed with the recommended network architecture, which uses private subnets for your worker nodes and public subnets for Kubernetes to create internet-facing load balancers within.

Step 1.1: Create an Elastic IP Address for Your NAT Gateway(s)

Worker nodes in private subnets require a NAT gateway for outbound internet access. A NAT gateway requires an Elastic IP address in your public subnet, but the VPC wizard does not create one for you. Create the Elastic IP address before running the VPC wizard.

To create an Elastic IP address

Open the Amazon VPC console at

1. In the left navigation pane, choose Elastic IPs.
2. Choose Allocate new address, Allocate, Close.
3. Note the Allocation ID for your newly created Elastic IP address; you enter this later in the VPC wizard.

Step 1.2: Run the VPC Wizard

The VPC wizard automatically creates and configures most of your VPC resources for you.

To run the VPC wizard

1. In the left navigation pane, choose VPC Dashboard.
2. Choose Start VPC Wizard, VPC with Public and Private Subnets, Select.
3. For VPC name, give your VPC a unique name.
4. For Elastic IP Allocation ID, choose the ID of the Elastic IP address that you created earlier.
5. Choose Create VPC. When the wizard is finished, choose OK.
6. Note the Availability Zone in which your VPC subnets were created. Your additional subnets should be created in a different Availability Zone.

The appropriate Terraform part would be
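As an illustration, a minimal Terraform sketch of such a VPC might look like the following. All names, CIDR ranges, the cluster name "my-cluster", and the availability zone are assumptions; adjust them to your environment. Route tables (public via the internet gateway, private via the NAT gateway) are omitted for brevity.

```hcl
# Hypothetical minimal VPC with one public and one private subnet.
# The kubernetes.io/cluster/<name> tag is required so EKS can discover the subnets.
resource "aws_vpc" "eks" {
  cidr_block = "10.0.0.0/16"

  tags = {
    "Name"                             = "eks-vpc"
    "kubernetes.io/cluster/my-cluster" = "shared"
  }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.eks.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    "kubernetes.io/cluster/my-cluster" = "shared"
  }
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.eks.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    "kubernetes.io/cluster/my-cluster" = "shared"
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.eks.id
}

# Elastic IP for the NAT gateway, equivalent to step 1.1 above
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}
```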

Step 2: Create Your Amazon EKS Service Role

Open the IAM console at

Choose Roles, then Create role.

Choose EKS from the list of services, then Allows Amazon EKS to manage your clusters on your behalf for your use case, then Next: Permissions.

Choose Next: Review.

For Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.

A Terraform definition for the role would be as follows.
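A sketch of that role in Terraform might look like this. The resource names are assumptions; the two attached managed policies (AmazonEKSClusterPolicy and AmazonEKSServicePolicy) are the ones the EKS use case grants.

```hcl
# Hypothetical EKS service role; the name matches the console example above.
resource "aws_iam_role" "eks_service_role" {
  name = "eksServiceRole"

  # Allow the EKS service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "eks_service" {
  role       = aws_iam_role.eks_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
}
```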

Step 3: Create a Control Plane Security Group

When you create an Amazon EKS cluster, your cluster control plane creates elastic network interfaces in your subnets to enable communication with the worker nodes. You should create a security group that is dedicated to your Amazon EKS cluster control plane, so that you can apply inbound and outbound rules to govern what traffic is allowed across that connection. When you create the cluster, you specify this security group, and that is applied to the elastic network interfaces that are created in your subnets.

To create a control plane security group

In the left navigation pane, for Filter by VPC, select your VPC and choose Security Groups, Create Security Group.


If you don’t see your new VPC here, refresh the page to pick it up. Fill in the following fields and choose Yes, Create:

- For Name tag, provide a name for your security group. For example, -control-plane.
- For Description, provide a description of your security group to help you identify it later.
- For VPC, choose the VPC that you are using for your Amazon EKS cluster.
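In Terraform, a sketch of the control-plane security group could look like this, assuming the VPC is managed in Terraform under the name aws_vpc.eks (an assumption). The ingress rules governing worker-node communication are typically added later, in the worker node step.

```hcl
# Hypothetical control-plane security group; inbound rules for worker
# nodes are added when the worker nodes are defined.
resource "aws_security_group" "control_plane" {
  name        = "eks-control-plane"
  description = "Cluster communication with worker nodes"
  vpc_id      = aws_vpc.eks.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```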

Download the AWS IAM authenticator, along these lines:
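A sketch of the download for Linux amd64 is below. The version and S3 path are assumptions; check the EKS getting-started guide for the current URL for your platform.

```shell
# Assumed download URL (version/platform may differ); see the EKS user guide
curl -o aws-iam-authenticator \
  https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
sudo mv ./aws-iam-authenticator /usr/local/bin/
```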

Validate by running:

Check that you have the latest awscli installed; you should get output similar to the one below.
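For example, both checks together might look like this (the exact version strings in the output will depend on your installation):

```shell
# Verify the authenticator is on PATH, and that the CLI is recent enough
# to support the `aws eks` commands
aws-iam-authenticator help
aws --version
```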

Step 4: Create the EKS Cluster

Now it is time to launch the cluster, either via the UI or using the provisioning tool of your choice.

Pointing kubectl to the EKS cluster might seem a bit tricky, as by default AWS proposes holding the cluster info in separate files, like ~/.kube/eksconfig, and using environment variables to switch context: export KUBECONFIG=~/.kube/eksconfig

If you have multiple clusters configured, that will certainly not be convenient for you.

With the example above, you can validate whether you configured your cluster and kubectl correctly.
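A quick validation, assuming KUBECONFIG points at your EKS config, is listing the built-in service; if authentication works, the default kubernetes ClusterIP service is returned:

```shell
# Should list the default `kubernetes` service if kubectl and the
# aws-iam-authenticator are wired up correctly
kubectl get svc
```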

The Terraform part for the cluster-creation action would be
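A sketch of the cluster resource is below. The cluster name and the referenced role, security group, and subnet resource names are assumptions; substitute the identifiers you actually use.

```hcl
# Hypothetical EKS cluster definition; role/SG/subnet names are assumed
resource "aws_eks_cluster" "main" {
  name     = "my-cluster"
  role_arn = aws_iam_role.eks_service_role.arn

  vpc_config {
    security_group_ids = [aws_security_group.control_plane.id]
    subnet_ids         = [aws_subnet.public.id, aws_subnet.private.id]
  }
}
```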

Note that instead of patching the kubectl config, as per the original guide, you can just take ready-to-use file content from terraform output kubeconfig.

Steps 5 and 6: Launch Worker Nodes

For the time being, AWS recommends launching the stack using, for example, the following CloudFormation template

What you need from the outputs is to record the NodeInstanceRole for the node group that was created; you need this when you configure your Amazon EKS worker nodes.

On the Outputs pane you will see output like the following:

The final step is ensuring that worker nodes can join your cluster.

You will need a template for the Kubernetes configuration map.
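The aws-auth ConfigMap from the EKS getting-started guide looks like this; save it as, say, aws-auth-cm.yaml (the file name is an assumption), and replace the rolearn placeholder with the NodeInstanceRole ARN recorded above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```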

Amend the role ARN and apply the changes with:
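Assuming the config map was saved as aws-auth-cm.yaml, applying it and watching the nodes register looks like this:

```shell
# Apply the auth config map, then watch until nodes reach Ready state
kubectl apply -f aws-auth-cm.yaml
kubectl get nodes --watch
```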

In a few minutes your fleet should be ready.

If you want to avoid relying on the CloudFormation template by URL, and instead stick to fully scripted infrastructure:

Step 5: In this step we prepare the worker node and security groups.
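A sketch of the worker-node security group is below; resource names (including aws_vpc.eks and aws_security_group.control_plane) are assumptions. The port ranges follow the EKS guide's recommendation: nodes may talk freely to each other, and the control plane may reach the kubelet and pod ports.

```hcl
# Hypothetical worker-node security group
resource "aws_security_group" "worker_nodes" {
  name   = "eks-worker-nodes"
  vpc_id = aws_vpc.eks.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Allow worker nodes to communicate with each other
resource "aws_security_group_rule" "workers_self" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.worker_nodes.id
  source_security_group_id = aws_security_group.worker_nodes.id
}

# Allow the control plane to reach kubelets and pods on the workers
resource "aws_security_group_rule" "workers_from_control_plane" {
  type                     = "ingress"
  from_port                = 1025
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = aws_security_group.worker_nodes.id
  source_security_group_id = aws_security_group.control_plane.id
}
```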

Step 6: In this step we create the autoscaling group.
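A sketch of the launch configuration and autoscaling group is below. The AMI ID, instance type, sizes, cluster name, and the referenced instance profile, security group, and subnet names are all assumptions; the AMI must be the EKS-optimized AMI for your region, which ships the /etc/eks/bootstrap.sh script that joins nodes to the cluster.

```hcl
# Hypothetical launch configuration for EKS worker nodes
resource "aws_launch_configuration" "workers" {
  name_prefix          = "eks-workers-"
  image_id             = "ami-xxxxxxxx" # EKS-optimized AMI for your region (assumption)
  instance_type        = "m4.large"
  iam_instance_profile = aws_iam_instance_profile.workers.name
  security_groups      = [aws_security_group.worker_nodes.id]

  # Bootstrap the node into the cluster on first boot
  user_data = <<EOF
#!/bin/bash
/etc/eks/bootstrap.sh my-cluster
EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "workers" {
  name                 = "eks-workers"
  launch_configuration = aws_launch_configuration.workers.name
  min_size             = 1
  max_size             = 3
  desired_capacity     = 2
  vpc_zone_identifier  = [aws_subnet.private.id]

  # Required so the kubelet can associate the node with the cluster
  tag {
    key                 = "kubernetes.io/cluster/my-cluster"
    value               = "owned"
    propagate_at_launch = true
  }
}
```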

After this step, you should have a fully working EKS cluster. Upon a terraform apply run you should see something like

To make the cluster fully operational, after provisioning you should:

  1. Get the cluster kubectl config via

After that you should be able to execute commands against your EKS cluster.

  2. To allow nodes to join, you need to provide the cluster with the additional config map.
  3. You should be able to run pods and services.


Troubleshooting …

Now let’s start something.
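As a smoke test, you could launch something simple; the image and names here are just an illustrative assumption:

```shell
# Hypothetical smoke test: run a single nginx pod
kubectl run nginx --image=nginx --port=80
kubectl get pods
```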

If you see the Pending state,

you might want to check it immediately:

You might see something like

where the message will give you a hint about the issue.
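The usual way to surface that message is inspecting the pod's events (the pod name here is a placeholder):

```shell
# Scheduling/networking failures show up in the Events section
kubectl describe pod <pod-name>

# Or view recent cluster events chronologically
kubectl get events --sort-by=.metadata.creationTimestamp
```

For Pending pods on a fresh EKS cluster, the events typically point at nodes that never joined (a missing or wrong aws-auth config map) or at insufficient capacity in the autoscaling group.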

Troubleshooting hints from AWS can be found at

Software engineer, with project management background. Founder @ — cool automation for the people :) — have a problem that needs to be solved?