Terraform: Deploying an EKS Cluster.

Matthew Mendez
DevOps Engineer Documentation
5 min read · Jul 22, 2021

The purpose of this demo is to deploy an AWS EKS cluster via Terraform.

• Push our code to GitHub.
• Connect Terraform Cloud to GitHub.
• Use Terraform Cloud as a CI/CD tool to check our build.

1. What is AWS EKS?

Amazon EKS is a managed service that makes it easier to run Kubernetes on AWS. Through EKS, organizations can run Kubernetes without installing and operating their own Kubernetes control plane or worker nodes. Simply put, EKS is a managed containers-as-a-service (CaaS) offering that drastically simplifies Kubernetes deployment on AWS.

2. Prerequisites.

To follow along, you will need:

• An AWS account with an IAM user’s access key ID and secret access key.
• Terraform installed locally.
• A GitHub account.
• A Terraform Cloud account.

3. Let’s build!

We’ll start our project by building our code locally via Terraform modules. The HashiCorp Terraform Module Registry gives Terraform users easy access to templates for setting up and running their infrastructure with verified and community modules. We’ll be accessing our modules here.

Let’s begin by making a new directory.

mkdir terraform-eks

Change into the terraform-eks directory.

cd terraform-eks

Open up your text editor, create a file named main.tf, and copy the following:
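A minimal sketch of main.tf might look like this; the version pins are assumptions, and the region matches the us-east-2 region we use later in Terraform Cloud:

# main.tf - declares the providers this demo needs
terraform {
  required_version = ">= 0.14" # assumed; any recent Terraform should work

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # assumed pin
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}

provider "aws" {
  region = "us-east-2" # same region as the AWS_DEFAULT_REGION we set later
}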

This will allow Terraform to interact with AWS and its resources.

I suggest running terraform init after adding each provider to make sure everything initializes properly.

  • terraform init

After a successful init, open up another file named vpc.tf and copy the following.
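A sketch of vpc.tf built on the community VPC module; the CIDR ranges, subnet layout, and names are assumptions you can adjust:

# vpc.tf - provisions a VPC via the terraform-aws-modules VPC module
data "aws_availability_zones" "available" {}

# Random suffix so each cluster gets a unique name.
resource "random_string" "suffix" {
  length  = 8
  special = false
}

locals {
  cluster_name = "eks-demo-${random_string.suffix.result}"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0" # assumed pin

  name                 = "eks-demo-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # Tags that let EKS and Kubernetes discover the subnets.
  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}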

This will provision a VPC and its resources via the VPC module. We also added a random_string resource to include in our cluster name.

Hit a terraform init to make sure everything is running properly.

Next, let’s configure our EKS cluster via the EKS module. Open up a file named eks.tf and copy the following. This will provision the resources to set up an EKS cluster, including our worker groups.
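A sketch using the v17-era syntax of the community EKS module (later major versions renamed the worker-group arguments); the Kubernetes version, instance types, and group sizes are assumptions:

# eks.tf - provisions the EKS control plane and two worker groups
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0" # assumed pin; v18+ changed this syntax

  cluster_name    = local.cluster_name
  cluster_version = "1.20" # assumed Kubernetes version
  vpc_id          = module.vpc.vpc_id
  subnets         = module.vpc.private_subnets

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      asg_desired_capacity          = 1
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
    },
  ]
}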

Next, let’s configure our Kubernetes provider. Open up kubernetes.tf and copy the following. Make sure to include the Kubernetes provider or you will receive an error while connecting your resources.
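A sketch of kubernetes.tf that points the Kubernetes provider at the cluster the EKS module creates:

# kubernetes.tf - authenticates the Kubernetes provider against the new cluster
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}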

Next, we need to open up security.tf and configure our security groups for our worker groups.
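A sketch of security.tf; the management CIDR blocks allowed to reach the workers over SSH are assumptions:

# security.tf - extra security groups attached to the worker groups
resource "aws_security_group" "worker_group_mgmt_one" {
  name_prefix = "worker_group_mgmt_one"
  vpc_id      = module.vpc.vpc_id

  # Allow SSH from an assumed internal range.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }
}

resource "aws_security_group" "worker_group_mgmt_two" {
  name_prefix = "worker_group_mgmt_two"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["192.168.0.0/16"]
  }
}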

Next, let’s configure some outputs that we’d like to see after a terraform apply. Open up outputs.tf and copy the following.
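A sketch of outputs.tf matching the two outputs described below:

# outputs.tf - values printed after terraform apply
output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = local.cluster_name
}

output "cluster_endpoint" {
  description = "Endpoint of the EKS control plane"
  value       = module.eks.cluster_endpoint
}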

This will output the name of our cluster and expose its endpoint.

Hit a terraform init to make sure everything is working properly.

Hit a terraform validate to make sure our code is valid.

GitHub.

We’re now ready to push our code to GitHub.

In the upper-right corner of your GitHub page, click on New repository.

Choose a repository name and click Create repository.

After the last step, you will be forwarded to the starting page of your new GitHub repository.

Before we begin pushing our code to GitHub, we’ll need to add a .gitignore file to ignore the files we don’t want to include. Open up a file named .gitignore and copy the following code.
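A typical Terraform .gitignore; the exact entries are an assumption, the key point being to keep local state and the .terraform cache out of version control:

# .gitignore
.terraform/
*.tfstate
*.tfstate.*
crash.log
*.tfvars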

We’re ready to push our code to GitHub. Follow the quick-setup steps shown on your new repository’s page, as sketched below.
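For reference, the standard quick-setup commands look like this (the repository URL is a placeholder; use the one GitHub shows you):

git init
git add .
git commit -m "terraform eks demo"
git branch -M main
git remote add origin https://github.com/<your-username>/terraform-eks.git
git push -u origin main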

Here is the link to my code at my GitHub page.

We’re now ready to connect Terraform Cloud to GitHub so it can provision our infrastructure as a CI/CD tool.

Sign in to Terraform Cloud and create a new workspace.

Choose the version control workflow and GitHub as your provider.

Click on the repo that we just created in GitHub.

Click Create workspace.

Configure Environment Variables.

You will need to add four variables here.

  • Add your AWS_ACCESS_KEY_ID. Mark as sensitive.
  • Add your AWS_SECRET_ACCESS_KEY. Mark as sensitive.
  • Add your AWS_DEFAULT_REGION. (Mine is set to us-east-2.)
  • Add CONFIRM_DESTROY. Set the value to 1. This is for cleaning up our environment after the demo.

Once the variables are set:

  • Under Actions, click on Start new plan.
  • Type "test 1" when asked for a reason for starting a new plan.
  • Click Start plan.

Terraform Cloud will now run the plan.

Click on confirm plan and add a comment.

Terraform Cloud has now provisioned our infrastructure.

Below you should see our outputs.

Let’s go to the AWS EKS console for additional confirmation that our resources have been deployed.

Here is our cluster that we have created in the EKS console.

Along with the other resources that we created via Terraform:

• VPC
• Worker groups
• Auto Scaling groups

Thanks for following along and checking out another project of mine! You can now queue your destroy plan and apply it via the Terraform Cloud console.

Check out my GitHub here.

Connect with me on LinkedIn here.
