Provisioning EKS Cluster With Terragrunt — Part 2

Lidor Ettinger · NI Tech Blog · Nov 29, 2022

In part one of this blog post, we detailed the VPC structure in a cloud environment and broke the infrastructure down into three sections:

  • Environment variables that customize the cloud account
  • Module that refers to Terraform resources
  • Infra that binds the environment variables and the module

This strategy simplifies the cloud provisioning process: modules are reused, environment variables stay DRY, and we are left to focus on building the infra.

In this post we will provision an EKS cluster using Terragrunt and verify the cluster's health.

Define the environment variables module

Terragrunt's abstraction of the infrastructure allows us to reuse the common environment variables defined previously for the VPC without repeating ourselves.

The following section is similar to what we have already created:

terragrunt
└ terragrunt.hcl

Just keep the same terragrunt.hcl values.

  • Remember to export the environment variables: ACCOUNT_ID and BUCKET
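For reference, here is a minimal sketch of what that root terragrunt.hcl can look like, assuming the S3 remote-state setup from part one (the state key layout, region, and cluster values below are assumptions):

remote_state {
  backend = "s3"
  config = {
    bucket  = get_env("BUCKET")   # export BUCKET before running terragrunt
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}

locals {
  account_id      = get_env("ACCOUNT_ID")   # export ACCOUNT_ID before running terragrunt
  cluster_name    = "devops-us-east-1.k8s.local"
  cluster_version = "1.23"
  region          = "us-east-1"
}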

Define the EKS module

Configure the EKS resource that you are using:

terragrunt
└ modules
  └ cluster
    └ terragrunt.hcl

For example, a minimal sketch of modules/cluster/terragrunt.hcl (the module URL, version, hook name, and locals layout below are assumptions):
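terraform {
  # Terraform makes a GET request to this URL and fetches the module archive over HTTP
  # (the exact URL and version are assumptions; note the // subdirectory syntax)
  source = "https://github.com/terraform-aws-modules/terraform-aws-eks/archive/refs/tags/v18.30.0.zip//terraform-aws-eks-18.30.0"

  # Custom action that runs after `terragrunt apply`, from the working directory
  # of the config that includes this file (where script.sh lives)
  after_hook "export_kubeconfig" {
    commands = ["apply"]
    execute  = ["bash", "script.sh"]
  }

  # Shared variables read from the main terragrunt.hcl; script.sh uses the same values
  extra_arguments "shared_vars" {
    commands = get_terraform_commands_that_need_vars()
    env_vars = {
      CLUSTER_NAME = local.env_vars.locals.cluster_name
      AWS_REGION   = local.env_vars.locals.region
    }
  }
}

locals {
  env_vars = read_terragrunt_config(find_in_parent_folders())
}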

Our Terraform module configuration contains:

  • source — Terraform makes a GET request to the given URL and fetches the module archive over HTTP
  • after_hook — a Terragrunt feature that makes it possible to define custom actions to run before or after the terraform command
  • extra_arguments — a block that reads shared variables from the main terragrunt.hcl file; these environment variables are used in script.sh

Setup the EKS resource of the infrastructure

Configure the variables of your EKS resource:

terragrunt
└ infra
  └ eks
    ├ script.sh
    └ terragrunt.hcl

For example, a minimal sketch of infra/eks/terragrunt.hcl (the double-include pattern and the relative paths are assumptions):
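include "root" {
  path = find_in_parent_folders()
}

# Pull in the cluster module configuration defined above
include "cluster" {
  path = "${get_terragrunt_dir()}/../../modules/cluster/terragrunt.hcl"
}

# The VPC from part one; its outputs feed the inputs below
dependency "vpc" {
  config_path = "../vpc"
}

locals {
  env_vars = read_terragrunt_config(find_in_parent_folders())
}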

The EKS module has required variables that are needed to launch Kubernetes into an existing VPC. The following inputs are generated from the predefined variables:

inputs = {
  cluster_version = local.env_vars.locals.cluster_version
  cluster_name    = local.env_vars.locals.cluster_name
  vpc_id          = dependency.vpc.outputs.vpc_id
  subnet_ids      = dependency.vpc.outputs.private_subnets
}

After Hooks

We also decided to add an after_hook that exports the Kubernetes context to our local machine, which is essential for connecting to the EKS cluster.

The following script will run after the EKS cluster is launched, and it will create and export the cluster config, devops-us-east-1.k8s.local.config:
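A minimal sketch of what script.sh can look like, assuming the aws CLI is installed and that CLUSTER_NAME and AWS_REGION reach the hook's environment (e.g. via the module's extra_arguments or a prior export):

#!/bin/bash
set -e

# Write a dedicated kubeconfig for the new cluster,
# e.g. devops-us-east-1.k8s.local.config
export KUBECONFIG="${HOME}/.kube/${CLUSTER_NAME}.config"

# Fetch the cluster's connection details from AWS and write them to $KUBECONFIG
aws eks update-kubeconfig \
  --region "${AWS_REGION}" \
  --name "${CLUSTER_NAME}" \
  --kubeconfig "${KUBECONFIG}"

echo "kubeconfig written to ${KUBECONFIG}"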

Check the cluster status

Let’s check the Kubernetes cluster health:

curl -k $(cat $KUBECONFIG | grep -m 1 'server' | sed -n 's/server://p' | tr -d ' ')/livez\?verbose

Expected result:

[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
livez check passed

What we will do next…

In the next post we will create a dedicated node group and the internal services that are essential for monitoring our EKS cluster.
