Create EKS Fargate cluster with EKS Add-Ons & Expose Microservices using AWS Load Balancer Controller

SATYAM SAREEN
11 min read · Dec 31, 2023


Hello learners, today we’ll be creating an EKS Fargate cluster, and we’ll install some EKS add-ons on top of it. We’ll then finally install the AWS Load Balancer Controller to expose a microservice.

The diagram above should give a good idea of what we’ll be deploying today. I have shown a single public and a single private subnet to keep the diagram clean and concise, but the actual architecture will be highly available, with at least two public and two private subnets spread across Availability Zones.

We’ll be creating our entire stack using Terraform which is a popular “Infrastructure as Code” tool.

Our stack will be deployed in two steps. First we’ll create the cluster, and then the K8s resources. This split is needed because the Kubernetes and Helm providers in Terraform don’t get initialized properly if they have dynamic references to the kubeconfig file, cluster CA certificate, or cluster endpoint, all of which are only populated after the cluster is created. In a nutshell, Terraform at the time of this writing doesn’t support dynamic providers.

1.) Cluster Creation

Create a folder cluster-creation and the below files.

providers.tf declares the necessary terraform providers to create the resources.
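To make this concrete, here is a minimal sketch of what providers.tf might look like for this step (the provider versions and the region variable are my assumptions, not the author’s exact code):

terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4"
    }
  }
}

provider "aws" {
  region = var.region
}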

variables.tf declares the necessary input variables that will be used by the different cluster resources.

variables/dev.tfvars populates the above variables with the user-supplied values. Update the aws_account and vpc_name with your AWS account ID and VPC name.
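For illustration, the variable declarations and the matching tfvars entries could look roughly like this (the region variable and the sample values are assumptions):

# variables.tf (sketch)
variable "aws_account" {
  description = "AWS account ID the cluster is deployed into"
  type        = string
}

variable "vpc_name" {
  description = "Name tag of the VPC hosting the cluster"
  type        = string
}

variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

# variables/dev.tfvars (sketch)
aws_account = "123456789012"
vpc_name    = "my-dev-vpc"
region      = "us-east-1"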

data.tf has the necessary data sources to fetch information from external sources, e.g. your AWS account.
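A sketch of the kind of data sources involved (resource names are assumptions; the article later relies on a tls_certificate data source for the OIDC thumbprint and an http data source for the controller IAM policy, and the policy URL shown here is the upstream project’s published document, which may differ from the author’s exact source):

data "aws_caller_identity" "current" {}

# Look up the existing VPC by its Name tag
data "aws_vpc" "selected" {
  tags = {
    Name = var.vpc_name
  }
}

# Private subnets (no direct route to an Internet Gateway) for Fargate
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.selected.id]
  }
  tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}

# CA thumbprint of the cluster's OIDC issuer, used later in irsa.tf
data "tls_certificate" "eks_oidc" {
  url = aws_eks_cluster.this.identity[0].oidc[0].issuer
}

# IAM policy document for the AWS Load Balancer Controller
data "http" "lb_controller_policy" {
  url = "https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json"
}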

eks-cluster.tf defines the EKS cluster and the cluster role. Here the cluster endpoint is configured to be public, i.e. the API server will be accessible publicly. You can also optionally limit the CIDR blocks that can access your public API server endpoint. If you limit access to specific CIDR blocks, it is recommended that you also enable private endpoint access, or ensure that the CIDR blocks you specify include the addresses that the Fargate Pods use to access the public endpoint.

We are also defining a cluster role. We have followed the principle of least-privilege here. This permission set allows the Kubernetes cluster to manage nodes but doesn’t allow the legacy cloud provider to create load balancers. This will be taken care of by the AWS Ingress controller which will have its own separate IAM role, as depicted in the diagram above.
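A minimal sketch of the cluster and its role (resource names are assumptions, and the least-privilege permission set the author describes is only indicated by a comment):

resource "aws_iam_role" "cluster" {
  name = "fluxcd-fargate-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# The author attaches a least-privilege permission set here (enough to manage
# nodes, but without the ELB permissions the legacy cloud provider would need
# to create load balancers); the exact statements are omitted in this sketch.

resource "aws_eks_cluster" "this" {
  name     = "fluxcd-fargate"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids              = data.aws_subnets.private.ids
    endpoint_public_access  = true
    endpoint_private_access = false
    # Optionally restrict which CIDR blocks can reach the public endpoint
    public_access_cidrs     = ["0.0.0.0/0"]
  }
}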

create-kubeconfig.tf as the name suggests, creates a kubeconfig file using templatefile function and local_file resource which will be used later by the Kubernetes and Helm provider to authenticate and deploy the resource onto the cluster.

kubeconfig.tpl defines the structure/template of the kubeconfig file which will be populated once the cluster gets created.
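Roughly, these two pieces fit together as below (paths and variable names are assumptions; the template itself is a standard kubeconfig containing the cluster endpoint, CA data, and an exec block that calls aws eks get-token):

resource "local_file" "kubeconfig" {
  filename = "${path.module}/../k8s-resources/kubeconfig/config"

  content = templatefile("${path.module}/kubeconfig.tpl", {
    cluster_name     = aws_eks_cluster.this.name
    cluster_endpoint = aws_eks_cluster.this.endpoint
    cluster_ca       = aws_eks_cluster.this.certificate_authority[0].data
    region           = var.region
  })
}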

pod-execution-role.tf, as the name suggests, defines a pod execution role that is required to run Pods on AWS Fargate infrastructure. When pods are created on your Fargate infrastructure, various actions are performed behind the scenes before the pod is up and running, such as pulling container images from Amazon ECR or routing logs to other AWS services. The Amazon EKS pod execution role provides the IAM permissions to do this.
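A sketch of such a role (the author actually defines more than one pod execution role, one per profile; a single representative one is shown, and its name is an assumption):

resource "aws_iam_role" "fargate_pod_execution" {
  name = "fluxcd-fargate-pod-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_pod_execution.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}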

fargate-profile.tf, as the name suggests, defines multiple Fargate profiles that control which pods run on the Fargate infrastructure using namespace and label selectors. If a Pod matches multiple Fargate profiles, you can specify which profile a Pod uses by adding the following Kubernetes label to the Pod specification: eks.amazonaws.com/fargate-profile: my-fargate-profile. We also refer to the above-defined pod execution roles in the Fargate profiles below. Currently, only private subnets with no direct route to an Internet Gateway can be used in a Fargate profile. Below we have defined 3 profiles, for the fluxcd, aws-ingress-controller, and kube-dns namespaces.
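Two of the three profiles sketched for illustration (the fluxcd one follows the same pattern; the selector details are assumptions, e.g. CoreDNS pods actually live in kube-system carrying the k8s-app: kube-dns label):

resource "aws_eks_fargate_profile" "aws_ingress_controller" {
  cluster_name           = aws_eks_cluster.this.name
  fargate_profile_name   = "aws-ingress-controller"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = data.aws_subnets.private.ids

  selector {
    namespace = "aws-ingress-controller"
  }
}

resource "aws_eks_fargate_profile" "kube_dns" {
  cluster_name           = aws_eks_cluster.this.name
  fargate_profile_name   = "kube-dns"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = data.aws_subnets.private.ids

  selector {
    namespace = "kube-system"
    labels = {
      "k8s-app" = "kube-dns"
    }
  }
}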

irsa.tf, which stands for “IAM roles for service accounts”, defines a bunch of resources that let applications running inside pods make AWS API calls using IAM permissions. We define an OIDC identity provider by providing the OpenID Connect provider URL of our cluster and a thumbprint of the CA that signed the certificate used by the EKS cluster’s OpenID Connect provider, both of which we get from the tls_certificate data source. We then define a trust policy document that binds a service account to an IAM role, in our case the “aws-ingress-controller” service account in the “aws-ingress-controller” namespace to the “aws-ingress-controller-pod-role” IAM role. Behind the scenes, the service account token mounted onto the pod is exchanged via an AWS STS AssumeRoleWithWebIdentity API call (hence the audience in the trust policy is “sts.amazonaws.com”, the intended recipient of the mounted OIDC JSON web token). AWS trusts this JWT, thanks to the OIDC provider we defined, and returns temporary IAM role credentials (access key and secret key) which can then be used to make AWS API calls.
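A sketch of the IRSA plumbing described above (resource names are assumptions; the service account, namespace, and role names follow the article):

resource "aws_iam_openid_connect_provider" "eks" {
  url             = aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks_oidc.certificates[0].sha1_fingerprint]
}

data "aws_iam_policy_document" "ingress_controller_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    # Only the aws-ingress-controller service account in the
    # aws-ingress-controller namespace may assume this role
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_eks_cluster.this.identity[0].oidc[0].issuer, "https://", "")}:sub"
      values   = ["system:serviceaccount:aws-ingress-controller:aws-ingress-controller"]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_eks_cluster.this.identity[0].oidc[0].issuer, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "aws_ingress_controller_pod_role" {
  name               = "aws-ingress-controller-pod-role"
  assume_role_policy = data.aws_iam_policy_document.ingress_controller_trust.json
}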

aws-ingress-controller-policy.tf defines the IAM policy, which we get from the http data source declared in data.tf, and attaches it to the IAM role defined above. In a nutshell, this policy allows the “aws-ingress-controller” pods to create ALBs/NLBs whenever a Kubernetes ingress or service resource is created.
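Sketched roughly as below (note that the http data source exposes response_body on recent provider versions and body on older ones):

resource "aws_iam_policy" "aws_ingress_controller" {
  name   = "aws-ingress-controller-policy"
  policy = data.http.lb_controller_policy.response_body
}

resource "aws_iam_role_policy_attachment" "aws_ingress_controller" {
  role       = aws_iam_role.aws_ingress_controller_pod_role.name
  policy_arn = aws_iam_policy.aws_ingress_controller.arn
}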

vpc-cni-amazon-eks-addon.tf defines an EKS add-on for the VPC CNI plugin. EKS add-ons are a curated set of add-on software for Amazon EKS clusters. Their upgrades are managed by AWS, and we get notified whenever an upgrade for an EKS add-on is available. All EKS add-ons include the latest security patches and bug fixes, and we can even update specific Amazon EKS-managed configuration fields through the Amazon EKS API. In a nutshell, the operational overhead of running and managing supporting operational software on EKS is taken care of by AWS through EKS add-ons. Below we have defined a trust policy document that allows the vpc-cni pods running under the aws-node service account in the kube-system namespace to assume the “fluxcd-vpc-cni-plugin-role” IAM role. The policy allows assigning a private IPv4 or IPv6 address from your VPC to each Node, Pod, and service in your cluster. Notice how we have commented out the kubernetes_annotations resource; this is because the EKS add-on takes care of adding the eks.amazonaws.com/role-arn annotation to the aws-node service account.

Important note! There is no need to create the VPC CNI EKS add-on, as on Fargate this plugin is managed by AWS itself. Even if you try to install the add-on below, it won’t make any difference, because daemon-sets are not supported on Fargate (there is no concept of nodes, remember!). The above information is relevant for EKS running on node groups, so you can safely skip this tf file.
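If you are on node groups and do want this add-on, a minimal sketch looks like the following (the IAM role reference is the IRSA role for aws-node described above and is an assumption):

resource "aws_eks_addon" "vpc_cni" {
  cluster_name             = aws_eks_cluster.this.name
  addon_name               = "vpc-cni"
  # Role assumed by the aws-node service account via IRSA
  service_account_role_arn = aws_iam_role.vpc_cni_plugin.arn
}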

coredns-amazon-eks-addon.tf defines the EKS add-on for CoreDNS, an extensible DNS server that provides name resolution for all Pods in the cluster. We pass a configuration value setting the compute type to Fargate so that it runs on our cluster. There is a depends_on section as well, which explicitly tells Terraform to create the CoreDNS Fargate profile before installing the add-on; without the Fargate profile the CoreDNS pods wouldn’t be able to get scheduled on the Fargate infrastructure and would remain in a pending state.
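A sketch of this add-on (resource names are assumptions; the configuration value and the depends_on reflect what is described above):

resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.this.name
  addon_name   = "coredns"

  # Schedule the CoreDNS pods onto Fargate
  configuration_values = jsonencode({
    computeType = "Fargate"
  })

  # The matching Fargate profile must exist first, otherwise the
  # CoreDNS pods remain in a Pending state
  depends_on = [aws_eks_fargate_profile.kube_dns]
}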

This wraps up our first part of the setup. Now let's create the cluster!

Add the below tags to your VPC and subnets for the EKS cluster to function correctly. Make sure you use the correct names for your VPC, subnets, and EKS cluster in the tags. These tags also help with subnet auto-discovery for the AWS Ingress Controller: when the ingress controller tries to create a public ALB, it looks for the kubernetes.io/role/elb tag on the subnets with the value 1 or an empty string. A summary of the typical tags is sketched after the screenshots below.

vpc tags
public subnet tags
private subnet tags
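As a quick reference, the commonly required tags look like this (the cluster name fluxcd-fargate is taken from later in the post; double-check it against your own cluster name, and apply the tags to your existing VPC and subnets wherever your networking is managed):

locals {
  # VPC and all cluster subnets
  vpc_tags            = { "kubernetes.io/cluster/fluxcd-fargate" = "shared" }
  # Public subnets (internet-facing load balancers)
  public_subnet_tags  = { "kubernetes.io/role/elb" = "1" }
  # Private subnets (internal load balancers and Fargate pods)
  private_subnet_tags = { "kubernetes.io/role/internal-elb" = "1" }
}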

Run the below commands:

cd cluster-creation
terraform apply -var-file variables/dev.tfvars -refresh=false

Wait for terraform apply to succeed. It might take 5–15 minutes. After the apply is successful you will see a kubeconfig file created at the location k8s-resources\kubeconfig\config, which will be used by the Kubernetes and Helm providers of Terraform to deploy the resources onto our fluxcd-fargate EKS cluster.

2.) K8s Resources Creation

Create the below files under the k8s-resources folder.

providers.tf declares the necessary terraform providers to create the resources.
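Here the interesting part is pointing the Kubernetes and Helm providers at the kubeconfig generated in step 1. A minimal sketch (file path and provider versions are assumptions):

terraform {
  required_providers {
    aws        = { source = "hashicorp/aws", version = "~> 5.0" }
    kubernetes = { source = "hashicorp/kubernetes", version = "~> 2.24" }
    helm       = { source = "hashicorp/helm", version = "~> 2.12" }
  }
}

provider "aws" {
  region = var.region
}

provider "kubernetes" {
  config_path = "${path.module}/kubeconfig/config"
}

provider "helm" {
  kubernetes {
    config_path = "${path.module}/kubeconfig/config"
  }
}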

variables.tf declares the necessary input variables that will be used by the different k8s resources.

variables/dev.tfvars populates the above variables with the user-supplied values. Update the aws_account and vpc_name with your AWS account ID and VPC name.

data.tf has the necessary data sources to fetch information from external sources, e.g. your AWS account.

fluxcd-namespace.tf defines a k8s namespace with the name fluxcd.

nginx-service.tf defines a ClusterIP service with the name nginx in the fluxcd namespace and exposes port 80.

nginx-service-account.tf defines a service account with the name nginx in the fluxcd namespace.

nginx-deployment.tf defines an nginx deployment in the fluxcd namespace. This is the microservice that we will be exposing using the AWS Ingress Controller. It uses the public nginx:1.21.6 image.
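A sketch of the nginx pieces described above (labels and replica count are assumptions; names follow the article):

resource "kubernetes_namespace" "fluxcd" {
  metadata { name = "fluxcd" }
}

resource "kubernetes_service_account" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.fluxcd.metadata[0].name
  }
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.fluxcd.metadata[0].name
  }

  spec {
    replicas = 1
    selector {
      match_labels = { app = "nginx" }
    }
    template {
      metadata {
        labels = { app = "nginx" }
      }
      spec {
        service_account_name = kubernetes_service_account.nginx.metadata[0].name
        container {
          name  = "nginx"
          image = "nginx:1.21.6"
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.fluxcd.metadata[0].name
  }
  spec {
    type     = "ClusterIP"
    selector = { app = "nginx" }
    port {
      port        = 80
      target_port = 80
    }
  }
}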

aws-ingress-controller.tf defines a bunch of resources. Let’s take a look at them one by one.
Firstly we define a k8s namespace with the name aws-ingress-controller.
We then define a k8s service account and provide the AWS Ingress Controller IAM role ARN as the eks.amazonaws.com/role-arn annotation, which lets the controller pod assume this role and create/update load balancers on your behalf.
Then we define a k8s cluster role, give it the appropriate permissions, and bind it to the aws-ingress-controller service account.
We then define a helm chart that creates the actual controller deployment along with some other resources, including a custom resource definition, TargetGroupBinding. We also pass input values to the helm chart such as the service account name, VPC ID, cluster name, region, etc.
We then finally define the ingress resource, which exposes the nginx service at path “/” on port 80. The AWS Ingress Controller continuously watches for changes in ingress resources that have their ingress class set to “alb” and then creates application load balancers in AWS with the appropriate listener rules and target groups as described in the ingress resource. Note that we have given an annotation of alb.ingress.kubernetes.io/scheme as internet-facing; this will create the load balancer’s ENIs in the public subnets, which is necessary as the traffic is coming from the internet.
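A sketch of the Helm release and the ingress (the chart values and annotations shown are the commonly used ones for this chart; names follow the article):

resource "helm_release" "aws_ingress_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "aws-ingress-controller"

  set {
    name  = "clusterName"
    value = "fluxcd-fargate"
  }
  set {
    name  = "serviceAccount.create"
    value = "false"
  }
  set {
    name  = "serviceAccount.name"
    value = "aws-ingress-controller"
  }
  set {
    name  = "region"
    value = var.region
  }
  set {
    name  = "vpcId"
    value = data.aws_vpc.selected.id
  }
}

resource "kubernetes_ingress_v1" "nginx" {
  metadata {
    name      = "nginx"
    namespace = "fluxcd"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip" # Fargate pods must be IP targets
    }
  }

  spec {
    ingress_class_name = "alb"
    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "nginx"
              port { number = 80 }
            }
          }
        }
      }
    }
  }
}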

This wraps up our second part of the setup. Now let’s deploy these resources!

Run the below commands. Wait for terraform apply to succeed. It might take 5–15 minutes.

cd ../k8s-resources
terraform apply -var-file variables/dev.tfvars -refresh=false

Now let’s take a look at the below screenshots, to get a better understanding of our entire setup!

CoreDNS Add-On is installed and is in an “Active” state.

All 3 Fargate profiles were created and are in an Active state.

A total of 5 pods are running, for CoreDNS, AWS ingress controller, and Nginx.

The AWS Ingress controller has created the below application load balancer as per the configuration we had passed in the ingress resource. Note that the name of the load balancer is derived from the namespace name in which we had created the ingress resource plus the name of the ingress resource itself.

A listener is created, accepting traffic on port 80. Two listener rules are also created, forwarding traffic with the path /* to the nginx target group. Note that a 404 default rule was also created, which might confuse some readers; it exists because we didn’t specify any default_backend in the ingress resource.

A target group is also created, accepting traffic on port 80. Our nginx pod was successfully registered with the target group and is in a healthy state. The target type is IP, and the registered IP is that of the nginx pod itself, as can be confirmed in the screenshots below.

registered target IP address
nginx pod IP

The AWS LoadBalancer controller internally uses a custom resource called TargetGroupBinding to support the functionality for Ingress and Service resources. It automatically creates TargetGroupBinding in the same namespace as the target service used in the ingress resource.
You can also use a TargetGroupBinding to expose your pods using an existing ALB TargetGroup or NLB TargetGroup. This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with Kubernetes Service.
In a nutshell, it is also a controller within the AWS LoadBalancer controller that watches for changes in your backend pods: whenever a new pod is created or an existing pod is deleted in your backend service, it updates the target group in AWS by registering or de-registering the target pods.
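For illustration, this is roughly what a hand-written TargetGroupBinding would look like if you wanted to attach pods to an existing target group (the ARN is a placeholder); the controller creates an equivalent object automatically for the ingress above:

resource "kubernetes_manifest" "nginx_tgb" {
  manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"
    metadata = {
      name      = "nginx-existing-tg"
      namespace = "fluxcd"
    }
    spec = {
      # Placeholder ARN of an existing target group
      targetGroupARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-existing-tg/0123456789abcdef"
      targetType     = "ip"
      serviceRef = {
        name = "nginx"
        port = 80
      }
    }
  }
}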

If we peek into the AWS ingress controller pod logs, we can see it is creating multiple resources such as load balancer, target group, listener rule, etc.

Finally, let’s hit the load balancer DNS and see if our Nginx pod responds.

Whoa, we got the Nginx welcome page!

Now let’s hit our load balancer at a path that we know doesn’t exist in our backend pod, just to get a fun 404 error!

The same can be observed from our Nginx pod logs, the requests are going through!

To keep your AWS bills in check, destroy the infrastructure as soon as you are done creating and testing the entire setup.

Run the below commands. Destroy the K8s resources first and then the cluster, since Terraform needs a reachable cluster to delete the K8s resources.

cd k8s-resources
terraform destroy --auto-approve -var-file variables/dev.tfvars
cd ../cluster-creation
terraform destroy --auto-approve -var-file variables/dev.tfvars
yay!

This marks the end of our “Create EKS Fargate cluster with EKS Add-Ons & Expose Microservices using AWS Load Balancer Controller” blog.
If you have any questions/suggestions please add them in the comments.
If you learned anything new today, please consider giving a clap👏, it keeps me motivated to write more AWS content. 😀
