AWS Workshops DIY — EKS Workshop — 35. Running Workload Pods on Amazon EKS w/ Serverless Fargate

Hands-on demonstration of enabling Fargate in an Amazon EKS cluster, running workload Pods, and managing resources on Fargate compute instances.

John David Luther
The AWS Way
11 min read · Mar 4, 2024


Your “Cloud Zen Life” ☁️🧘‍♂️🌿 wish is my command! — The AWS Fargate Genie 🧞‍♂️

📌 Table of Contents

  1. Introduction
  2. Rules of Engagement — Running Amazon EKS Pods on Fargate
  3. Running on Amazon EKS w/ Serverless Fargate — Theory Notes & References
  4. Running on Amazon EKS w/ Serverless Fargate — Implementation Hands-On
  5. Conclusion and Next Steps
eksworkshop.com/docs/fundamentals/fargate

✴️ Introduction

Taking into account AWS Fargate considerations, it surely sounds like AWS Fargate possesses “genie”-like power: it provisions on-demand, right-sized compute capacity for containers, lifting my burden of choosing server types, provisioning, configuring, and packing clusters, and it seamlessly scales compute in real time to meet application demands.

Given the 👆 power Fargate possesses, as a responsible Amazon EKS operator with a growing container footprint in the organization, it’s strategically imperative to mix Fargate into the overall design and architecture of the application workloads, without a doubt!

Even without a deeper understanding of the Amazon EKS Fargate architecture, the diagram below tells the tale transparently: Fargate is indeed a less burdensome affair. Of course, there are limitations to operating Fargate, just as with that other fast-growing Serverless compute technology, Lambda. Architecturally speaking, based on my understanding and personal experience, I always default to designing my application architecture as if it were 100% buildable on Serverless, and then pare down expectations based on what’s possible. In other words, I strive to leverage Serverless to its highest limits while staying cost-effective, and only then shift to a different strategy, not vice versa.

💡 How Fargate is designed to work in Amazon EKS👇—

💦 Amazon EKS integrates Kubernetes with Fargate by using controllers that are built by AWS using the upstream, extensible model provided by Kubernetes.

💦 These controllers run as part of the Amazon EKS-managed Kubernetes control plane and are responsible for scheduling native Kubernetes Pods onto Fargate.

💦 The Fargate controllers include a new scheduler that runs alongside the default Kubernetes scheduler in addition to several mutating and validating admission controllers.

💦 When a Pod meets the criteria for running on Fargate (based on Fargate profile selection), the Fargate controllers running in the cluster recognize, update, and schedule the Pod onto Fargate.

✴️ Rules of Engagement — Running Amazon EKS Pods on Fargate

There are pre-defined rules of engagement for running Amazon EKS cluster Pods on Fargate. Let’s go through them point by point, set expectations for our sample application demonstration on Fargate, and then do the hands-on code execution in the Implementation section.

  1. There must be at least one Fargate profile that specifies which Pods use Fargate when launched. Normally, Kubernetes administrators use a Fargate profile’s selectors to declare which Pods run on Fargate.
  2. There can be a maximum of five selectors per profile. Each selector must contain a namespace and can optionally include key-value pair labels.
  3. Pods that match a selector using the namespace and labels are scheduled on Fargate. If a namespace selector is defined without labels, Amazon EKS attempts to schedule all the Pods that run in that namespace onto Fargate using the profile. If a to-be-scheduled Pod matches any of the selectors in the Fargate profile, then that Pod is scheduled on Fargate.
  4. If a Pod matches multiple Fargate profiles, you can specify which profile the Pod uses by adding the eks.amazonaws.com/fargate-profile: my-fargate-profile label to the Pod specification. The Pod must match a selector in that profile to be scheduled onto Fargate. Note that Kubernetes affinity/anti-affinity rules neither apply to nor are necessary with Amazon EKS Fargate Pods.
  5. A Fargate profile needs a Pod Execution Role for the Amazon EKS components that run on the Fargate infrastructure using the profile. It’s added to the cluster’s Kubernetes Role-Based Access Control (RBAC) for authorization. That way, the kubelet that runs on the Fargate infrastructure can register with the Amazon EKS cluster and appear in the cluster as a node. The Pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories.
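The rules above can also be exercised straight from the eksctl CLI instead of a manifest. The sketch below only builds and prints the command for inspection; the cluster and profile names are illustrative placeholders, and the flags reflect eksctl’s documented fargateprofile options:

```shell
# Placeholder names for illustration only
CLUSTER_NAME="my-cluster"
PROFILE_NAME="checkout-profile"

# eksctl's flags mirror the rules above: one namespace per selector,
# plus optional key=value labels (see eksctl's fargateprofile docs)
CREATE_CMD="eksctl create fargateprofile --cluster $CLUSTER_NAME --name $PROFILE_NAME --namespace checkout --labels fargate=yes"

# Print the command rather than running it against a live account
echo "$CREATE_CMD"
```

On a live cluster you would run the printed command directly; eksctl then creates the profile and waits for it to become active.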

In our hands-on demonstration, we’ll do the following:

1️⃣ Add a Fargate profile to the EKS cluster with the namespace and label. We also specify the private subnets of the VPC to place Pods in and the IAM role for the Fargate infrastructure to pull images from ECR, write logs to CloudWatch, etc.

2️⃣ Schedule the Sample Retail Application’s Checkout microservice on Fargate. Once scheduled, we’ll examine the Pod’s description and Events to see the Pod’s scheduling on Fargate and Node description to learn more about the underlying EC2 instance.

3️⃣ Control Resource Allocation of the Fargate instance through resource request specification in the Pod in terms of CPU and memory and validate it post-resource allocation by looking at the Pod’s annotations.

4️⃣ Test horizontal scaling of Pods along with the underlying Fargate compute nodes. Since we don’t have visibility and control over Fargate control plane activities, we’ll only scale out/in our Kubernetes replicas and let Fargate handle the scaling process automatically as it deems fit.
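The mechanism behind 2️⃣ is simply a Pod-template label that the Fargate profile selects on. As a hedged sketch (deployment and namespace names assumed from the workshop), the workshop’s kustomization is roughly equivalent to this one-off JSON merge patch:

```shell
# JSON merge patch adding the label the Fargate profile selects on
PATCH='{"spec":{"template":{"metadata":{"labels":{"fargate":"yes"}}}}}'

# Against a live cluster (not run here; names assumed from the workshop):
# kubectl patch deployment checkout -n checkout --type merge -p "$PATCH"

# Sanity-check the patch shape locally with jq
echo "$PATCH" | jq -r '.spec.template.metadata.labels.fargate'   # yes
```

The patch triggers a rollout, and the replacement Pods carry the label the profile’s selector matches on.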

✴️ Running on Amazon EKS w/ Serverless Fargate — Theory Notes & References

  1. Fargate EKS Workshop Page — The workshop page, in the usual manner, provides comprehensive coverage of both concepts, relevant training links, and hands-on code demonstrating an end-to-end working example of running an Amazon EKS application workload on Fargate.
  2. AWS Fargate Amazon EKS Documentation — The source of truth: the AWS Fargate section of the Amazon EKS User Guide. Stay in touch with this page for the latest information, updates, and announcements. I’ve mentioned it before, but it’s worth repeating: AWS Fargate considerations must be read and well understood to guide your Fargate decisions. It’s a lot of bullets, but they’re nicely explained, digestible nuggets.
  3. AWS Fargate Hub Page — Refer to this portal for a general understanding of AWS Fargate as a Container Serverless solution, how it operates, and how it fits in the overall AWS Cloud ecosystem.
  4. AWS Fargate Pricing — While it’s easy to get excited about using AWS Fargate to take advantage of the Serverless framework, keep 👀 on the pricing, as it eventually makes or breaks the decision. Of course, you’ll also want to consider the cost of effort, security risk, reputation, etc., to come up with your own cost model to drive your decisions!
  5. [AWS Blog] Building and deploying Fargate with EKS in an enterprise context using the AWS Cloud Development Kit and cdk8s+ — This is a good blog despite its implementation focus using CDK. Refer to it for a good conceptual understanding.

✴️ Running on Amazon EKS w/ Serverless Fargate — Implementation Hands-On

Let’s back up the conceptual understanding with hands-on implementation. In the code frame below, after the initial cluster-building and prep step, watch for the four steps described in the Rules of Engagement section earlier.

STEP 1: Build Cluster and Prepare Environment

STEP 2: Enabling Fargate

STEP 3: Scheduling on Fargate

STEP 4: Resource Allocation

STEP 5: Scaling the Workload

👉 Build a Kubernetes cluster first — Begin by firing off the cluster with the link below (steps 1 & 2 inside the post), and then follow all the steps, as given.

#
# EKS WORKSHOP : FUNDAMENTALS MODULE
# Fargate
#
# Ref: https://www.eksworkshop.com/docs/fundamentals/fargate/
#

#
# STEP 1: Build Cluster and Prepare Environment
#

# Pre-Req: Create Amazon EKS Cluster following EKS Workshop Chapter-2
# https://medium.com/the-aws-way/aws-workshops-diy-eks-workshop-2-lets-cluster-with-eksctl-e129a4a3be9b

# Prepare environment for the Fargate module
#
prepare-environment fundamentals/fargate

# The prep step creates an IAM role to be used by Fargate
# Let's see its policies
# The following code, courtesy of ChatGPT
echo "FARGATE_IAM_PROFILE_ARN: [$FARGATE_IAM_PROFILE_ARN]"
ROLE_NAME=$(aws iam list-roles --query "Roles[?Arn=='$FARGATE_IAM_PROFILE_ARN'].RoleName" --output text)
echo "ROLE_NAME: [$ROLE_NAME]"

# List attached policies and capture the output
POLICIES=$(aws iam list-attached-role-policies --role-name "$ROLE_NAME" --output json)

# Iterate over the attached policies and print each policy document
for policy_name in $(echo "$POLICIES" | jq -r '.AttachedPolicies[].PolicyName'); do
  # Get the policy ARN
  POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='$policy_name'].Arn" --output text)

  # Get the default (active) version ID of the policy; note that the
  # last element of the Versions array isn't guaranteed to be the default
  VERSION_ID=$(aws iam list-policy-versions --policy-arn "$POLICY_ARN" --output json | jq -r '.Versions[] | select(.IsDefaultVersion) | .VersionId')

  # Get the policy document for that version
  POLICY_DOCUMENT=$(aws iam get-policy-version --policy-arn "$POLICY_ARN" --version-id "$VERSION_ID" --output json)

  # Output policy name and document
  echo "Policy Name: $policy_name"
  echo "Policy Document: $POLICY_DOCUMENT"
  echo "-----------------------------"
done

#
# STEP 2: Enabling Fargate
#

# Before scheduling Pods on Fargate in Amazon EKS cluster, define at least
# one Fargate profile specifying which Pods use Fargate when launched.
# Review manifest
cat ~/environment/eks-workshop/modules/fundamentals/fargate/profile/fargate.yaml

# This configuration creates a Fargate profile called checkout-profile with
# the following characteristics:
# 1. Target Pods in the checkout namespace that have the label fargate: yes
# 2. Place Pods in the private subnets of the VPC
# 3. Apply an IAM role to the Fargate infrastructure so that it can pull
# images from ECR, write logs to CloudWatch and so on

# Create the profile
cat ~/environment/eks-workshop/modules/fundamentals/fargate/profile/fargate.yaml \
| envsubst \
| eksctl create fargateprofile -f -

# Inspect the Fargate profile
# Check all the components including "selectors" (namespace and labels)
aws eks describe-fargate-profile \
--cluster-name $EKS_CLUSTER_NAME \
--fargate-profile-name checkout-profile
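```shell
# (Added sketch) The full describe output is verbose; jq can pull out just
# the selectors. The sample JSON below mimics the AWS CLI response shape
# so the filter can be illustrated without a live cluster.
SAMPLE_PROFILE='{"fargateProfile":{"fargateProfileName":"checkout-profile","selectors":[{"namespace":"checkout","labels":{"fargate":"yes"}}]}}'

# Live-cluster equivalent:
# aws eks describe-fargate-profile --cluster-name $EKS_CLUSTER_NAME \
#   --fargate-profile-name checkout-profile | jq '.fargateProfile.selectors'
echo "$SAMPLE_PROFILE" | jq -r '.fargateProfile.selectors[0].namespace'   # checkout
```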

#
# STEP 3: Scheduling on Fargate
#

# The checkout service is not running on Fargate yet
# Examine its labels
kubectl get pod -n checkout \
-l app.kubernetes.io/component=service -o json | \
jq '.items[0].metadata.labels'

# Looks like the Pod is missing the label 'fargate=yes'
# Fix that by updating the deployment for that service so that the Pod spec
# includes the label needed for the profile to schedule it on Fargate.
cat ~/environment/eks-workshop/modules/fundamentals/fargate/enabling/deployment.yaml

# Apply the kustomization to the cluster and monitor rollout status
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/fargate/enabling

kubectl rollout status -n checkout deployment/checkout --timeout=200s

# This causes the Pod specification of the checkout service to be updated,
# triggering a new deployment that replaces all the Pods. When the new Pods
# are scheduled, the Fargate scheduler matches the new label against
# the target profile and intervenes to ensure the Pods are scheduled
# on capacity managed by Fargate.

# To confirm that it worked, describe the new Pod and look at the Events
# Check the Scheduled event; it shows the Fargate node IP
kubectl describe pod -n checkout -l fargate=yes

# Inspect this node from kubectl to get additional information about
# the compute that was provisioned for this Pod
NODE_NAME=$(kubectl get pod -n checkout \
-l app.kubernetes.io/component=service -o json | \
jq -r '.items[0].spec.nodeName')

# Describing the node shows a number of insights into the nature of
# the underlying compute instance:
#
# - The label eks.amazonaws.com/compute-type confirms that
#   a Fargate instance was provisioned
# - The label topology.kubernetes.io/zone specifies the
#   availability zone (AZ) the Pod is running in
# - The System Info section shows that the instance runs Amazon Linux 2,
#   along with version information for system components like
#   containerd, kubelet, and kube-proxy
kubectl describe node $NODE_NAME

#
# STEP 4: Resource Allocation
#

# The primary dimensions of Fargate pricing are CPU and memory,
# and the amount of resources allocated to a Fargate instance depends on
# the resource requests specified by the Pod.
# There is a documented set of valid CPU and memory combinations for
# Fargate that should be considered when assessing whether a workload is
# suitable for Fargate.
# Ref: https://docs.aws.amazon.com/eks/latest/userguide/fargate-pod-configuration.html#fargate-cpu-and-memory
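```shell
# (Added sketch) The commonly documented vCPU/memory combinations can be
# summarized in a small helper. Values below are from the Fargate pod
# configuration docs as I know them; newer, larger sizes may exist, so
# treat this as an illustration, not the authoritative list.
fargate_mem_range() {
  case "$1" in
    0.25) echo "0.5GB, 1GB, 2GB" ;;
    0.5)  echo "1GB - 4GB (1GB increments)" ;;
    1)    echo "2GB - 8GB (1GB increments)" ;;
    2)    echo "4GB - 16GB (1GB increments)" ;;
    4)    echo "8GB - 30GB (1GB increments)" ;;
    *)    echo "unknown vCPU size" ;;
  esac
}

fargate_mem_range 0.25   # 0.5GB, 1GB, 2GB
```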

# Confirm what resources were provisioned for the Pod from
# the previous deployment by inspecting its annotations:
# Expect to see:
# "CapacityProvisioned": "0.25vCPU 1GB" (the minimum Fargate instance size)
kubectl get pod -n checkout \
-l app.kubernetes.io/component=service -o json | \
jq -r '.items[0].metadata.annotations'
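```shell
# (Added sketch) The CapacityProvisioned annotation can be pulled out
# directly. The JSON below mimics the Pod metadata shape so the jq filter
# can be demonstrated without a live cluster.
SAMPLE_POD='{"items":[{"metadata":{"annotations":{"CapacityProvisioned":"0.25vCPU 1GB"}}}]}'

# Live-cluster equivalent:
# kubectl get pod -n checkout -l app.kubernetes.io/component=service -o json \
#   | jq -r '.items[0].metadata.annotations["CapacityProvisioned"]'
echo "$SAMPLE_POD" | jq -r '.items[0].metadata.annotations["CapacityProvisioned"]'   # 0.25vCPU 1GB
```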


# Increase the amount of resources the checkout component is requesting
# and see how the Fargate scheduler adapts
# Review manifest
cat ~/environment/eks-workshop/modules/fundamentals/fargate/sizing/deployment.yaml

# Apply manifest and wait for the rollout to complete:
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/fargate/sizing

kubectl rollout status -n checkout deployment/checkout --timeout=200s

# Check again the resource allocated by Fargate
# Expect to see bigger resources allocated:
# "CapacityProvisioned": "1vCPU 3GB",
kubectl get pod -n checkout \
-l app.kubernetes.io/component=service -o json | \
jq -r '.items[0].metadata.annotations'


#
# STEP 5: Scaling the Workload
#

# Fargate follows a simplified horizontal scaling model.
# When using EC2 for compute, scaling Pods involves considering not only
# how the Pods will scale but also how the underlying compute will.
# Because Fargate abstracts away the underlying compute, operators only
# need to be concerned with scaling the Pods themselves.

# So far we've been using a single Pod replica. Scale out to 3
# Review manifest
cat ~/environment/eks-workshop/modules/fundamentals/fargate/scaling/deployment.yaml

# Apply and wait for the rollout to complete:
kubectl apply -k \
~/environment/eks-workshop/modules/fundamentals/fargate/scaling

kubectl rollout status -n checkout deployment/checkout --timeout=200s

# Check the number of Pods after rollout completes successfully
# Each of the three Pods is scheduled on a separate Fargate instance.
kubectl get pod -n checkout -l app.kubernetes.io/component=service

# Also check the number of nodes post scale-out
# Expect to see 3 Fargate nodes (fargate-ip-*)
kubectl get node
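```shell
# (Added sketch) Fargate nodes can also be filtered by their compute-type
# label. The grep below runs the same counting logic over sample node
# names (assumed format) so it works without a live cluster.
# Live-cluster equivalent:
# kubectl get node -l eks.amazonaws.com/compute-type=fargate --no-headers | wc -l
SAMPLE_NODES="fargate-ip-10-42-1-10.ec2.internal
fargate-ip-10-42-2-11.ec2.internal
fargate-ip-10-42-3-12.ec2.internal"
FARGATE_COUNT=$(printf '%s\n' "$SAMPLE_NODES" | grep -c '^fargate-')
echo "$FARGATE_COUNT"   # 3
```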

# Shrink back replicas to 1 again
kubectl -n checkout scale deployment/checkout --replicas=1

# Check the nodes, expect only one fargate-* node
kubectl get node

# END OF DEMONSTRATION

✴️ Conclusion and Next Steps

In this Running Workload Pods on Amazon EKS w/ Serverless Fargate session, we learned the features and capabilities of AWS Fargate and how to run Amazon EKS Workloads on AWS Fargate.

We saw in action how to set up a Fargate profile in the EKS Kubernetes cluster, how to schedule and run workload Pods on Fargate compute nodes, and how to scale Pod replicas out and in while the Fargate controller takes immediate action, automatically picking the right node to match the resource requests in each Pod’s specification.

With this chapter, we’re fully concluding the FUNDAMENTALS module of the EKS Workshop.

Next, we’re going to move on to the OBSERVABILITY module starting with Viewing Kubernetes Resources in EKS Console.

Stay tuned and see you there!

If you benefited from reading the post, please 👏 👏👏👏👏 a few times before parting, and help others by sharing it; I highly appreciate that!

👉 Please follow to stay in touch, track, and be the first to get notified of all future writings on AWS Cloud, Containers, Kubernetes, and Machine Learning. Also, check all my stories on The AWS Way publication.👈


John David Luther
The AWS Way

8 X AWS, CKA, CKAD, Terraform, TensorFlow Developer Certified. In pursuit of Cloud, Containers, ML/AI—Development, Architecture and Operations Excellence!