Part 15 — HumanGov Application — Kubernetes: Deployment of HumanGov SaaS Application on AWS Elastic Kubernetes Service (EKS) Using a Route 53 Domain, ALB Ingress, and SSL Endpoint Powered by AWS Certificate Manager

Cansu Tekin
18 min read · Mar 27, 2024


The HumanGov Application is a Human Resources Management Cloud SaaS application for the Department of Education across all 50 states in the US. Whenever a state hires a new employee, the registration of the new employee is done through the application. Employee information is stored in AWS DynamoDB, and employment documents are stored in S3 buckets. Our primary responsibility as DevOps Engineers is to modernize and enhance the HumanGov application. We first focused on provisioning the infrastructure with Terraform (Part 10) in the AWS Cloud environment, then we configured and deployed applications on those resources with Ansible (Part 13), and we containerized the HumanGov application using Docker (Part 14) to make it more efficient, portable, and scalable across different computing environments. In this part of the project series, we will focus on Elastic Kubernetes Service (EKS), Route 53, the Application Load Balancer (ALB), and AWS Certificate Manager for SSL endpoint encryption. We will keep improving the application with DevOps tools in the following sections.

PART 1: AWS Elastic Kubernetes Service (EKS) Cluster Setup

PART 2: Installing Ingress Controller and Application Load Balancer (ALB)

PART 3: Deploying the HumanGov Application

PART 4: Kubernetes Ingress and Application Load Balancer (ALB) Setup with SSL

Amazon Elastic Kubernetes Service (Amazon EKS)

Kubernetes is a container orchestration tool that simplifies containerized applications' deployment, scaling, and management, allowing organizations to focus on building and delivering software rather than managing infrastructure.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that allows you to run Kubernetes on AWS. It provides features like automated deployment, scaling, load balancing, and self-healing, making building and maintaining complex applications in a cloud-native environment easier.

A Kubernetes cluster is a collection of physical or virtual machines (called nodes) grouped to run containerized applications orchestrated by Kubernetes. A Kubernetes cluster consists of a control plane and one or more worker nodes. The control plane serves as the brain of the cluster and is responsible for maintaining its desired state, while the worker nodes are EC2 instances that run the pods (containers) for applications.

The control plane manages the cluster’s resources, schedules workloads, and maintains desired configurations. Worker nodes are responsible for executing the workloads deployed to the cluster, and each worker node can host multiple pods. AWS manages the control plane; you manage the worker nodes.

A node group is a set of worker nodes with similar configurations, such as instance type, AMI, and Kubernetes labels. It allows you to manage and scale a group of nodes collectively, making it easier to handle the underlying infrastructure of the Kubernetes cluster.
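For example, resizing a node group is a single eksctl command. A minimal sketch, assuming the cluster and node group names created later in this walkthrough:

# Scale the standard-workers node group of humangov-cluster to 2 nodes (raising the max as well)
eksctl scale nodegroup --cluster humangov-cluster --name standard-workers --nodes 2 --nodes-max 2 --region us-east-1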

PART 1: AWS Elastic Kubernetes Service (EKS) Cluster Setup

Step 1: Make sure you have the AWS CLI v2 installed

The AWS Command Line Interface (AWS CLI) is a unified tool provided by Amazon Web Services (AWS) that allows users to interact with various AWS services from the command line. It provides a set of commands for creating and managing AWS resources such as EC2 instances, S3 buckets, and RDS databases; configuring security and access permissions; deploying and managing applications on AWS services like Elastic Beanstalk, ECS, and Lambda; and automating tasks and workflows using scripts and shell commands.

AWS CLI version 2 (AWS CLI v2) is recommended for working with Amazon Elastic Kubernetes Service.

# Check the version
aws --version

# Install AWS CLI v2 (or upgrade an existing installation)
sudo yum remove awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update

# Check the version again
aws --version

Step 2: Install the eksctl, kubectl, and helm CLI tools in the Cloud9 environment

  • eksctl is a command-line tool for creating, managing, and operating Kubernetes clusters specifically on Amazon EKS. It provisions the necessary AWS infrastructure, such as EC2 instances, networking components, and IAM roles, and it handles adding, removing, and modifying node groups.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo cp /tmp/eksctl /usr/bin
eksctl version

eksctl internally uses kubectl for interacting with Kubernetes clusters.

  • kubectl (Kubernetes control) is a general-purpose CLI tool for interacting with any Kubernetes cluster regardless of where it is hosted, while eksctl is specifically for managing Kubernetes clusters on AWS EKS. kubectl allows deploying applications, managing resources, scaling, accessing logs, and debugging.
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
kubectl version --short --client
  • Helm is a package manager for Kubernetes that simplifies deploying, managing, and scaling applications on Kubernetes clusters. The Helm CLI tool is used to interact with Helm.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version

Step 3: Create an IAM User with the AdministratorAccess policy

Create an Access Key for the eks-user and save it in the export.sh file. We will use it to authenticate the AWS CLI.

cd ~/environment
touch export.sh
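As a minimal sketch, export.sh simply exports the access key as environment variables. The values below are placeholders, not real credentials, and the region variable is an optional convenience:

# export.sh - placeholder values; paste in the eks-user access key you created
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1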

Step 4: Disable Managed Credentials on Cloud9 and authenticate AWS CLI

Settings > AWS Settings

Execute the commands in the export.sh file. Any environment variables or functions defined in the file will be available in the terminal.

source export.sh
  • Alternatively, configure the AWS CLI interactively by running aws configure and entering the eks-user access key.
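Either way, a quick check confirms the CLI is now using the eks-user credentials:

# Should return the eks-user ARN if the exported credentials are in effect
aws sts get-caller-identity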

Step 5: Provision the HumanGov Application infrastructure with Terraform

Go back to the previous project for the Terraform files that provision the S3 buckets and DynamoDB tables if you do not have them.

Update the /human-gov-infrastructure/terraform/variables.tf file to provision infrastructure for two states.

cd human-gov-infrastructure/terraform/
terraform show
terraform plan
terraform apply
Outputs:

state_infrastructure_outputs = {
  "california" = {
    "dynamodb_table" = "humangov-california-dynamodb"
    "s3_bucket" = "humangov-california-s3-l32k"
  }
  "texas" = {
    "dynamodb_table" = "humangov-texas-dynamodb"
    "s3_bucket" = "humangov-texas-s3-6nhm"
  }
}

Step 6: Create an EKS cluster and connect to it using kubectl

Create an EKS cluster named humangov-cluster in the us-east-1 region with a node group named standard-workers

eksctl create cluster --name humangov-cluster --region us-east-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 1
Compute -> Node groups

We need to retrieve the access credentials for the humangov-cluster and merge them into the kubeconfig file (/home/ec2-user/.kube/config) on our machine so that we can connect to the humangov-cluster using kubectl.

aws eks update-kubeconfig --name humangov-cluster
# Verify Connectivity
# Lists all the services running in the Kubernetes cluster
kubectl get svc
# Lists all the nodes (or instances) that are part of the Kubernetes cluster
kubectl get nodes

PART 2: Installing Ingress Controller and Application Load Balancer (ALB)

We need to set up an Application Load Balancer (ALB) and an Ingress Controller to manage incoming traffic by directing the traffic to the correct services within the EKS cluster.

Step 1: Create an IAM policy

We need a policy that defines the permissions required by the AWS Load Balancer Controller to manage resources in your AWS account when deployed in an Amazon EKS cluster. The AWS Load Balancer Controller is a Kubernetes controller that manages Elastic Load Balancers (ELBs) for services running in a Kubernetes cluster.

cd ~/environment
# Download the policy
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
# Create an IAM policy
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
# List all the policies in your account
aws iam list-policies
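If the list is long, a JMESPath query can pick out just the policy created above (the policy name matches the create-policy command):

# Print only the ARN of the AWSLoadBalancerControllerIAMPolicy policy
aws iam list-policies \
  --query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy'].Arn" \
  --output text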

Step 2: Associate the IAM OIDC identity provider for your Amazon EKS cluster with eksctl

When you associate the IAM OIDC identity provider with your EKS cluster, it establishes a trust relationship between the Kubernetes cluster and AWS IAM. This allows you to create Kubernetes service accounts and assign them IAM roles to access AWS resources such as S3 buckets, DynamoDB tables, or other AWS services. In our case, the AWS Load Balancer Controller needs permission to manage the Application Load Balancer (ALB). You can skip this step if you do not use IAM roles.

eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster humangov-cluster --approve
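To confirm the provider was created, you can list the IAM OIDC providers in the account; the cluster's provider should appear in the output:

# List IAM OIDC identity providers; the EKS cluster's provider should be listed here
aws iam list-open-id-connect-providers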

Step 3: Create a Kubernetes service account and associate an IAM role

A Kubernetes service account is a Kubernetes-native concept for identity and access management. When a pod is created, it is assigned a service account to authenticate and authorize requests made by containers within the cluster to access Kubernetes resources.

By associating an IAM role with a Kubernetes service account, you enable the pods running within the Kubernetes cluster to assume the permissions defined by the IAM role when making requests to AWS services outside of the cluster. You’re essentially creating a mapping between a Kubernetes service account and an IAM role.

Create a Kubernetes service account associated with the AmazonEKSLoadBalancerControllerRole IAM role, and attach the AWSLoadBalancerControllerIAMPolicy to this role to grant the AWS Load Balancer Controller the permissions it needs to create and manage the ALB.

Replace the account ID (626127091134) in the command below with your AWS account ID.

# Create a Kubernetes service account within your Amazon EKS cluster
# --name              : name of the Kubernetes service account
# --role-name         : IAM role that will be associated with the service account
# --attach-policy-arn : policy that will be attached to the IAM role
eksctl create iamserviceaccount \
  --cluster=humangov-cluster \
  --namespace=kube-system \
  --region=us-east-1 \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::626127091134:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
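To confirm the mapping was created, both eksctl and kubectl can show the service account and its role annotation:

# List the IAM service accounts eksctl manages for this cluster
eksctl get iamserviceaccount --cluster humangov-cluster --region us-east-1

# Show the Kubernetes service account and its eks.amazonaws.com/role-arn annotation
kubectl describe serviceaccount aws-load-balancer-controller -n kube-system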

Step 4: Install the AWS Load Balancer Controller using Helm V3

Helm is a package manager for Kubernetes. The ALB controller needs to be installed in the Kubernetes cluster. After this point, we will use kubectl to interact with our Kubernetes cluster and its attached services.

# Add the AWS EKS Helm chart repository to your Helm configuration
helm repo add eks https://aws.github.io/eks-charts
# Ensure that you have access to the most recent versions of the charts available in the repository
helm repo update eks

# Install the AWS Load Balancer Controller
# serviceAccount.create=false because we already created the Kubernetes service account
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=humangov-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# Verify the controller installation
kubectl get deployment -n kube-system aws-load-balancer-controller
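If the deployment does not become ready, the controller logs usually explain why (for example, missing IAM permissions):

# Tail the AWS Load Balancer Controller logs
kubectl logs -n kube-system deployment/aws-load-balancer-controller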

PART 3: Deploying the HumanGov Application

Step 1: Create a Role and Service Account to provide pods access to S3 and DynamoDB tables

We will create another IAM service account to allow the pods to access the S3 buckets and DynamoDB tables, since our application will be deployed as pods.

eksctl create iamserviceaccount \
  --cluster=humangov-cluster \
  --name=humangov-pod-execution-role \
  --role-name=HumanGovPodExecutionRole \
  --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3FullAccess \
  --attach-policy-arn=arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess \
  --region us-east-1 \
  --approve

Step 2: Containerize the HumanGov application

Create a public ECR repository named humangov-app, and build and push the Docker Image to the ECR repository.

Go to View push commands:

cd human-gov-application/src/
  1. Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/r9c2d3d1

2. Build your Docker image using the following command. For information on writing a Dockerfile from scratch, see the Docker documentation. You can skip this step if your image is already built:

docker build -t humangov-app .

3. After the build completes, tag your image so you can push the image to this repository:

docker tag humangov-app:latest public.ecr.aws/r9c2d3d1/humangov-app:latest

4. Run the following command to push this image to your newly created AWS repository:

docker push public.ecr.aws/r9c2d3d1/humangov-app:latest

Step 3: Deploy the application for each state

Create the application deployment YAML files humangov-california.yaml and humangov-texas.yaml under the human-gov-application/src directory.

We will manage resources to deploy our application consisting of a Python app and an NGINX reverse proxy in a Kubernetes cluster. The NGINX reverse proxy forwards requests to the Python app service as configured in the deployment YAML file below.

  1. Deployment (humangov-python-app-california): Specify the desired deployment state for the container named “humangov-python-app-california” using the image from the ECR repository, humangov-app, we created before. Provide environment variables AWS_BUCKET, AWS_DYNAMODB_TABLE, AWS_REGION, and US_STATE to the container for configuration.
  2. Service (humangov-python-app-service-california): Define a Kubernetes Service to expose the Pods with the label app: humangov-python-app-california internally within the cluster on port 8000.
  3. Deployment (humangov-nginx-reverse-proxy-california): Deploy NGINX reverse proxy that runs a container named “humangov-nginx-reverse-proxy-california” using the NGINX alpine image. Mount a ConfigMap named “humangov-nginx-config-california” to configure NGINX.
  4. Service (humangov-nginx-service-california): Define the “humangov-nginx-service-california” Kubernetes Service to expose the NGINX reverse proxy on port 80.
  5. ConfigMap (humangov-nginx-config-california): Define a ConfigMap containing NGINX configuration (nginx.conf) and proxy parameters (proxy_params) to forward incoming HTTP requests to the backend service running at humangov-python-app-service-california:8000.
cd human-gov-application/src/
touch humangov-california.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: humangov-python-app-california
spec: # Describes the desired state of the Deployment
  replicas: 1 # Desired number of replicas (instances) of the application to run
  selector: # Labels used to select the Pods that this Deployment manages
    matchLabels:
      app: humangov-python-app-california
  template: # Defines the Pod template used by the Deployment
    metadata:
      labels:
        app: humangov-python-app-california
    spec:
      serviceAccountName: humangov-pod-execution-role # Kubernetes service account used to run the Pods
      containers: # Defines the containers that should run in the Pod
        - name: humangov-python-app-california
          image: public.ecr.aws/r9c2d3d1/humangov-app:latest
          env:
            - name: AWS_BUCKET
              value: "humangov-california-s3-tim7"
            - name: AWS_DYNAMODB_TABLE
              value: "humangov-california-dynamodb"
            - name: AWS_REGION
              value: "us-east-1"
            - name: US_STATE
              value: "california"

---

apiVersion: v1
kind: Service
metadata:
  name: humangov-python-app-service-california
spec:
  type: ClusterIP # ClusterIP means the Service is only accessible within the cluster
  selector: # Labels used to select the Pods that the Service will route traffic to
    app: humangov-python-app-california
  ports: # Ports that the Service will listen on
    - protocol: TCP
      port: 8000 # Port on which the Service listens within the cluster
      targetPort: 8000 # Port the Service forwards traffic to on the Pods

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: humangov-nginx-reverse-proxy-california
spec:
  replicas: 1
  selector:
    matchLabels:
      app: humangov-nginx-reverse-proxy-california
  template:
    metadata:
      labels:
        app: humangov-nginx-reverse-proxy-california
    spec:
      containers:
        - name: humangov-nginx-reverse-proxy-california
          image: nginx:alpine
          ports:
            - containerPort: 80 # Port on which the NGINX container is exposed
          volumeMounts: # Mounts volumes into the container's filesystem
            - name: humangov-nginx-config-california-vol
              mountPath: /etc/nginx/
      volumes:
        - name: humangov-nginx-config-california-vol
          configMap:
            name: humangov-nginx-config-california

---

apiVersion: v1
kind: Service
metadata:
  name: humangov-nginx-service-california
spec:
  selector:
    app: humangov-nginx-reverse-proxy-california
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

---

apiVersion: v1
kind: ConfigMap # A set of key-value pairs used to configure Kubernetes resources
metadata:
  name: humangov-nginx-config-california
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }

    http {
      server {
        listen 80;

        # Forward incoming requests to the backend service at http://humangov-python-app-service-california:8000
        location / {
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass http://humangov-python-app-service-california:8000; # App container
        }
      }
    }

  proxy_params: |
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Deploy the HumanGov application for California:


kubectl apply -f humangov-california.yaml
# Verify the deployment
kubectl get pods
kubectl get services
kubectl get deployment

Deploy the HumanGov application for Texas:

Copy the California deployment as a starting point, then update the AWS_BUCKET, AWS_DYNAMODB_TABLE, and US_STATE environment variable values and the resource names to their Texas-specific equivalents in the humangov-texas.yaml file.

cp humangov-california.yaml humangov-texas.yaml
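A quick shortcut for the renaming is shown below; note that the S3 bucket suffix is random per state, so that one value still has to be corrected by hand:

# Replace every occurrence of "california" with "texas" in the copied manifest
sed -i 's/california/texas/g' humangov-texas.yaml

# The AWS_BUCKET value still needs the Texas bucket's random suffix from the Terraform output,
# so edit that one value manually before applying the file.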

kubectl apply -f humangov-texas.yaml
kubectl get pods
kubectl get services
kubectl get deployment

Four pods should be running (two per state), and each service should have its own ClusterIP.

Here are the running pods and namespace:

PART 4: Kubernetes Ingress and Application Load Balancer (ALB) Setup with SSL

To access our application, we need a Kubernetes Ingress to route HTTP and HTTPS traffic from outside the cluster to the services inside it. A Kubernetes Ingress manages external access to services in the cluster, enables load balancing, and provides SSL termination.

Step 1: Create a new domain on Route 53

Amazon Route 53 is a highly scalable and reliable Domain Name System (DNS) service that routes end users to internet applications by translating human-readable domain names into IP addresses. We will purchase a new human-readable domain and use it to reach our application instead of connecting with a public IP address.

Go to Route 53 -> Registered domains -> Register domains -> pick whatever you want as a domain name

I will use the .click extension for this project because it is the cheapest option. You can pick a different extension like .com.

Registration takes some time. You can check its status under the Requests tab.

You also get emails about the registration and approval.

After the domain registration is approved, it will be listed under Route 53 -> Registered domains.

You can test the record:
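From the command line, a quick DNS lookup should return the Route 53 name servers for the registered domain (assuming dig is available, as it is on Amazon Linux with bind-utils):

# Check that the domain's NS records resolve (use the domain you registered above)
dig +short NS humangov-ct.click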

Step 2: Create a Certificate for the ALB

Go to AWS Certificate Manager and request a new public certificate for the humangov-ct.click domain. It will provide SSL encryption for the humangov-ct.click domain for secure communication between clients and the cluster.

Go inside the certificate and click on Create records in Route 53.

AWS Certificate Manager (ACM) provides a CNAME record to add to your Route 53 hosted zone so the certificate can be used with AWS services like the ALB. ACM uses this record to validate that you own the domain.

Select the certificate you created and create records.

Go to Route 53 -> Hosted zones -> your domain name -> Records

You will see the certificate you created in AWS Certificate Manager is added to Records.

After the certificate is issued, we can associate it with the ALB, which needs it for SSL termination. Next, we configure the ALB to serve the issued certificate for HTTPS traffic and redirect HTTP to HTTPS.
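The validation status can also be checked from the CLI; the placeholder ARN below stands in for the one ACM issued to you:

# Find the certificate ARN for the domain
aws acm list-certificates --region us-east-1

# Check its status (should report "ISSUED" once the DNS validation record has propagated)
aws acm describe-certificate --region us-east-1 \
  --certificate-arn <your-certificate-arn> \
  --query "Certificate.Status"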

Step 3: Deploy the Kubernetes Ingress Rules to configure how traffic should be routed to your application using an ALB

Create an Ingress YAML file humangov-ingress-all.yaml that includes rules for each state.

touch humangov-ingress-all.yaml

Replace the ARN of the certificate with yours.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: humangov-python-app-ingress
  annotations: # Annotations specific to the AWS ALB Ingress Controller
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: frontend
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:626127091134:certificate/f161f419-49e9-4f62-a88f-e359aa72b4a0
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443' # HTTP traffic is redirected to HTTPS
  labels:
    app: humangov-python-app-ingress
spec:
  ingressClassName: alb
  rules: # Route traffic based on hostnames
    - host: california.humangov-ct.click # We will create this subdomain later
      http:
        paths:
          - path: /
            pathType: Prefix # Prefix matches requests whose URL paths start with the specified prefix
            backend: # Backend the requests should be forwarded to
              service: # Kubernetes Service to forward requests to
                name: humangov-nginx-service-california
                port:
                  number: 80 # The port number on the backend service
    - host: texas.humangov-ct.click # We will create this subdomain later
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: humangov-nginx-service-texas
                port:
                  number: 80

When we apply these Ingress Rules, the ALB Ingress Controller will automatically provision an ALB in our AWS account based on specified Ingress rules. We already installed the ALB Ingress Controller and configured it in our cluster for this purpose.

# Apply the ingress
kubectl apply -f humangov-ingress-all.yaml
# Verify the Ingress Controller
kubectl get ingress

Check in the Load Balancers under AWS EC2:
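The same check can be done from the CLI; since the ALB name is generated from the group.name annotation, listing all load balancers is the simplest query:

# List load balancer names and DNS names in the account/region
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName}" \
  --output table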

Step 4: Create alias records (Record Type: A) on Route 53 and point the subdomains to the ALB domain

Head to Hosted Zones -> humangov-ct.click -> Create records for both California and Texas

Choose the ALB we created as an Alias Target to which the alias should route the traffic.

DNS records are pointing to where our application is hosted:

dualstack.k8s-frontend-ded5adda2e-1159621946.us-east-1.elb.amazonaws.com

Step 5: Check logs to see if everything is OK

kubectl get pods
kubectl logs <pod-name>

Step 6: Test the application on california.humangov-ct.click and texas.humangov-ct.click

california.humangov-ct.click
texas.humangov-ct.click
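Beyond the browser, curl can confirm both the HTTPS endpoint and the HTTP-to-HTTPS redirect (allow a few minutes for DNS to propagate):

# Should return 200 from the HTTPS endpoint
curl -sI https://california.humangov-ct.click | head -n 1

# Should return a redirect (301) to HTTPS
curl -sI http://texas.humangov-ct.click | head -n 1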

Add a new employee for testing:

Check S3 Bucket:

Check the DynamoDB table:
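The same checks can be run from the CLI; the bucket name carries the random suffix from your Terraform output, so adjust it accordingly:

# List the uploaded employment documents (replace the suffix with your bucket's)
aws s3 ls s3://humangov-california-s3-l32k/

# Scan the state's DynamoDB table for the test employee record
aws dynamodb scan --table-name humangov-california-dynamodb --region us-east-1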

Step 7: Push HumanGov application source code changes to AWS CodeCommit

cd ~/environment/human-gov-application/src
git status
git add .
git commit -m "Code changes for Elastic Kubernetes Service (EKS)"
git status
git push

Step 8: Destroy the resources

cd ~/environment/human-gov-application/src
kubectl delete -f humangov-ingress-all.yaml
kubectl delete -f humangov-california.yaml
kubectl delete -f humangov-texas.yaml
eksctl delete cluster --name humangov-cluster --region us-east-1

I am going to keep the infrastructure created with Terraform for the next project. If you will not be using the resources for a while and do not want to be charged, destroy them as well.

cd ~/environment/human-gov-infrastructure/terraform
terraform destroy

CONGRATULATIONS!!!
