Deploying a Scalable and Secure Appointment System on AWS with EKS and Kubernetes
Introduction
With the increasing demand for digital solutions in healthcare, creating a scalable and reliable appointment scheduling system is essential. This project uses AWS EKS (Elastic Kubernetes Service) and a containerized microservices architecture to deploy a full-stack healthcare Appointment System. This guide will walk you through each setup stage, from initial planning to deployment and testing, ensuring the system is resilient, scalable, and secure.
Architecture Overview
The Healthcare Appointment System is designed with a microservices architecture using Kubernetes on AWS. Here’s a high-level look at each component and how they interact.
- Frontend: A React-based interface allowing users to book and manage appointments.
- Backend: A Node.js and Express API to handle appointment scheduling and interactions with MongoDB.
- Database: MongoDB for storing appointment data.
- AWS Services: EKS for container orchestration, ALB (Application Load Balancer) for traffic management, and CloudWatch for monitoring and logging.
1. Setting Up the Network Infrastructure
To ensure security and efficient traffic management, the Healthcare Appointment System is deployed in a VPC (Virtual Private Cloud) with both public and private subnets. This network architecture helps isolate resources and control access, ensuring that only necessary components are exposed to the internet.
VPC and Subnet Configuration:
VPC Creation:
- Create a VPC with two public subnets and two private subnets across different Availability Zones (AZs) for high availability and fault tolerance.
Subnet Distribution:
- Public Subnets: Used for resources that require internet access, such as the Application Load Balancer (ALB).
- Private Subnets: Used for internal resources that should remain isolated from the internet, such as the EKS worker nodes running the application backend, frontend, and MongoDB.
Internet Gateway (IGW):
- Attach an Internet Gateway to the VPC to enable instances in the public subnets to access the internet.
- Configure routing in the public subnets to use the IGW, allowing public access for components like the ALB.
NAT Gateway:
- Create a NAT Gateway in each public subnet to allow instances in the private subnets to access the internet for updates and other outbound traffic without exposing them directly to the internet.
- Set up routing in the private subnets to use the NAT Gateway, ensuring worker nodes have outbound internet access while remaining secure from direct internet exposure.
Subnet Configuration for Each Component
- Application Load Balancer (ALB): The ALB is deployed in the public subnets to receive traffic from the internet. It securely forwards requests to the backend services running in the private subnets through the ALB Ingress Controller.
- EKS Worker Nodes: The EKS worker nodes are deployed in private subnets to ensure that application containers (frontend, backend, and MongoDB) are not exposed directly to the internet, enhancing security.
- Only the backend service and other internal services communicate with MongoDB over a private network within the cluster.
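For reference, the same layout can be built with the AWS CLI. The following is a minimal sketch covering a single Availability Zone (the second AZ follows the same pattern); the CIDR ranges are assumptions, and note that eksctl (section 4) can also create a suitable VPC for you automatically:
# Create the VPC (example CIDR ranges; adjust to your environment)
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)

# One public and one private subnet in the first AZ
PUB_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
PRIV_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.101.0/24 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)

# Internet Gateway for the public subnets
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"

# NAT Gateway in the public subnet for outbound traffic from the private subnet
EIP_ALLOC=$(aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text)
NAT_ID=$(aws ec2 create-nat-gateway --subnet-id "$PUB_SUBNET" --allocation-id "$EIP_ALLOC" \
  --query 'NatGateway.NatGatewayId' --output text)

# Route tables: public subnet routes 0.0.0.0/0 to the IGW, private subnet to the NAT Gateway
PUB_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PUB_RT" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$PUB_RT" --subnet-id "$PUB_SUBNET"

PRIV_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PRIV_RT" --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"
aws ec2 associate-route-table --route-table-id "$PRIV_RT" --subnet-id "$PRIV_SUBNET"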
2. Prerequisites
To start, ensure that the following tools are installed and configured:
- AWS CLI: For managing AWS services.
- kubectl: For managing Kubernetes clusters.
- eksctl: For creating and configuring Amazon EKS clusters.
- Docker: For building and managing container images.
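Before moving on, you can quickly confirm that each tool is installed and that the AWS CLI is authenticated (assuming credentials have already been configured):
aws --version
aws sts get-caller-identity   # confirms the CLI can authenticate against your account
kubectl version --client
eksctl version
docker --version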
3. Setting Up Prerequisites in AWS CloudShell
This section installs eksctl and kubectl in AWS CloudShell, which provides a cloud-based terminal in AWS so nothing needs to be installed locally.
Install eksctl:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_$(uname -m).tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
Verify Installations:
eksctl version
kubectl version --client
With these tools installed, you’re ready to create and manage an EKS cluster directly from AWS CloudShell.
4. Creating the EKS Cluster
Using eksctl simplifies the EKS cluster creation process, but IAM policies must also be configured.
1. IAM Policies for EKS Cluster Creation: Attach the following managed policies to the IAM user or role you are using to create the cluster:
- AmazonEKSClusterPolicy: Grants permissions to create and manage EKS clusters.
- AmazonEKSWorkerNodePolicy: Provides permissions for worker nodes to interact with AWS resources.
- AmazonEC2ContainerRegistryReadOnly: Grants read-only access to pull images from Amazon ECR.
2. Create the EKS Cluster:
eksctl create cluster --name <clustername> --region us-east-1 --nodes 2 --managed
3. Verify Cluster Creation:
eksctl get cluster --name <clustername>
kubectl get nodes
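If you later open a new shell (or another user needs access), the kubeconfig for this cluster can be regenerated; a minimal sketch, assuming the cluster name and region used above:
aws eks update-kubeconfig --region us-east-1 --name <clustername>
kubectl get nodes   # should list the two managed worker nodes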
5. Setting Up ALB Ingress Controller IAM Policy
To allow the ALB Ingress Controller to provision and manage load balancers, create an IAM policy and attach it to the EKS service account.
1. Create a Policy for ALB Ingress Controller:
- Here’s an IAM policy that grants the necessary permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "acm:DescribeCertificate",
        "acm:ListCertificates",
        "acm:GetCertificate",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "ec2:DeleteSecurityGroup",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeTags",
        "ec2:DescribeVpcs",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:RevokeSecurityGroupIngress",
        "elasticloadbalancing:AddListenerCertificates",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateRule",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteRule",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DeregisterTargets",
        "elasticloadbalancing:DescribeListenerCertificates",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DescribeRules",
        "elasticloadbalancing:DescribeSSLPolicies",
        "elasticloadbalancing:DescribeTags",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetGroupAttributes",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetIpAddressType",
        "elasticloadbalancing:SetSecurityGroups",
        "elasticloadbalancing:SetSubnets",
        "elasticloadbalancing:SetWebACL",
        "waf-regional:GetWebACLForResource",
        "waf-regional:GetWebACL",
        "waf-regional:AssociateWebACL",
        "waf-regional:DisassociateWebACL",
        "tag:GetResources",
        "tag:TagResources",
        "waf:GetWebACL"
      ],
      "Resource": "*"
    }
  ]
}
2. Attach the IAM Policy to the Service Account:
- Create a Kubernetes service account and associate it with this IAM policy using eksctl, or by configuring the IAM role for the service account directly, as shown in the sketch below.
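A minimal sketch of the eksctl route, assuming the policy JSON above is saved as alb-ingress-iam-policy.json (the file and policy names are your choice):
# Associate an OIDC provider with the cluster (required for IAM roles for service accounts)
eksctl utils associate-iam-oidc-provider --cluster <clustername> --approve

# Create the IAM policy from the JSON document above
POLICY_ARN=$(aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://alb-ingress-iam-policy.json \
  --query 'Policy.Arn' --output text)

# Create the service account in kube-system and bind it to an IAM role with that policy
eksctl create iamserviceaccount \
  --cluster <clustername> \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn "$POLICY_ARN" \
  --approve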
6. Building and Pushing Docker Images
Each service (frontend and backend) is containerized using Docker.
1. Frontend Service:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Build and Push:
docker build -t <your-docker-username>/frontend:latest .
docker push <your-docker-username>/frontend:latest
2. Backend Service:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "index.js"]
docker build -t <your-docker-username>/backend:latest .
docker push <your-docker-username>/backend:latest
7. Deploying Services on Kubernetes
Each service is deployed as a separate deployment in Kubernetes. Here’s how:
7.1 Backend Deployment
- Backend Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: <your-docker-username>/backend:latest
          ports:
            - containerPort: 3001
          env:
            - name: MONGO_URI
              value: mongodb://mongodb-service:27017/appointments
Backend Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  ports:
    - port: 3001
  selector:
    app: backend
7.2 Frontend Deployment
- Frontend Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: <your-docker-username>/frontend:latest
          ports:
            - containerPort: 3000
          env:
            - name: REACT_APP_BACKEND_URL
              value: http://backend-service:3001
Frontend Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  ports:
    - port: 3000
  selector:
    app: frontend
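With the manifests saved (for example as backend-deployment.yaml, backend-service.yaml, frontend-deployment.yaml, and frontend-service.yaml — the file names are your choice), apply them and confirm both deployments come up with two replicas each:
kubectl apply -f backend-deployment.yaml -f backend-service.yaml
kubectl apply -f frontend-deployment.yaml -f frontend-service.yaml

kubectl get deployments
kubectl get pods -l app=backend
kubectl get pods -l app=frontend
kubectl get svc backend-service frontend-service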
8. Deploying MongoDB on Kubernetes
MongoDB stores the appointment data and is hosted within the EKS cluster, backed by a persistent volume. Here's how to configure it:
To ensure data persistence, MongoDB is deployed with persistent storage using Kubernetes Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).
Define Persistent Volume and Persistent Volume Claim
- Persistent Volume: This defines the storage available for MongoDB.
- Persistent Volume Claim: The claim requests storage from the Persistent Volume, ensuring MongoDB data is preserved across pod restarts.
# mongodb-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual   # matches the claim below so the PVC binds to this volume
  hostPath:
    path: "/data/mongodb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual   # prevents the default StorageClass from dynamically provisioning a new volume
  resources:
    requests:
      storage: 10Gi
Apply the PV and PVC:
kubectl apply -f mongodb-pv.yaml
Create MongoDB Deployment and Service
This deployment will create a MongoDB instance within the EKS cluster. The MongoDB service will expose the deployment to allow other services (like the backend) to connect.
# mongodb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4
          ports:
            - containerPort: 27017
          volumeMounts:
            - mountPath: /data/db
              name: mongodb-storage
      volumes:
        - name: mongodb-storage
          persistentVolumeClaim:
            claimName: mongodb-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
  clusterIP: None # Headless service: reachable only inside the cluster, DNS resolves directly to the MongoDB pod
Apply MongoDB Deployment and Service:
kubectl apply -f mongodb-deployment.yaml
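To confirm the claim is bound and the database is reachable, a quick check (the exec command assumes the mongo shell that ships in the mongo:4.4 image):
kubectl get pvc mongodb-pvc        # STATUS should be Bound
kubectl get pods -l app=mongodb    # the MongoDB pod should be Running

# Optional smoke test from inside the pod
kubectl exec deploy/mongodb -- mongo --eval 'db.runCommand({ ping: 1 })'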
9. Configuring the ALB Ingress Controller
Why Use ALB Ingress Controller?
The ALB Ingress Controller integrates directly with Kubernetes to manage the lifecycle of Application Load Balancers for HTTP and HTTPS traffic within the cluster. Using the Ingress Controller instead of directly configuring ALBs provides several advantages:
- Dynamic Management: Automatically manages ALB and route configurations based on changes to Ingress resources.
- Path-based Routing: Allows flexible routing within a single ALB, reducing the need for multiple load balancers.
- Kubernetes-native: Integrates directly into Kubernetes’ Ingress resources, providing a standardized approach to routing.
Steps to Set Up the ALB Ingress Controller:
1. Install the ALB Ingress Controller using Helm:
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<clustername>
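If you created the aws-load-balancer-controller service account with eksctl in section 5, you can point the chart at it instead of letting the chart create its own; a variant of the install command, assuming that service account name:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<clustername> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller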
2. Ingress YAML Configuration:
- Define an Ingress resource to route incoming traffic to the frontend-service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 3000
3. Testing the Ingress and Load Balancer:
- After applying the Ingress resource, note the DNS name of the ALB to access the frontend.
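Applying the manifest and reading the ALB address back from the Ingress can look like this (the file name is an assumption):
kubectl apply -f frontend-ingress.yaml

# The ADDRESS column shows the DNS name of the ALB once it has been provisioned
kubectl get ingress frontend-ingress

# Replace <alb-dns-name> with the address from the previous command
curl -I http://<alb-dns-name>/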
10. GitHub Actions IAM Policies for CI/CD
Set up GitHub Actions with IAM policies to allow it to authenticate with EKS for deploying updates.
1. Create an IAM Policy for GitHub Actions:
- This policy provides minimal permissions to update EKS deployments:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:ListNodegroups",
        "eks:UpdateNodegroupConfig",
        "eks:UpdateClusterConfig",
        "eks:DescribeNodegroup",
        "eks:ListFargateProfiles"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:ListImages"
      ],
      "Resource": "*"
    }
  ]
}
2. Generate IAM Access Keys and Add Them to GitHub Secrets:
- In the IAM console, create an IAM user with this policy attached, generate access keys for it, and add them as GitHub secrets (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) so the GitHub Actions workflow can authenticate with AWS and reach the EKS cluster.
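The console steps above can also be done with the AWS CLI; a sketch with hypothetical user and policy names, assuming the policy JSON above is saved as github-actions-policy.json:
aws iam create-user --user-name github-actions-deployer
aws iam put-user-policy \
  --user-name github-actions-deployer \
  --policy-name GitHubActionsEKSDeploy \
  --policy-document file://github-actions-policy.json
aws iam create-access-key --user-name github-actions-deployer

Note that for the kubectl steps in the workflow to succeed, this IAM user must also be granted access inside the cluster, for example by mapping it into the aws-auth ConfigMap with eksctl create iamidentitymapping.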
11. Setting Up CI/CD with GitHub Actions
To automate deployments, set up a GitHub Actions workflow to build, push, and deploy updates to the EKS cluster.
Sample GitHub Actions Workflow (.github/workflows/deploy.yml):
name: Deploy to EKS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin

      - name: Build and push Docker images
        run: |
          docker build -t ${{ secrets.DOCKER_USERNAME }}/frontend:latest ./frontend
          docker build -t ${{ secrets.DOCKER_USERNAME }}/backend:latest ./backend
          docker push ${{ secrets.DOCKER_USERNAME }}/frontend:latest
          docker push ${{ secrets.DOCKER_USERNAME }}/backend:latest

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Update kubeconfig
        run: aws eks update-kubeconfig --region us-east-1 --name <clustername>

      - name: Update Kubernetes Deployments
        run: |
          kubectl set image deployment/frontend frontend=${{ secrets.DOCKER_USERNAME }}/frontend:latest
          kubectl set image deployment/backend backend=${{ secrets.DOCKER_USERNAME }}/backend:latest
This workflow:
- Builds and pushes Docker images to Docker Hub on every push to the main branch.
- Uses kubectl set image to point the deployments at the newly pushed images. Because the tag is always latest, the pod spec may not actually change, so a rollout is not always triggered automatically; the check below shows how to force and watch a rolling restart.
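A quick way to verify (and, given the fixed latest tag, force) the rollout after the workflow runs, assuming kubeconfig access to the cluster:
kubectl rollout restart deployment/frontend deployment/backend   # forces pods to pull the new :latest images
kubectl rollout status deployment/frontend
kubectl rollout status deployment/backend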
Conclusion
By leveraging AWS EKS, Kubernetes, and a containerized architecture, this Healthcare Appointment System achieves a scalable, secure, and resilient deployment. Through the integration of microservices for the frontend and backend, a MongoDB database backed by persistent storage for reliable data, and the ALB Ingress Controller for efficient traffic management, the system is well-equipped to handle real-world healthcare demands.
The multi-AZ network layout supports high availability, AWS CloudWatch can provide monitoring and logging, and CI/CD via GitHub Actions simplifies deployment and updates. Moreover, scoped IAM policies and the public/private subnet design enhance security, safeguarding sensitive data and maintaining access controls.
With these tools and configurations, the Healthcare Appointment System provides a comprehensive, production-ready solution that can scale as demand grows, giving patients and healthcare providers a seamless appointment scheduling experience. This architecture can also serve as a strong foundation for building additional healthcare applications on AWS, ensuring both performance and security in a cloud-native environment.