The Guide to Terraform DevOps: Implementing CI/CD Pipelines for EKS workloads with GitHub Actions for Multi-Environments Approach

Joel Wembo
Published in Towards AWS
27 min read · May 5, 2024

DevOps methodology in a multi-environment setup for large organizations revolves around streamlining collaboration between development and operations teams, automating processes, and ensuring continuous delivery while managing multiple environments efficiently. Adopting IaC allows teams to define and manage infrastructure using code. Tools like Terraform, AWS CloudFormation, or Azure Resource Manager help provision and manage infrastructure across different environments consistently.

Figure 1: Architecture of CI/CD Pipelines for EKS workloads with GitHub Actions for Multi-Environments Approach

Preface

This manual delves into how Terraform fundamentally alters infrastructure management, endowing teams with scalable, code-centric methodologies. Uncover its indispensable role in expediting software deployment and cultivating agile, robust pipelines. Whether you’re an experienced practitioner or a novice, this guide provides invaluable insights into Terraform’s paradigm-shifting impact on contemporary DevOps methodologies.

To enhance readability, this article is divided into chapters and split into series. The second part, “The Guide to Terraform DevOps: Kubernetes Tools in infrastructure as code (IaC)” will be covered in a separate article to keep the reading time manageable and ensure focused content.

Acknowledgment

We extend our appreciation to the open-source communities, whose collaborative spirit and dedication have been instrumental in the advancement of tools like Terraform and the broader DevOps ecosystem.

Abstract

DevOps methodology in a multi-environment setup for large organizations focuses on automation, consistency, and collaboration to ensure smooth and efficient software delivery across the stages of the development lifecycle. In this technical handbook, we propose an experimental approach to a multi-project, multi-environment deployment: creating an AWS EKS cluster using Terraform and automating the deployment process with GitHub Actions.

“Productivity is never an accident. It is always the result of a commitment to excellence, intelligent planning, and focused effort.” Paul J. Meyer

Table of Contents

· Preface
· Acknowledgment
· Abstract
· Table of Contents
· Introduction
Anti-Thesis
· Prerequisites
· Chapter 1. Account and System Settings
· Chapter 2. Docker Containers and images
· Chapter 3. Terraform DevOps
· Chapter 4. Multi-environment CI/CD using GitHub Actions
GitOps Approach
· Chapter 5. Amazon EKS Cluster Administration
· Chapter 6. Expected results
· Discussion
· Conclusions
· About me
· References

Introduction

Continuous Integration (CI) and Continuous Deployment (CD) have indeed become fundamental practices in modern software development workflows.

Continuous Integration involves frequently integrating code changes into a shared repository and automatically verifying them through automated builds and tests. This ensures that any integration issues are addressed early on, leading to improved software quality and faster validation and release cycles.

Continuous Deployment (CD), closely related to Continuous Delivery, extends CI by automatically deploying all code changes to a testing or production environment after the build stage. On top of automated testing, automated release processes make it easier to deploy changes to customers rapidly and safely.

This automation reduces manual intervention, accelerates feedback loops with users, and allows teams to focus more on building features and less on the logistics of delivering them.

In today’s fast-paced and competitive market, CI/CD enables teams to deliver software updates quickly and reliably, keeping up with customer expectations and staying ahead of the competition. It’s all about agility, quality, and efficiency in software development.

Why a multi-environment approach?

A multi-environment deployment approach is a way of setting up your application to run on multiple environments, typically for development, testing, and production purposes. This lets you make changes and test them thoroughly without affecting the live version of your application.

Here are some key points about this approach:

  • Environments: You create separate environments that mirror your production environment as closely as possible but with different resource allocations (dev environments might have less powerful machines).
  • Deployment Pipeline: A deployment pipeline automates the process of moving code changes between environments. This pipeline typically includes building, testing, and deploying the code.
  • Benefits: This approach allows for faster development cycles, more reliable deployments, and a reduced risk of introducing bugs into production. GitHub Actions offers several additional benefits for software development and deployment.
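As a sketch, the stages described above map naturally onto chained GitHub Actions jobs. The job names, environments, and deploy commands below are illustrative placeholders, not part of this project's actual workflows:

```yaml
# Hypothetical promotion pipeline: each stage runs only after the previous one succeeds.
name: promote
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci && npm run build
  deploy-dev:
    needs: build          # dev deploys only after a successful build
    runs-on: ubuntu-latest
    environment: dev
    steps:
      - run: echo "deploy to dev"
  deploy-prod:
    needs: deploy-dev     # prod deploys only after dev succeeds
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: echo "deploy to production"
```

The `needs:` keys encode the pipeline's ordering, so a failure at any stage stops the promotion.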

Terraform DevOps: Defining Your Infrastructure Blueprint

Traditionally, infrastructure provisioning involved manual configuration, a time-consuming and error-prone process. Terraform breaks this mold by enabling infrastructure to be defined in code. Configuration files, written in HashiCorp Configuration Language (HCL), specify the resources needed (servers, networks, databases) and their configurations.

Figure 2: IBM buying HashiCorp

“IBM’s and HashiCorp’s combined portfolios will help clients manage growing application and infrastructure complexity and create a comprehensive hybrid cloud platform designed for the AI era,” said Arvind Krishna, IBM chairman and chief executive officer

Terraform code can be version-controlled alongside application code, enabling change tracking, rollbacks to previous versions, and collaboration among teams. Using Terraform, repetitive infrastructure deployment tasks can be automated, freeing up valuable developer and operations time. Terraform also shines with its cloud-agnostic nature: it works with various cloud providers (AWS, Azure, GCP) and even on-premises infrastructure. This flexibility empowers DevOps teams to manage infrastructure across diverse environments without getting bogged down in provider-specific tools.

This technical handbook offers a comprehensive guide on implementing CI/CD Pipelines for EKS workloads using GitHub Actions for Multi-Environments, alongside Terraform for provisioning and HashiCorp Vault for securing secrets, SonarCloud for code quality analysis, and Trivy for vulnerability scanning. GitHub Actions will manage the conventional DevOps workflows, establishing multiple predefined environments for the deployment pipeline, such as DEV, Staging, UAT, Pre-prod, and production. Throughout this article, we’ll illustrate our approach using a React-based e-commerce application as a case study.

A multi-stage review process is a structured and systematic approach often used in academic, scientific, and professional contexts to ensure thorough evaluation and quality control. This process involves multiple steps or stages, each with specific objectives and criteria for assessment.

Multi-stage example/template:

stages:
  - build
  - dev
  - test (uat)
  - staging
  - prod

build:
  - Actions: ...
dev:
  - Actions: ...
uat:
  - Actions: ...
prod:
  - Actions: ...

In GitHub Actions, approval and merge mechanisms are critical components for ensuring that code changes are reviewed, tested, and approved before they are merged into a protected branch like main or production. The combination of branch protection rules, pull request reviews, status checks, manual approval workflows, and environment protection rules in GitHub Actions ensures a robust and controlled CI/CD process. This setup helps maintain code quality, security, and operational stability by enforcing necessary checks and approvals before changes are merged and deployed to critical environments.
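The environment protection rules mentioned above are wired into a workflow through the job-level `environment:` key. The fragment below is a hypothetical job (the URL is a placeholder); if the "production" environment is configured with required reviewers, the job pauses until a reviewer approves:

```yaml
# Hypothetical job: binding it to the protected "production" environment means
# any required reviewers configured on that environment must approve first.
deploy:
  runs-on: ubuntu-latest
  environment:
    name: production
    url: https://example.com   # placeholder deployment URL
  steps:
    - run: echo "runs only after environment approval"
```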

Anti-Thesis

Some cloud resources can be provisioned easily with GitHub Actions alone, so why use Terraform? While GitHub Actions offers a convenient solution for CI/CD pipelines with a few lines of instructions, Terraform may be preferred for complex EKS deployments in multi-environment workflows: in case of failure, for example, Terraform can restore the infrastructure thanks to its state management capabilities.

Why choose Terraform? With numerous Kubernetes tools available in the DevOps marketplace, why opt for writing code in Terraform’s HCL (HashiCorp Configuration Language)?

Figure 3: What’s in it for me?

Prerequisites

Before we get into the good stuff, we need to make sure the required services are available on our local machine or dev server:

  1. Basic knowledge of React and Terraform.
  2. AWS Account
  3. GitHub Account
  4. AWS CLI installed and configured.
  5. Docker installed locally.
  6. Typescript installed
  7. NPM
  8. NodeJS
  9. Terraform
  10. Create React App
  11. Amazon EKS
  12. A domain name hosted by any domain name provider (Optional)
  13. Any Browser for testing

Chapter 1. Account and System Settings

Step 1: AWS Account Setup

Steps to Create Access Keys

  1. Go to the AWS management console, click on your Profile name, and then click on My Security Credentials. …
  2. Go to Access Keys and select Create New Access Key. …
  3. Click on Show Access Key and save/download the access key and secret access key.
Figure 4: Sign in to the AWS Management Console
Figure 5: Account profile and Security Credentials
Figure 6: AWS User Access Keys Create

Step 2: Install and Configure AWS CLI

We assume that you already have your Ubuntu or Linux machine running. If you want to learn how to set up your dev machine for AWS development, you can follow this tutorial

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

unzip awscliv2.zip

sudo ./aws/install
Figure 7: AWS Configure
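Running aws configure with the keys created above writes them to the AWS CLI's standard files. A sketch of the result, with placeholder values (never commit real keys):

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = wJalrXUtnFEMIexamplesecretkeyXXXXXXXXXXX

# ~/.aws/config
[default]
region = us-east-1
output = json
```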

Chapter 2. Docker Containers and images

Docker allows you to bundle your application’s dependencies within the container, isolating them from the host system. This helps to avoid conflicts with other applications or system packages and makes it easy to manage and update dependencies.

Figure 8: Using Docker

React is built on Node.js and npm. You'll need Node >= 14.0.0 and npm >= 5.6 on your machine. To create a project, run:

npx create-react-app prodxcloud-ecommerce-react-concept-1
cd prodxcloud-ecommerce-react-concept-1
npm start
Figure 9: react start command
Figure 10: Initial Project Setup with Dockerfile

Step 1: Add Dockerfile

A Dockerfile is a text file that contains a collection of instructions and commands that are automatically executed in sequence in the Docker environment to build a new Docker image. [Wikipedia]

# Use an official Node runtime as a parent image
FROM node:19-alpine AS build
# Set the working directory to /app
WORKDIR /app
# Copy package.json and package-lock.json to the container
COPY package*.json ./
# Install dependencies
RUN npm install --legacy-peer-deps
# Copy the rest of the application code to the container
COPY . .
# Build the React app
RUN npm run build

# Use an official Nginx runtime as a parent image
FROM nginx:1.21.0-alpine
# Copy ngnix.conf to the container
COPY ngnix.conf /etc/nginx/conf.d/default.conf
# Copy the React app build files to the container
COPY --from=build /app/build /usr/share/nginx/html
# Expose port 80 for Nginx
EXPOSE 80
# Start Nginx when the container starts
CMD ["nginx", "-g", "daemon off;"]

Step 2: Add Nginx.conf file

NGINX is open-source web server software used for reverse proxy, load balancing, and caching. It provides HTTPS server capabilities and is mainly designed for maximum performance and stability. It also functions as a proxy server for email communications protocols, such as IMAP, POP3, and SMTP. [papertrail.com]

By default, the file is named nginx.conf and, for NGINX Plus, is placed in the /etc/nginx directory. (For NGINX Open Source, the location depends on the package system used to install NGINX and the operating system; it is typically one of /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx.) However, for the scope of this project, since we are using Docker, we place this file in the project root directory.

server {
    listen 80;
    server_name 127.0.0.1;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
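The try_files directive is what makes client-side routing work: Nginx serves the requested file if it exists on disk, and otherwise falls back to index.html so the React router can handle the path. A rough Python sketch of that lookup order (the resolve helper and its arguments are illustrative, not part of Nginx):

```python
import os

def resolve(uri, root="/usr/share/nginx/html", exists=os.path.exists):
    """Mimic `try_files $uri $uri/ /index.html`: try each candidate in order,
    then fall back to /index.html for client-side routes."""
    for candidate in (uri, uri + "/"):
        if exists(root + candidate):
            return candidate
    return "/index.html"

# A static asset that exists is served directly; an app route falls back.
print(resolve("/app.js", exists=lambda p: p.endswith("app.js")))   # → /app.js
print(resolve("/products/42", exists=lambda p: False))             # → /index.html
```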

Step 3: Add docker-compose.yaml

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services.

version: '3'

services:
  webapplication:
    image: joelwembo/prodxcloud-store:latest
    container_name: prodxcloud-store
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - ./ngnix.conf:/etc/nginx/conf.d/default.conf
    networks:
      - web_network

networks:
  web_network:
    driver: bridge

Step 4: Create a .dockerignore file

A .dockerignore file is similar to a .gitignore file used with Git version control. In the context of Docker, it's a file placed in your project's root directory that instructs the Docker build process to exclude specific files and folders.

deployments
build
.github

Next, build the React application's image with Docker using the following commands:

docker build -t joelwembo/prodxcloud-store:latest .
docker run -p 80:80 --name react joelwembo/prodxcloud-store:latest

# or Simple run :
docker-compose up
Figure 11: react application under nginx using docker-compose

Next, you will need a Docker Hub (or any Docker registry) account and an access token to push and pull your images.

Docker Hub is a cloud-based service from Docker that allows developers to share containerized applications and automate workflows. It serves as a centralized resource for container image discovery, distribution, and collaboration.

To create a new access token for Docker Hub, follow these steps:

  1. Log in to Docker Hub: Visit the Docker Hub website (https://hub.docker.com/) and log in to your Docker Hub account if you're not already logged in.
  2. Access token settings: Once logged in, click on your profile icon at the top right corner of the page, then select "Account Settings" from the dropdown menu.
  3. Navigate to security settings: In the Account Settings page, navigate to the "Security" tab.
  4. Generate access token: Scroll down to the "Access Tokens" section and click the "New Access Token" button.

Docker Hub access token to manage container images

Chapter 3. Terraform DevOps

Terraform is an infrastructure as code (IaC) tool that plays a significant role in DevOps practices.

In this chapter, we are going to show, step by step, how to create Amazon EKS clusters for dev, QA, staging, and production, while also provisioning security groups, a VPC, subnets, and internet gateways. We will also apply our deployment manifests using Terraform.

Figure 12: Multi-Cluster Deployment for EKS Cluster

3.1 backend.tf

terraform {
  backend "s3" {
    bucket  = "prodxcloud-store"
    region  = "us-east-1"
    key     = "state/terraform.tfstate"
    encrypt = true
  }
}

3.2 variables.tf

# S3 bucket name
variable "bucket-name" {
  default = "prodxcloud.net"
}

# Domain name that you have registered
variable "domain-name" {
  default = "prodxcloud.net" // Modify as per your domain name
}

variable "cluster-name" {
  default = "prodxcloud-cluster"
}

3.3 provider.tf

provider "aws" {
  region = "us-east-1"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

3.4 vpc.tf

VPC stands for Virtual Private Cloud. It’s a concept used in cloud computing that provides a logically isolated network segment within a public cloud.

resource "aws_vpc" "my_vpc" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "my_vpc"
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name = "igw"
  }
}

output "vpc_id" {
  value       = aws_vpc.my_vpc.id
  description = "VPC id."
  sensitive   = false
}

3.5 subnets.tf

resource "aws_subnet" "private-us-east-1a" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.0.0/19"
  availability_zone = "us-east-1a"

  tags = {
    "Name"                                     = "private-us-east-1a"
    "kubernetes.io/role/internal-elb"          = "1"
    "kubernetes.io/cluster/prodxcloud-cluster" = "owned"
  }
}

resource "aws_subnet" "private-us-east-1b" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.32.0/19"
  availability_zone = "us-east-1b"

  tags = {
    "Name"                                     = "private-us-east-1b"
    "kubernetes.io/role/internal-elb"          = "1"
    "kubernetes.io/cluster/prodxcloud-cluster" = "owned"
  }
}

resource "aws_subnet" "public-us-east-1a" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.64.0/19"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    "Name"                                     = "public-us-east-1a"
    "kubernetes.io/role/elb"                   = "1"
    "kubernetes.io/cluster/prodxcloud-cluster" = "owned"
  }
}

resource "aws_subnet" "public-us-east-1b" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.96.0/19"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    "Name"                                     = "public-us-east-1b"
    "kubernetes.io/role/elb"                   = "1"
    "kubernetes.io/cluster/prodxcloud-cluster" = "owned"
  }
}
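The four /19 CIDR blocks above carve the VPC's 10.0.0.0/16 range into equal, non-overlapping slices of 8,192 addresses each. A quick sanity check with Python's standard ipaddress module confirms the block boundaries:

```python
import ipaddress

# Split the VPC CIDR into /19 subnets and take the first four,
# matching the private and public subnets declared in subnets.tf.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [str(s) for s in list(vpc.subnets(new_prefix=19))[:4]]
print(subnets)
# → ['10.0.0.0/19', '10.0.32.0/19', '10.0.64.0/19', '10.0.96.0/19']
```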

3.6 policy.tf

Policies are essential for implementing least-privileged access within your EKS cluster. By defining policies in Terraform, you can control what actions users and roles can perform on your cluster resources (e.g., creating pods and managing nodes). This helps prevent unauthorized access and malicious activity.

data "tls_certificate" "eks" {
  # The cluster resource uses count, so reference one instance by index
  url = aws_eks_cluster.prodxcloud-cluster[0].identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.prodxcloud-cluster[0].identity[0].oidc[0].issuer
}

data "aws_iam_policy_document" "validate_oidc_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:aws-validate"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "validate_oidc" {
  assume_role_policy = data.aws_iam_policy_document.validate_oidc_assume_role_policy.json
  name               = "test-oidc"
}

resource "aws_iam_policy" "validate-policy" {
  name = "validate-policy"

  policy = jsonencode({
    Statement = [{
      Action = [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ]
      Effect   = "Allow"
      Resource = "arn:aws:s3:::*"
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "validate_attach" {
  role       = aws_iam_role.validate_oidc.name
  policy_arn = aws_iam_policy.validate-policy.arn
}

output "test_policy_arn" {
  value = aws_iam_role.validate_oidc.arn
}
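The StringEquals condition above is built by stripping the "https://" scheme from the cluster's OIDC issuer URL and appending ":sub", which is then matched against a Kubernetes service account identity of the form system:serviceaccount:NAMESPACE:NAME. The issuer URL below is a made-up example (real clusters get a random ID from EKS):

```python
# Reproduce Terraform's replace(...) expression for the OIDC condition key.
issuer = "https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"  # hypothetical
condition_key = issuer.replace("https://", "") + ":sub"
print(condition_key)
# → oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:sub

# The value it must equal: namespace "default", service account "aws-validate".
expected_subject = "system:serviceaccount:default:aws-validate"
```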

3.7 eks-cluster.tf

We started by provisioning the VPC and subnets because an EKS cluster operates within a VPC, a logically isolated network segment within the AWS cloud. This provides a layer of security for your cluster resources.

EKS Cluster Control Plane by Joel Wembo
resource "aws_iam_role" "prodxcloud-cluster" {
  # A single IAM role is shared by all cluster instances
  name = var.cluster-name

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "prodxcloud-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.prodxcloud-cluster.name
}

resource "aws_eks_cluster" "prodxcloud-cluster" {
  # single cluster alternative:
  # name     = "prodxcloud-cluster"
  # role_arn = aws_iam_role.prodxcloud-cluster.arn
  # version  = "1.29"

  # multi cluster: one cluster per index
  count    = 3
  name     = "${var.cluster-name}_${count.index}"
  role_arn = aws_iam_role.prodxcloud-cluster.arn

  # assign subnets
  vpc_config {
    subnet_ids = [
      aws_subnet.private-us-east-1a.id,
      aws_subnet.private-us-east-1b.id,
      aws_subnet.public-us-east-1a.id,
      aws_subnet.public-us-east-1b.id
    ]
  }

  depends_on = [aws_iam_role_policy_attachment.prodxcloud-cluster-AmazonEKSClusterPolicy]
}
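With count = 3, Terraform creates one aws_eks_cluster per index, and the name expression "${var.cluster-name}_${count.index}" produces a distinct name per cluster. The equivalent expansion in Python:

```python
# Mirror Terraform's count.index interpolation for the three clusters.
cluster_name = "prodxcloud-cluster"
names = [f"{cluster_name}_{i}" for i in range(3)]
print(names)
# → ['prodxcloud-cluster_0', 'prodxcloud-cluster_1', 'prodxcloud-cluster_2']
```

In practice these names would be mapped to environments (for example dev, staging, and production), one cluster per environment.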

3.8 deploy.tf

# Define security group
resource "aws_security_group" "eks_sg" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic and all outbound traffic"
  vpc_id      = aws_vpc.my_vpc.id

  # Define your security group rules here
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_tls"
  }
}

# Define ALB for the React application
resource "aws_lb" "my_load_balancer" {
  name               = "prodxcloud-store"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.eks_sg.id]
  # Use subnets from different Availability Zones
  subnets            = [aws_subnet.public-us-east-1a.id, aws_subnet.public-us-east-1b.id]
}

# Define ALB target group
resource "aws_lb_target_group" "my_target_group" {
  name     = "my-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.my_vpc.id
}

# Define ALB listener
resource "aws_lb_listener" "my_listener" {
  load_balancer_arn = aws_lb.my_load_balancer.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.my_target_group.arn
  }
}

# Define Kubernetes deployment and service for the React application
# (Optional) since we are going to automate this process using GitHub Actions
resource "kubernetes_deployment" "prodxcloud_store_deployment" {
  metadata {
    name = "prodxcloud-store"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "prodxcloud-store"
      }
    }

    template {
      metadata {
        labels = {
          app = "prodxcloud-store"
        }
      }

      spec {
        container {
          image = "joelwembo/prodxcloud-store:latest" # Update with your registry and image tag
          name  = "prodxcloud-store"

          port {
            container_port = 80 # Nginx serves the React build on port 80
          }
        }
      }
    }
  }
}

# (Optional) since we are going to automate this process using GitHub Actions
resource "kubernetes_service" "prodxcloud_store_service" {
  metadata {
    name = "prodxcloud-store"
  }

  spec {
    selector = {
      app = "prodxcloud-store"
    }

    port {
      port        = 80
      target_port = 80 # Nginx serves the React build on port 80
    }

    type = "LoadBalancer"
  }
}

# Output the DNS name of the load balancer
output "load_balancer_dns" {
  value = aws_lb.my_load_balancer.dns_name
}

Compile your first solution!

terraform init
terraform plan
terraform apply
terraform init with aws s3 as backend

Check your AWS management console

Chapter 4. Multi-environment CI/CD using GitHub Actions

GitHub Actions is a feature provided by GitHub that allows you to automate various tasks within your software development workflows directly from your GitHub repository.

Figure 13: GitHub Actions extensions in VS Code

First, let's set up our GitHub Actions and Environment settings in our GitHub repository

Figure 14: Current Progress of the Demo Project

Adding environments in GitHub Actions involves configuring your workflows to deploy and test your application in different environments. Here’s a step-by-step guide on how to add dev, QA, UAT, and prod environments in GitHub Actions:

Environments are used to describe a general deployment target like production, staging, or development. When a GitHub Actions workflow deploys to an environment, the environment is displayed on the main page of the repository. For more information about viewing deployments to environments, see "Viewing deployment history."

To configure an environment in a personal account repository, you must be the repository owner. To configure an environment in an organization repository, you must have admin access.

Notes:

  • The creation of an environment in a private repository is available to organizations with GitHub Team and users with GitHub Pro.
  • Some features for environments have no or limited availability for private repositories. If you are unable to access a feature described in the instructions below, please see the documentation linked in the related step for availability information.
  1. On GitHub.com, navigate to the main page of the repository.
  2. Under your repository name, click Settings. If you cannot see the “Settings” tab, select the dropdown menu, then click Settings.
  3. In the left sidebar, click Environments.

Figure 15: Environments Setup

  4. Click New environment.
  5. Enter a name for the environment, then click Configure environment. Environment names are not case-sensitive, may not exceed 255 characters, and must be unique within the repository.
  6. Under Environment secrets, click Add secret.
  7. Enter the secret name.
  8. Enter the secret value.
  9. Click Add secret.

Step 1
Multi-stage environment setup with GitHub Actions

In the .github/workflows directory, create files with the .yml or .yaml extension. This demo will use deploy-staging.yaml, deploy-production.yaml and deploy-qa.yaml to demonstrate our use cases, however, we will focus more on production workflows.

Note: Some of the resources that we’ll provision using GitHub Actions were already created with Terraform; at this stage we are only adding CI/CD features.

Option 1

deploy-production.yaml

# GitHub workflow: Terraform pipeline to provision and deploy to AWS EKS
name: PRODUCTION --> Terraform CI/CD pipeline To AWS EKS Cluster - Enterprise

concurrency:
  group: production
  cancel-in-progress: true

on:
  push:
    branches: [master, production/*]
  pull_request:
    types: [review_requested]
    branches: [master, production/*]

  workflow_dispatch:
    inputs:
      git-ref:
        description: Git Ref (Optional)
        default: master
        required: false

      account:
        description: production
        default: production
        required: true

      account_prod:
        description: production
        default: production
        required: true

      environment:
        description: production (final, latest)
        default: production
        required: false

env:
  # verbosity setting for Terraform logs
  TF_LOG: INFO
  APP_NAME: prodxcloud-store
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  DOCKER_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
  DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
  KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  AWS_DEFAULT_REGION: "us-east-1"
  CONFIG_DIRECTORY: "./deployment/terraform/terraform-provision-ekscluster-use-case-1"
  # S3 bucket for the Terraform state
  # BUCKET_TF_STATE: ${{ secrets.BUCKET_TF_STATE }}
  # TF_CLOUD_ORGANIZATION: "prodxcloud"
  # TF_API_TOKEN: ${{ secrets.TF_API_TOKEN }}
  # TF_WORKSPACE: "prodxcloud"

jobs:
  CodeScan-SonarCloud:
    name: SonarCloud Scanning
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Code scanning process
        run: pwd

  build:
    name: build
    runs-on: ubuntu-latest
    needs: [CodeScan-SonarCloud]
    strategy:
      matrix:
        node-version: [18]
    steps:
      - uses: actions/checkout@v3
      # caching mechanism
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: |
            **/node_modules
          key: ${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      # npm install all packages
      - name: Install dependencies
        run: npm install --legacy-peer-deps

      - name: Build reactjs application
        run: npm run build --if-present

      - name: List all packages & dependencies
        run: npm list
      # Docker Hub build and push
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKER_USERNAME }}
          password: ${{ env.DOCKER_PASSWORD }}
      - run: docker build -t joelwembo/prodxcloud-store:prod .
      - run: docker push joelwembo/prodxcloud-store:prod
      - run: docker version

  trivyScanDockerImage:
    name: trivy scan (security scanner)
    runs-on: ubuntu-latest
    if: ${{ always() }}
    needs: [CodeScan-SonarCloud, build]
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Build an image from Dockerfile
        run: |
          docker version
          # docker build -t docker.io/joelwembo/prodxcloud-store:prod .
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'docker.io/joelwembo/prodxcloud-store:prod'
          format: 'table'
          # exit-code: '1'
          ignore-unfixed: true
          vuln-type: 'os,library'
          severity: 'CRITICAL,HIGH'
      - name: Push Docker image for production
        run: docker push joelwembo/prodxcloud-store:prod

  qa:
    name: QA Deploy to Staging
    environment:
      name: staging
      url: https://staging.production.net/
    runs-on: ubuntu-latest
    needs: [build, trivyScanDockerImage]
    steps:
      - name: Running tests
        uses: actions/checkout@v3
      - run: echo "running Tests"
      - run: npm test

  deploy:
    name: Deploy to EKS
    environment:
      name: production
      url: https://production.prodxcloud.net/
    runs-on: ubuntu-latest
    needs: qa
    strategy:
      matrix:
        environment: [production]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Terraform Setup
        id: terraform_apply
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.1.7
          # cli_config_credentials_hostname: 'terraform.example.com'
          # cli_config_credentials_token: ${{ env.TF_API_TOKEN }}

      - name: Terraform Init
        run: terraform init
        working-directory: ./deployment/terraform/terraform-provision-ekscluster-use-case-1

      - name: Terraform Plan
        id: terraform_plan
        run: terraform plan -out=tfplan -var="environment=${{ matrix.environment }}"
        working-directory: ./deployment/terraform/terraform-provision-ekscluster-use-case-1

      - name: Terraform Apply
        if: matrix.environment == 'qa' || matrix.environment == 'production'
        run: terraform apply -auto-approve -input=false -lock=false
        working-directory: ./deployment/terraform/terraform-provision-ekscluster-use-case-1

      - name: Kubernetes Setup
        uses: azure/k8s-set-context@v1
        with:
          kubeconfig: ${{ env.KUBE_CONFIG_DATA }}

      - name: Update kubeconfig
        run: aws eks --region us-east-1 update-kubeconfig --name prodxcloud-cluster
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

      - name: Get Kubernetes pods
        run: kubectl get pods --namespace default

      - name: Get Kubernetes nodes
        run: kubectl get nodes

      - name: Get Kubernetes services
        run: kubectl get services --namespace default

      - name: Get load balancer DNS
        run: kubectl get services prodxcloud-store

    outputs:
      APP_NAME: prodxcloud-store
      AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
      DOCKER_USERNAME: ${{ env.DOCKER_USERNAME }}
      DOCKER_PASSWORD: ${{ env.DOCKER_PASSWORD }}
      KUBE_CONFIG_DATA: ${{ env.KUBE_CONFIG_DATA }}
      AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

# - name: Kubernetes Deployment
#   run: kubectl apply -f k8s/deployment.yaml
Figure 16: Github Actions Output

Option 2 (Anti-thesis demonstration): We can provision the QA and staging clusters before the production phase using GitHub Actions without Terraform, and keep Terraform for production only.

While GitHub Actions offer a convenient solution for CI/CD pipelines with Terraform, alternative tools with a broader feature set or dedicated focus on infrastructure automation might be better suited for complex EKS deployments in multi-environment workflows, potentially leading to improved scalability, security, or cost efficiency.

deploy-staging.yaml


name: STAGING --> Terraform CI/CD pipeline To AWS EKS Cluster - Enterprise

on:
  push:
    branches: ['master', 'main', 'staging']
  pull_request:
    branches: ['master', 'main', 'dev']

  workflow_dispatch:
    inputs:
      git-ref:
        description: Git Ref (Optional)
        default: master
        required: false

      account:
        description: staging
        default: staging
        required: true

      account_staging:
        description: staging
        default: staging
        required: true

      environment:
        description: staging
        default: staging
        required: false

permissions:
  contents: write

env:
  # verbosity setting for Terraform log
  TF_LOG: INFO
  APP_NAME: "prodxcloud-store-staging"
  DOCKER_IMAGE: "joelwembo/prodxcloud-store:staging"
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  DOCKER_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
  DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
  KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  CLUSTER_NAME: "prodxcloud-store-staging"
  CLUSTER_SERVICE: "prodxcloud-cluster-staging-service"
  AWS_DEFAULT_REGION: "us-east-1"
  CONFIG_DIRECTORY: "./deployment/terraform/terraform-provision-ekscluster-use-case-1"

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm run build --if-present
      - run: npm list
      - run: npm test

  push-docker-image:
    name: Build Docker image and push to repositories for staging
    # run only when the code compiles and tests pass
    runs-on: ubuntu-latest
    needs: ['build']
    # steps to perform in job
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKER_USERNAME }}
          password: ${{ env.DOCKER_PASSWORD }}
      - run: docker build -t ${{ env.DOCKER_IMAGE }} .
      - run: docker push ${{ env.DOCKER_IMAGE }}
      - run: docker version

  provision-aws-eks-cluster-staging:
    runs-on: ubuntu-latest
    needs: ['build', 'push-docker-image']
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Create EKS Cluster
        # Note: --without-nodegroup is an eksctl flag; the plain
        # `aws eks create-cluster` command requires --role-arn and
        # --resources-vpc-config instead, so eksctl is used here.
        run: eksctl create cluster --region ${{ env.AWS_DEFAULT_REGION }} --name ${{ env.CLUSTER_NAME }} --version 1.29 --without-nodegroup
        # run: eksctl create cluster --name ${{ env.CLUSTER_NAME }} --nodegroup-name ng-test --node-type t3.medium --nodes 2
        # run: aws eks list-clusters
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

  deploy-to-staging:
    runs-on: ubuntu-latest
    needs: ['build', 'push-docker-image', 'provision-aws-eks-cluster-staging']
    environment:
      name: staging
      url: https://dev-staging.prodxcloud.net/
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Pull the Docker image
        run: docker pull joelwembo/prodxcloud-store:staging

      - name: Update kubeconfig
        run: aws eks --region us-east-1 update-kubeconfig --name ${{ env.CLUSTER_NAME }}

      - name: Deploy to EKS
        run: kubectl apply -f ./k8s/deployment-staging.yaml
        working-directory: ./deployment/terraform/terraform-provision-ekscluster-use-case-1

      - name: Get cluster nodes
        # run: kubectl expose deployment ${{ env.CLUSTER_NAME }} --type=LoadBalancer --port=80 --name=${{ env.CLUSTER_SERVICE }}
        run: kubectl get nodes

      - name: Get all running pods
        run: kubectl get pods

      - name: Load Balancer DNS
        run: kubectl get services ${{ env.CLUSTER_SERVICE }}
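
Cluster creation can take 10–15 minutes, and depending on the provisioning command used, control may return before the cluster is ACTIVE. A hedged sketch of a wait step that could sit at the start of the deploy job to guard against that race (the step name and placement are suggestions, not part of the original workflow):

```yaml
      - name: Wait for EKS cluster to become ACTIVE
        run: aws eks wait cluster-active --region ${{ env.AWS_DEFAULT_REGION }} --name ${{ env.CLUSTER_NAME }}
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
```

`aws eks wait cluster-active` polls until the cluster reports ACTIVE, so the subsequent `update-kubeconfig` and `kubectl apply` steps run against a reachable API server.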

deploy-qa.yaml

  deploy-to-qa:
    runs-on: ubuntu-latest
    environment:
      name: qa
      url: https://dev-qa.prodxcloud.net/
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Pull the Docker image
        run: docker pull joelwembo/prodxcloud-store:qa

      - name: Update kubeconfig
        run: aws eks --region us-east-1 update-kubeconfig --name ${{ env.CLUSTER_NAME }}

      - name: Deploy to EKS
        run: kubectl apply -f ./k8s/deployment-qa.yaml
        working-directory: ./deployment/terraform/terraform-provision-ekscluster-use-case-1

      - name: Expose the deployment
        run: kubectl expose deployment ${{ env.CLUSTER_NAME }} --type=LoadBalancer --port=80 --name=${{ env.CLUSTER_SERVICE }}
        # run: kubectl get nodes

      - name: Get all running pods
        run: kubectl get pods

      - name: Load Balancer DNS
        run: kubectl get services ${{ env.CLUSTER_SERVICE }}
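
The staging and QA jobs differ mainly in which image tag they pull (joelwembo/prodxcloud-store:staging versus :qa). A minimal shell sketch of that branch-to-tag convention; the mapping of main/master to latest is an assumption for illustration, not taken from the workflows above:

```shell
# Map a Git branch to the Docker image tag used by these pipelines.
# staging -> :staging, main/master -> :latest (assumed), anything else -> :qa.
branch="staging"
case "$branch" in
  main|master) tag="latest" ;;
  staging)     tag="staging" ;;
  *)           tag="qa" ;;
esac
image="joelwembo/prodxcloud-store:${tag}"
echo "$image"
```

Centralizing the mapping in one place (a script or a reusable workflow) keeps the per-environment workflows from drifting apart.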

GitOps Approach

By following these steps, you can set up GitHub Actions to deploy and test your application across multiple clusters, enabling a robust multi-cluster deployment pipeline. However, this practice introduces its own limitations, such as managing custom network configurations and Terraform state handling.
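
The fan-out over multiple clusters described above can be expressed as a job matrix, so the same deploy steps run once per cluster. A hedged sketch (the cluster and manifest names reuse this article's conventions, but the exact pairings are assumptions):

```yaml
jobs:
  deploy-multi-cluster:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - cluster: prodxcloud-store-staging
            manifest: ./k8s/deployment-staging.yaml
          - cluster: prodxcloud-store-qa
            manifest: ./k8s/deployment-qa.yaml
    steps:
      - uses: actions/checkout@v3
      - name: Update kubeconfig for ${{ matrix.cluster }}
        run: aws eks --region us-east-1 update-kubeconfig --name ${{ matrix.cluster }}
      - name: Deploy to ${{ matrix.cluster }}
        run: kubectl apply -f ${{ matrix.manifest }}
```

This keeps one copy of the deployment logic while letting each matrix entry target its own cluster and manifest.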

Figure 17: GitOps CI/CD pipeline with GitHub Actions and Kubernetes by Brad Morg
Figure 18: Github Actions Workflows
Figure 19: Github Actions Workflows
Figure 20: Staging deployment
Figure 21: QA deployment

Chapter 6. Expected results

Multi-stage Docker Images

Figure 22: SonarQube Code Analysis Using SonarCloud.io Integration with GitHub Actions
Figure 23: SonarCloud Code Inspection
Figure 24: Trivy Scan before deployment
Figure 25: Docker Hub hosted images
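
Figure 24 shows a Trivy scan gating the deployment. One way to wire such a scan into a workflow is the official Trivy action; the sketch below is a suggestion, and the severities chosen to fail the build are an assumption:

```yaml
      - name: Trivy vulnerability scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: joelwembo/prodxcloud-store:staging
          format: table
          exit-code: '1'            # fail the job when matching findings exist
          severity: CRITICAL,HIGH
          ignore-unfixed: true
```

Placing this step between the image build and the push ensures vulnerable images never reach the registry.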

AWS EKS Clusters

  • Production
Figure 26: EKS load balancer
  • Staging and QA
QA and Staging Deployment

You can also use kubectl in your VSCode to interact with your Kubernetes clusters.

Testing your load balancer DNS using the Google Chrome browser

Chapter 7. Amazon EKS Cluster Administration

For this tutorial, we are going to use Lens for our EKS cluster administration, since we have multiple clusters to manage. Lens is an integrated development environment (IDE) that allows users to connect to and manage multiple Kubernetes clusters on Mac, Windows, and Linux platforms.

Here are a few advantages of using Lens Desktop:

  • Confidence that your clusters are properly set up and configured.
  • Increased visibility, real-time statistics, log streams, and hands-on troubleshooting capabilities.
  • The ability to work with your clusters quickly and easily, radically improving productivity and the speed of business.

[ Source: https://docs.k8slens.dev/ ]

Add a Cluster

  1. In Catalog > Clusters, point the mouse cursor at the Add Cluster button.
  2. Select one of the following options:

  • Sync kubeconfig file(s): the path to particular kubeconfig files.
  • Sync kubeconfig folder(s): the path to folders that contain kubeconfig files.

Or, you can obtain the file using the following instructions:

  • Display the kubeconfig using the cat command.
  • Update your kubeconfig path on your dev machine.

Now you can navigate between the different clusters to check the status of your pods, services, and networks.

The same information can be obtained manually using Lens.

Pods Monitoring and Troubleshooting

With the pods running, we can view their logs using the Lens IDE.

Lens IDE also provides a shortcut called “Pod Shell” to interact with, restart, and debug your pods.

Discussion

Why did we combine both Terraform and GitHub Actions to provision the same resources?

While GitHub Actions offers a convenient solution for CI/CD pipelines with Terraform, alternative tools with a broader feature set or a dedicated focus on infrastructure automation might be better suited for complex EKS deployments in multi-environment workflows, potentially improving scalability, security, or cost efficiency.

In other words, the primary use of GitHub Actions is CI/CD. We provisioned our EKS clusters using Terraform only, but we also tried GitHub Actions Marketplace extensions/plugins to obtain the same results. Those plugins lack advanced features, some may pose security issues, and some require many commands during the deployment workflow. Finally, Terraform is more extensible, offering new integrations along the way.

Conclusions

To ensure a focused and manageable reading experience, the second part, “The Guide to Terraform DevOps: Kubernetes Tools in Infrastructure as Code (IaC)”, will be covered in a separate article and will focus on helping developers learn about and get an overview of the suite of DevOps tools available for Kubernetes and its ecosystem.

In summary, this technical paper has outlined a comprehensive approach to implementing CI/CD pipelines for Amazon EKS workloads, utilizing GitHub Actions alongside Terraform, HashiCorp Vault, SonarCloud, and Trivy for multi-environment projects. By integrating these tools, developers can automate the deployment process while ensuring security, code quality, and vulnerability scanning at every stage.

React-based e-commerce application in an AWS EKS Cluster

The case study of a React-based e-commerce application provides a practical demonstration of these concepts in action, offering insights into how such pipelines can be effectively utilized in real-world scenarios. With this guide, teams can streamline their multi-tenant development and deployment workflows, fostering a culture of continuous improvement and delivery in their software development lifecycle.

“Winners take time to relish their work, knowing that scaling the mountain is what makes the view from the top so exhilarating.” (Denis Waitley)

You can also find the source codes on GitHub here.

Thank you for reading!! 🙌🏻 Don’t forget to subscribe and give it a CLAP 👏. If you found this article useful, contact me or feel free to sponsor me to produce more public content. See you in the next article. 🤘

About me

I am Joel Wembo, an AWS certified cloud solutions architect, back-end developer, and AWS Community Builder. I’m based in the Philippines 🇵🇭 and currently working at prodxcloud as a DevOps & Cloud Architect. I bring a powerful combination of expertise in cloud architecture, DevOps practices, and a deep understanding of high-availability (HA) principles, which I leverage to create robust, scalable cloud applications using open-source tools for efficient enterprise deployments.

I’m looking to collaborate on AWS CDK, AWS SAM, DevOps CI/CD, Serverless Framework, CloudFormation, Terraform, Kubernetes, TypeScript, GitHub Actions, PostgreSQL, and Django.

For more information about the author ( Joel O. Wembo ) visit:


I am a Cloud Solutions Architect at prodxcloud. Expert in AWS, AWS CDK, EKS, Serverless Computing and Terraform. https://www.linkedin.com/in/joelotepawembo