GitOps: CI/CD automation workflow using GitHub Actions, ArgoCD, and Helm charts deployed on K8s cluster

Seifeddine Rajhi
10 min read · Nov 29, 2022


Introduction

GitOps, as originally proposed by Weaveworks, uses git as a “single source of truth” for CI/CD pipelines, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.

The difference between pull and push is an important concept to understand if you want to implement a GitOps pattern in your delivery process.

In a classic push CI/CD pipeline, changes are pushed from the upstream Git repository to the downstream environments.

The pull approach reverses the process: the downstream environment regularly polls the upstream repository for changes and adapts accordingly.

So, in a nutshell, Argo CD is a Kubernetes controller that aims to synchronize a set of Kubernetes resources in a cluster with the content of a Git repository.

The main idea is that when you update your Git repository, Argo CD eventually catches the difference and synchronizes the current state of the cluster with the target state defined in the repository.

🎯Goals & Objectives:

In this article, we will implement a full CI/CD workflow with GitHub Actions and ArgoCD to deploy our application to Kubernetes.

There is no better way to learn than by doing it yourself 😊

HAPPY LEARNING 💻

🚀 Prerequisites

Before we start, I assume you have the following: an AWS account, the AWS CLI, Terraform, kubectl, and a GitHub account.

❄ The workflow landscape

End to end workflow landscape

So our workflow consists of the below steps:

  • GitHub Actions builds a Docker image of the application.
  • The image is pushed to a private ECR (Elastic Container Registry) repository.
  • The version of the new image is updated in the Helm chart present in the Git repo.

Once something changes in the Helm chart, Argo CD detects it and starts rolling out and deploying the new Helm chart in the Kubernetes cluster.

One key step in enabling GitOps is keeping CI separate from CD. Once the CI run is done, the artifact is pushed to the repository, and Argo CD takes care of the CD.

Installation

We will start our demo by deploying an EKS cluster using Terraform. Then, we will configure kubectl using Terraform output and verify that our cluster is ready to use.

The installation is quite straightforward, and we will start with the EKS cluster.

Provisioning the Kubernetes cluster on AWS (EKS)

The Terraform configuration is organized across multiple files:

  1. providers.tf sets the Terraform version to at least 1.2. It also sets versions for the providers used by the configuration.
  2. variables.tf contains a region variable that controls where to create the EKS cluster.
  3. vpc.tf provisions a VPC, subnets, and availability zones using the AWS VPC Module. The module creates a new VPC.
  4. node-groups.tf provisions the node groups the EKS cluster will use.
  5. eks-cluster.tf uses the AWS EKS Module to provision an EKS cluster and other required resources, including Auto Scaling Groups, Security Groups, IAM Roles, and IAM Policies (a rough sketch of this file follows the list).
  6. outputs.tf defines the output values for this configuration.
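
A rough sketch of what eks-cluster.tf could look like is shown below; the module version, cluster name, and node group sizing are placeholders I chose for illustration, not necessarily the exact values used in this demo:

# eks-cluster.tf - illustrative sketch only
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"                      # assumed module version

  cluster_name    = "gitops-demo-eks"      # placeholder cluster name
  cluster_version = "1.24"

  vpc_id     = module.vpc.vpc_id           # wired to the VPC created in vpc.tf
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}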

Initialize Terraform workspace

To initialize the Terraform configuration, we run the command below:

terraform init

Provisioning the EKS cluster

Run the below command to create your cluster and other necessary resources:

terraform apply -auto-approve

Configure kubectl

Now that we have provisioned the EKS cluster, we need to configure kubectl.

First, we open the outputs.tf file to review the output values. We will use the region and cluster_name outputs to configure kubectl with the command below:

aws eks --region $(terraform output -raw region) update-kubeconfig \
--name $(terraform output -raw cluster_name)
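
Before moving on, a quick sanity check that kubectl can reach the new cluster (the node names and count will of course differ in your environment):

kubectl cluster-info
kubectl get nodes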

Installation and configuration of ArgoCD on the K8s cluster

We will install ArgoCD using the official YAML:

kubectl create namespace argocd
kubectl config set-context --current --namespace=argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This will create a new namespace, argocd, where Argo CD services and application resources will live.

Now that Argo CD has been deployed, we need to configure argocd-server and then log in:

Expose argocd-server

By default, argocd-server is not publicly exposed. For the purpose of this demo, we will use a LoadBalancer service to make it accessible:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`

Accessing the Web UI

The initial admin password is autogenerated and stored in the argocd-initial-admin-secret secret:

export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`

Once we open the web page, it asks for a username and password. The default username is admin; for the password, we read the argocd-initial-admin-secret secret and decode its value, as done above.
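
If you also want to use the argocd CLI, you can log in with the values exported above (the --insecure flag is only there because we did not set up TLS for this demo):

argocd login $ARGOCD_SERVER --username admin --password $ARGO_PWD --insecure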

After you sign in, the initial page looks like the one below.

Initial ArgoCD page

Connect ArgoCD with the Git repository

To achieve this step, we will create a GitHub repository and put the application source code in it.

For demo purposes, I am using the Nginx web server, so we put a simple HTML page and a Dockerfile in the GitHub repo.

index.html

<html>
<body>
<center>
<h1>GitOps: Kubernetes CI/CD with GitHub Actions and Argo CD</h1> <br>
<br>
<img src='https://rogerwelin.github.io/assets/images/argocd.png' width=600 height=480>
</center>
</body>
</html>

Dockerfile:

FROM nginx:alpine
COPY index.html /usr/share/nginx/html

🦑 Implementing the ArgoCD app

Now, in this step, we will create an app in Argo CD, in which we basically define where our application's Helm chart is located, where to deploy it, and a few other small configurations.

Since the application manifests are located in a private repository, repository credentials have to be configured. Argo CD supports both HTTPS and SSH Git credentials.

In our demo, we will use an SSH private key credential to authenticate.

Private repositories that require an SSH private key have a URL that typically starts with git@ or ssh:// rather than https://.

  1. Navigate to Settings/Repositories.
  2. Click the Connect Repo using SSH button, enter the URL, and paste the SSH private key.
  3. Click Connect to test the connection and have the repository added.
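
If you prefer the CLI, the same connection can be added with argocd repo add; the repository URL and key path below are placeholders for your own values:

argocd repo add git@github.com:<your-user>/<your-repo>.git \
  --ssh-private-key-path ~/.ssh/id_ed25519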

Now that our Git repo is linked to Argo CD, we can create the app either by declaring everything in a YAML file or from the web UI.

We will demonstrate both ways.

The YAML way:

We will create an application.yaml file to store the configuration details about the connection between Argo CD and GitHub. It is a regular Kubernetes manifest (an Argo CD Application custom resource), just with Argo CD-specific options.

In the application.yaml manifest, we fill in the required values under source and destination; you can always refer to the official documentation for the full set of options.
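
The exact manifest from the demo repository is not reproduced here, so below is a minimal sketch of such an Application; the application name, repository URL, chart path, and target namespace are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # placeholder application name
  namespace: argocd            # Argo CD watches Application resources in its own namespace
spec:
  project: default
  source:
    repoURL: git@github.com:<your-user>/<your-repo>.git   # the private repo we connected above
    targetRevision: HEAD
    path: helm-chart                                      # path to the Helm chart inside the repo (assumed)
  destination:
    server: https://kubernetes.default.svc                # deploy to the same cluster Argo CD runs in
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes made directly in the cluster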

We apply it, and from then on Argo CD will track the changes from Git and deploy them:

$ kubectl apply -f application.yaml

Once the application manifest is applied, you can check from the web UI that the application is available.

You can click on the application to check the application mapping, and click on each tile to view the summary, logs, events, parameters, and YAML config.

The web UI way:

Alternatively, we can create the app directly from the web UI.

Creating App in ArgoCD

Notice in the image above that we have defined the GitHub repository and the path to my application's Helm chart. When the app is created successfully, it looks like this:

ArgoCD Application

Gluing everything with GitHub Actions

In this step, we will set up GitHub Actions in the repository to build the Docker image from the Dockerfile present in the repo and then push the image to a private AWS ECR repository. This is actually the CI part of our GitOps workflow.

So, in the GitHub repository, click Actions and select set up a workflow yourself; this creates a YAML file at the path .github/workflows/main.yml. This is the only file that we need to create and modify in this part.

Below is the file we need for this workflow:

.github/workflows/main.yml
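
The exact file from the original repository is not reproduced here, so here is a minimal sketch of what it could look like. The role ARN, image name, and chart layout (helm-chart/values.yaml with a tag: key) are assumptions, and I assume contents: write plus the default checkout credentials so the job can commit the chart bump back; the original may handle that differently, e.g. with a personal access token.

name: ci

on:
  push:
    branches: [ "main" ]

permissions:
  id-token: write    # needed to request the OIDC token for AWS
  contents: write    # assumed here so the job can push the chart update back

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::XXXXXXXXXX:role/github-oidc
          role-duration-seconds: 900
          aws-region: eu-west-1

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build and push the image
        env:
          REGISTRY: ${{ steps.ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t "$REGISTRY/my-app:$IMAGE_TAG" .
          docker push "$REGISTRY/my-app:$IMAGE_TAG"

      - name: Bump the image tag in the Helm chart
        env:
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # helm-chart/values.yaml and the 'tag:' key are assumptions about the chart layout
          sed -i "s/^  tag:.*/  tag: $IMAGE_TAG/" helm-chart/values.yaml
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git commit -am "ci: bump image tag to $IMAGE_TAG"
          git push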

In the file above, you can see that I have used an OpenID Connect (OIDC) identity provider for GitHub Actions, which lets us configure workflows that request temporary, on-demand credentials from any service provider that supports OIDC authentication.

Using OpenID Connect removes the need to store long-lived keys in GitHub Actions, saving us the headache of rotating keys and other tedious tasks.

And now, I will walk you through Terraform configuration for setting up the authentication and an Actions workflow that uses it.

Setup

  1. Configure AWS Identity and Access Management (IAM) in our AWS account to trust what the GitHub Actions identity provider says.
  2. Make an IAM role available to GitHub Actions workflows that match specific properties.
  3. Add an Actions workflow step that requests and uses credentials from AWS.

Here is a schema representing what we are going to accomplish

Automating with terraform:

It is a really straightforward Terraform script that creates an OpenID Connect identity provider with its required role and permissions.

Luckily, the Terraform registry contains a complete AWS provider to use.

We will need to make sure we have a provider "aws" {} block configured to actually interact with the AWS account in question.

Another resource block configures the OpenID Connect identity provider itself.

Then we need to grant it a role to assume within AWS.
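
A minimal sketch of those two blocks might look like the following; the thumbprint, role name, and repository filter are assumptions to adapt to your own setup:

# OpenID Connect identity provider for GitHub Actions - illustrative sketch only
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]  # GitHub's thumbprint at the time of writing
}

# Role that the workflow assumes, restricted to a single repository
resource "aws_iam_role" "github_oidc" {
  name = "github-oidc"   # matches the role-to-assume ARN used in the workflow

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:<your-user>/<your-repo>:*"
        }
      }
    }]
  })
}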

Now that we have a working OpenID Connect provider within AWS, we need to add the configuration to GitHub for use in our GitHub Actions. To do this, we simply add another step to the desired YAML workflow.

The relevant blocks look like so:

permissions:
  id-token: write
  contents: read # This is required for actions/checkout@v2

Adding the permissions to the job allows the action that gets the credentials from AWS to store them for use in further steps. The permission that is specifically required is id-token: write.

The next step is where the credential-retrieving magic actually happens:

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        role-to-assume: arn:aws:iam::XXXXXXXXXX:role/github-oidc
        role-duration-seconds: 900
        aws-region: eu-west-1

where XXXXXXXXXX is the AWS account ID.

Pull the image from the private ECR registry

Here is a straightforward way of deploying a container whose image is hosted in a private ECR registry to the Kubernetes cluster.

Besides the familiar look of the Service and Deployment definitions, there are a couple of items that need to be highlighted (a manifest sketch follows the secret creation below):

  • ECR image registry URL: <aws_account_id>.dkr.ecr.<aws_region>.amazonaws.com/<image-name>:<tag>
  • <aws_account_id> - your 12-digit AWS account ID
  • <aws_region> - AWS region name
  • <image-name> - image name
  • <tag> - image tag, usually defines a version.
  • Image pull policy: setting it to Always forces the image to be pulled every time.

Now, we create a registry secret in the application's namespace that will be used to pull the image from the private ECR repository:

kubectl create secret docker-registry regcred \
--docker-server=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password)

This command uses the AWS CLI's aws ecr get-login-password to generate credentials and saves them in a secret of the special docker-registry type. More info about it is in the official Kubernetes docs.

Please note that the username is always set to AWS, for all accounts.
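
Putting these pieces together, the relevant part of a Deployment manifest could look like the sketch below; the name, account ID, region, and tag are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # <aws_account_id>.dkr.ecr.<aws_region>.amazonaws.com/<image-name>:<tag>
          image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:1.0.0
          imagePullPolicy: Always        # force a pull on every pod start
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred                  # the docker-registry secret created above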

Scan the image for vulnerabilities

And last but not least, it makes sense to add vulnerability scanning at the same time as we build our Docker image. Here we can leverage Trivy's GitHub Action to add vulnerability scanning and use GitHub code scanning to view the results. Code scanning is free for all public repositories.
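
A possible scan step, added alongside the image build in the workflow sketched earlier, is shown below; the inputs follow Trivy's published GitHub Action, and the image reference is a placeholder:

      - name: Scan the image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}
          format: sarif
          output: trivy-results.sarif

      - name: Upload the scan results to GitHub code scanning
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: trivy-results.sarif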

Now the CI part of our GitOps workflow is ready. When we commit to the main branch, the workflow is triggered automatically: it builds and pushes the image and updates the version of the new image in the Helm chart, which finally triggers Argo CD to deploy the Helm chart to the Kubernetes cluster. In the image below, you can see our build is successful.

And here we can access our WebApp from the browser 🎊🎉

You can find the source code of the project on my GitHub.

🌟Conclusion 🌟

And there you have it: with the full setup, we are able to separate CI and CD and automatically deploy applications into the cluster with Argo CD.

I hope you have enjoyed this hands-on tutorial and learned a bit more than you knew before. Let me know if you have any questions related to this blog.

Thank you for Reading !! 🙌🏻😁📃, see you in the next blog.🤘

🚀 Feel free to connect with me :

LinkedIn: https://www.linkedin.com/in/rajhi-saif/

Twitter : https://twitter.com/rajhisaifeddine

The end ✌🏻

Resources

🔰 Keep Learning !! Keep Sharing !! 🔰
