Continuous Deployment with Jenkins, GitOps, and Minikube: Deploying a Flask App to Kubernetes
Welcome to a complete journey of seamless automation and continuous deployment! In this article, we’ll unveil the power of Jenkins, GitOps, and Minikube as we automate the deployment of your Flask application.
Imagine having a deployment pipeline that springs into action every time you commit code changes to your GitHub repository, building and deploying your application effortlessly. Visualize your application updates flowing smoothly from development to your local Kubernetes cluster.
Below is a flow diagram that will guide you through each step, making complex concepts simple and approachable.
Let’s dive in and explore how this magic happens!
Explaining the Workflow Diagram
Now, let’s take a closer look at the workflow diagram that encapsulates our entire continuous deployment process:
- GitHub Webhooks: When changes are pushed to your Flask application’s GitHub repository, a webhook triggers the start of our deployment pipeline.
- Jenkins Automation: Jenkins, running on an AWS EC2 instance, receives the webhook event and kicks off two critical jobs: buildimage and updatemanifest.
- Docker Image Building: The buildimage job orchestrates the building of Docker images from your application’s source code. These images are then securely pushed to Docker Hub, ensuring they are readily accessible.
- Manifest Update: The updatemanifest job handles the critical task of updating Kubernetes deployment manifests, hosted in the “kubernetesmanifest-flask-app” GitHub repository. These manifests are the blueprint for our application’s deployment.
- GitOps with ArgoCD: ArgoCD watches the “kubernetesmanifest-flask-app” repository, continuously syncing the desired state with the actual state of your Minikube Kubernetes cluster.
- Local Kubernetes Cluster: Minikube, running on your local machine, provides a real Kubernetes environment for testing and validating changes before they reach production.
- Seamless Deployment: The result is a seamlessly orchestrated deployment process. Any code changes made to your Flask application are automatically built, tested, and deployed to your Minikube cluster, thanks to the power of GitOps.
This article draws inspiration from the informative YouTube tutorial by Cloud With Raj. For a detailed walkthrough of this setup, complete with hands-on demonstrations and expert insights, I recommend checking out the tutorial here: YouTube Tutorial by Cloud With Raj.
With this comprehensive understanding of our deployment workflow, let’s proceed to implement this process step by step.
Section 1: Prerequisites
Before we dive into the implementation of our continuous deployment pipeline, let’s make sure you have everything you need to get started. Here are the prerequisites for this tutorial:
- GitHub Account: You’ll need a GitHub account to host your Flask application’s source code and deployment manifests. If you don’t have one, you can sign up for free at GitHub.
- Docker Hub Account: Docker Hub will serve as the repository for your Docker images. If you don’t have a Docker Hub account, you can create one here: Docker Hub.
- AWS Account: To set up Jenkins on an EC2 instance, you’ll need an AWS account. If you don’t have one, you can create an AWS account here: AWS Account.
- No Prior Knowledge Required: You don’t need prior knowledge of Jenkins, Docker, or Kubernetes to follow along with this tutorial. I’ll guide you through each step, making it accessible even for beginners.
Section 2: Setting up Jenkins/Docker/Git on AWS EC2
Before we dive into configuring Jenkins on an AWS EC2 instance, it’s worth noting that there’s a detailed tutorial available on the official Jenkins website. This tutorial covers the entire process, from launching an EC2 instance to setting up Jenkins. If you prefer a step-by-step guide with additional details, you can check it out here: Tutorial for Installing Jenkins on AWS. Note, however, that you do not need to follow the steps that configure a cloud in Jenkins; that part is not necessary for this tutorial.
I called the instance jenkins-instance:
In this tutorial, I’ve chosen a t3.micro instance because it offers improved performance. However, it’s important to note that the t2.micro instance can also be used for this setup.
Now, let’s continue from the point where you have your EC2 instance up and running:
1. Setting Permissions for Your Private Key: Ensure that the private key file (key.pem) you use for SSH access to your EC2 instance has the correct permissions. Run the following command to set the appropriate permissions (replace your-key.pem with your key file's name):
chmod 400 your-key.pem
2. Accessing Your EC2 Instance: Use SSH to connect to your EC2 instance using the “ec2-user”. Replace your-key.pem and your-instance-ip with your key pair and the instance public IPv4 address or public DNS name:
ssh -i your-key.pem ec2-user@your-instance-ip
3. Downloading and Installing Jenkins: On your EC2 instance, we’ll download and install Jenkins. Execute these commands on your EC2 instance:
Ensure that your software packages are up to date on your instance by using the following command to perform a quick software update:
[ec2-user ~]$ sudo yum update -y
Add the Jenkins repo using the following command:
[ec2-user ~]$ sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
Import a key file from Jenkins-CI to enable installation from the package:
[ec2-user ~]$ sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
[ec2-user ~]$ sudo yum upgrade
The next step is the installation of Java, and it depends on which Amazon Linux version you are using. Use the cat command to display the Amazon Linux version:
[ec2-user ~]$ cat /etc/os-release
Install Java (for Amazon Linux 2): Skip this step if you have Amazon Linux 2023.
[ec2-user ~]$ sudo amazon-linux-extras install java-openjdk11 -y
Install Java (for Amazon Linux 2023):
[ec2-user ~]$ sudo dnf install java-11-amazon-corretto -y
Install Jenkins: (Yay!)
[ec2-user ~]$ sudo yum install jenkins -y
Enable the Jenkins service to start at boot:
[ec2-user ~]$ sudo systemctl enable jenkins
Start Jenkins as a service:
[ec2-user ~]$ sudo systemctl start jenkins
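If you want to confirm that Jenkins came up correctly before moving on, you can optionally check the service status (this step is not part of the original walkthrough, but it is a quick sanity check):

[ec2-user ~]$ sudo systemctl status jenkins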
4. Installing Docker: We’ll install Docker on your EC2 instance using the following commands:
Update the package repository:
[ec2-user ~]$ sudo yum update -y
Install Docker:
[ec2-user ~]$ sudo yum install docker -y
Start the Docker Service:
[ec2-user ~]$ sudo service docker start
Add the ec2-user to the docker group:
[ec2-user ~]$ sudo usermod -a -G docker ec2-user
The group change only takes effect on a new login session. As a quick workaround, run the following command to open up the permissions on the Docker socket:
[ec2-user ~]$ sudo chmod 666 /var/run/docker.sock
These commands ensure that Docker is installed, the Docker service is started, and the “ec2-user” can execute Docker commands without using sudo.
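As an optional check (not in the original steps), you can verify that the ec2-user can now talk to the Docker daemon without sudo:

[ec2-user ~]$ docker --version
[ec2-user ~]$ docker ps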
5. Installing Git: Install Git on your EC2 instance with the following command:
[ec2-user ~]$ sudo yum install git -y
Enter ‘y’ if prompted to install all the packages, and continue to the next step!
Section 3: Configuring Jenkins
Accessing Jenkins Web Interface: Jenkins should now be running on your EC2 instance. You can access the Jenkins web interface by opening a web browser and navigating to:
http://your-instance-ip:8080
You will be prompted to unlock Jenkins using an initial administrative password. To obtain this password, you can SSH into your EC2 instance and retrieve it using the following command:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Copy and paste the admin password and click on continue.
The Jenkins installation script directs you to the Customize Jenkins page. Click Install suggested plugins.
Once the installation is complete, the Create First Admin User will open. Enter your information, and then select Save and Continue.
On the left-hand side, select Manage Jenkins, and then select Manage Plugins.
Select the Available tab, and then enter Amazon EC2 plugin at the top right.
Follow the on-screen instructions to set up Jenkins, including installing the suggested plugins. For this demo, you also need to install the following Jenkins plugins:
- Amazon EC2 plugin: (No need to set up Configure Cloud after)
- Docker plugin
- Docker Pipeline
- GitHub Integration Plugin
- Parameterized trigger Plugin
3.2 Setting Up Jenkins Credentials
Now, let’s configure Jenkins to work seamlessly with both your GitHub and Docker Hub accounts. This step is crucial for securely accessing your repositories and Docker images during the deployment process.
Step 1: Accessing Jenkins Credentials
- To start, navigate to the Jenkins Dashboard. You can do this by clicking on the “Manage Jenkins” button on the left-hand side.
- In the “Manage Jenkins” page, locate and click on the “Manage Credentials” button. This action will take you to a page where you can configure various types of credentials that Jenkins can use.
Step 2: Creating Docker Hub Credentials
1. In the “Stores scoped to Jenkins” section, you will find the global domain. Click on it to proceed.
2. You will now be on the “Global Credentials (unrestricted)” page. Here, click on the “Add Credentials” button to begin configuring your Docker Hub credentials.
Step 3: Configuring Docker Hub Credentials
- In the “Kind” dropdown menu, select “Username with password.” This choice allows you to enter your Docker Hub username and password.
- In the “Username” field, input your Docker Hub username.
- In the “Password” field, provide your Docker Hub password that you use to login.
- Now, the most important part: in the “ID” field, enter “dockerhub”. This ID is significant because we will reference it in our Jenkinsfile when interacting with Docker Hub.
- Finally, click on the “Create” button to save your Docker Hub credentials securely within Jenkins.
3.3 Setting Up GitHub Credentials
In this section, we’ll configure Jenkins to access your GitHub account securely.
Step 1: Accessing Jenkins Credentials
Repeat steps 1 and 2 from the last section.
Step 2: Configuring GitHub Credentials
1. In the “Kind” dropdown menu, select “Username with password.”
2. In the “Username” field, enter your GitHub username. In my case, it’s “TadeopCreator.”
3. In the “Password” field, you won’t use your GitHub password. Instead, you’ll use a personal access token. Follow these steps to generate a personal access token:
- Log in to your GitHub account.
- Go to “Settings” from the dropdown menu in the upper right corner.
- In the left sidebar, select “Developer Settings” (found at the end of the list).
- Click on “Personal access tokens.”
- Select “Tokens (classic)” from the dropdown menu.
- Click “Generate new token” and choose the classic option.
- Under the “Note” input field, put “test gitops.”
- Under “Select scopes,” make sure to check the following options: “repo,” “admin:repo_hook,” and “notifications.”
- After selecting the scopes, you can create the token. It will look something like this:
- Copy this token and paste it into the “Password” field in Jenkins.
- In the “ID” field, enter “github.” This ID is essential because we will reference it later in our Jenkinsfile.
- Finally, click on the “Create” button to save your GitHub credentials securely within Jenkins.
Now we’ve created all the necessary credentials:
Section 4: Understanding the GitHub Repositories
Before we dive into creating Jenkins jobs and setting up the deployment process, let’s familiarize ourselves with the two GitHub repositories that form the backbone of our project.
GitHub Repository 1: flask-kubernetes-app
You can access the first repository here: flask-kubernetes-app. This repository serves as the foundation for our Flask application. Make sure to clone it using the “main” branch as it’s crucial for our project.
Contents of “flask-kubernetes-app” Repository:
1. app.py: This Python file contains the code for our simple Flask application. It displays a message on the root page.
2. requirements.txt: You’ll find a “requirements.txt” file with the Flask dependency, which is essential for our application.
3. Dockerfile: Our Dockerfile is used to containerize the Python program. It’s based on the Python 3.8 Docker image and performs the following tasks (a sketch of these three files appears right after this list):
- Copies over the “requirements.txt” file.
- Runs pip install -r requirements.txt to install all the requirements.
- Runs the Python Flask program to accept incoming connections.
4. Jenkinsfile: The Jenkinsfile in this repository is essential for our Jenkins jobs. It defines the buildimage job, which we’ll use to build the Docker image and push it to Docker Hub using the credentials we set up earlier.
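Before we turn to the Jenkinsfile in detail, here is a rough sketch of what those first three files might look like if you were assembling the repository yourself instead of forking mine. Treat it as an illustration based on the description above, not the exact contents of the flask-kubernetes-app repository; the greeting text and port 5000 are assumptions.

# Sketch only: recreate the three application files described above.
cat > app.py <<'EOF'
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Simple message shown on the root page
    return "Hello from Flask on Kubernetes!"

if __name__ == "__main__":
    # Listen on all interfaces so the container can accept traffic
    app.run(host="0.0.0.0", port=5000)
EOF

cat > requirements.txt <<'EOF'
Flask
EOF

cat > Dockerfile <<'EOF'
# Based on the Python 3.8 image, as described above
FROM python:3.8
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
EOF

With these files plus the Jenkinsfile, the buildimage job has everything it needs to produce a runnable image.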
Important Modification in Jenkinsfile:
In the Jenkinsfile, there are several stages:
- “Clone repository”: This stage clones this repository into the Jenkins environment.
- “Build image”: This stage builds the Docker container image using the Dockerfile in the repository.
Modify Line 12: Change app = docker.build("tadeop/test") to use your Docker Hub username and the name of the repository on Docker Hub. For example, if your Docker Hub username is "yourusername" and your repository name is "yourrepository," it should be: app = docker.build("yourusername/yourrepository").
- “Test image”: You can run unit tests or additional tests in this stage if needed.
- “Push image”: In this stage, we push the Docker image to Docker Hub using the credential ID dockerhub that we defined in the previous section. The image is pushed with a tag of the Jenkins build number using app.push("${env.BUILD_NUMBER}").
Important Note: If you haven’t created a Docker Hub repository to host the image, it’s essential to do so now and use its name to replace the line mentioned above.
- “Trigger ManifestUpdate”: This stage triggers the “updatemanifest” Jenkins job, which is responsible for updating the “deployment.yaml” file in another repository.
Sending a Parameter: Notice that we send a parameter called DOCKERTAG with the value of the BUILD_NUMBER of the job. This corresponds to the tag of the image on Docker Hub and is crucial for updating the “deployment.yaml” file in the other repository.
node {
    def app

    stage('Clone repository') {
        checkout scm
    }

    stage('Build image') {
        app = docker.build("tadeop/test")
    }

    stage('Test image') {
        app.inside {
            sh 'echo "Tests passed"'
        }
    }

    stage('Push image') {
        docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
            app.push("${env.BUILD_NUMBER}")
        }
    }

    stage('Trigger ManifestUpdate') {
        echo "triggering updatemanifestjob"
        build job: 'updatemanifest', parameters: [string(name: 'DOCKERTAG', value: env.BUILD_NUMBER)]
    }
}
GitHub Repository 2: kubernetesmanifest-flask-app
You can access it here: kubernetesmanifest-flask-app.
This repository contains only two files:
1. deployment.yaml: The “deployment.yaml” file defines the Kubernetes deployment for our Flask application. It specifies various deployment settings, including the container image to use. However, one specific modification needs to be made to ensure it uses the correct Docker image.
On line 21, you must modify the container image reference. Replace it with your Docker Hub username, the repository name (in my case “test”), and use the “latest” tag.
This “deployment.yaml” file also defines three replicas and a load balancer service to expose them (a minimal sketch of the manifest appears at the end of this section).
2. Jenkinsfile: The Jenkinsfile in this repository is crucial for the “updatemanifest” Jenkins job, which we’ll create in the next section. Several modifications are required here as well:
- The first stage, called “Clone repository,” is similar to the buildimage job explained earlier. It clones this repository into the Jenkins environment.
- The last stage, called “Update GIT,” utilizes the GitHub credentials created in the previous section with the ID github.
Jenkinsfile Code:
sh "git config user.email your-email@gmail.com"
sh "git config user.name your-github-username"
sh "cat deployment.yaml"
sh "sed -i 's+your-dockerhub-username/test.*+your-dockerhub-username/test:${DOCKERTAG}+g' deployment.yaml"
sh "cat deployment.yaml"
sh "git add ."
sh "git commit -m 'Done by Jenkins Job changemanifest: ${env.BUILD_NUMBER}'"
sh "git push @github.com/${GIT_USERNAME}/kubernetesmanifest-flask-app.git">https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GIT_USERNAME}/kubernetesmanifest-flask-app.git HEAD:main"
Here’s what each step does:
- It configures the Git user email and name with your GitHub credentials.
- It displays the content of the “deployment.yaml” file before any changes.
- It uses sed to update the “deployment.yaml” file, replacing the container image reference with the Docker image tag, which is provided as the DOCKERTAG parameter sent from the buildimage Jenkins job.
- It displays the “deployment.yaml” file after the modification.
- It adds, commits, and pushes the changes to the GitHub repository.
Be sure to replace “your-email@gmail.com”, “your-github-username”, and “your-dockerhub-username/test” with your actual values.
If your GitHub repository has a different name or if you’re working with a branch other than “main,” you must modify the following line in the Jenkinsfile accordingly:
sh "git push https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GIT_USERNAME}/kubernetesmanifest-flask-app.git HEAD:main"
Replace the repository name (“kubernetesmanifest-flask-app”) with your actual GitHub repository name, and if you’re using a branch other than “main,” replace “main” with your branch name.
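To round out the picture of this repository, here is roughly what a deployment.yaml matching the description above could look like: three replicas, a LoadBalancer Service, and the image line that the sed command rewrites. This is a sketch with assumed values (the flaskdemo name, container port 5000, service port 80), not the exact file from the kubernetesmanifest-flask-app repository, so prefer the real file if you fork the repo.

# Sketch only: a minimal manifest at the root of the manifest repository.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskdemo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flaskdemo
  template:
    metadata:
      labels:
        app: flaskdemo
    spec:
      containers:
        - name: flaskdemo
          # This is the line the updatemanifest job rewrites with the build tag
          image: your-dockerhub-username/test:latest
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flaskdemo-service
spec:
  type: LoadBalancer
  selector:
    app: flaskdemo
  ports:
    - port: 80
      targetPort: 5000
EOF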
Section 5: Creating Jenkins Jobs
In this section, we’ll create the Jenkins jobs. The first one is named buildimage. This job is responsible for building the Docker image, pushing it to Docker Hub, and triggering another Jenkins job called updatemanifest, as described earlier.
5.2 Creating the buildimage job
1. Start by navigating to your Jenkins dashboard.
2. Click on “New Item” to create a new job.
3. In the “Name” field, enter “buildimage” as the job name.
4. From the available options, select “Pipeline”:
5. Click “Ok” to proceed.
6. In the next page, scroll down to the “Pipeline” section.
7. Under “Definition,” choose “Pipeline script from SCM.”
8. From the SCM options, select “Git.”
9. Now, go to your “flask-kubernetes-app” repository where your Flask app is located, click on the “Code” button to access your repository’s code and copy the HTTPS URL of your repository:
10. Paste this URL into the “Repository URL” field in Jenkins.
- If your repository is private, you’ll need to provide your GitHub credentials here.
11. Under “Branch specifier”, change it from “master” to “main” (or your specific branch name, if different).
12. Ensure that your Pipeline settings match the options described above:
- Important: We’ll set up the GitHub trigger as a webhook to automate this job in a later section. For now, we’ll manually initiate this job.
13. Click “Save” to create the buildimage Jenkins job.
5.3 Creating the updatemanifest job
Now we’ll create the updatemanifest Jenkins job, responsible for updating the deployment.yaml file. This job will be triggered by the buildimage job created earlier.
1. Return to your Jenkins dashboard.
2. Click on “New Item” to create a new job.
3. Name the job “updatemanifest” (ensure it matches the job name referenced in the Jenkinsfile of your “flask-kubernetes-app” repository).
4. Select “Pipeline” from the available options:
5. Click “OK” to proceed.
6. In the job configuration page, check the box that says “This project is parameterized.”
7. Click on “Add Parameter” and select “String Parameter”:
8. Name the parameter “DOCKERTAG” and set the default value to “latest”:
- We will override this default value with the build job number from the “buildimage” job, as explained in the previous section where we explored the code in the repositories.
9. Under the “Pipeline” section, configure the job as follows:
- Under “Definition,” select “Pipeline script from SCM”.
- For SCM, choose “Git”.
- In the “Repository URL” field, provide the URL of the “kubernetesmanifest-flask-app” repository where the deployment.yaml file resides.
- Set the “Branch specifier” to “main”.
10. Confirm that your Pipeline settings match the options described above:
11. Click “Save” to create the updatemanifest Jenkins job.
These jobs work together to build the Docker image, push it to Docker Hub, and update the deployment.yaml file with the new image tag.
To view the list of Jenkins jobs, refer to the image below:
Let’s start by running the buildimage job:
As the buildimage job runs, it progresses through various stages. Refer to the image below:
Once the buildimage job completes, it triggers the execution of the “updatemanifest” job.
The “updatemanifest” job consists of two stages: “Clone repository” and “Update GIT.” View the image below:
After running the job, you can observe the changes on your GitHub repository. Below is an image showing the commit made by the "updatemanifest" job:
Let’s check Docker Hub to see the Docker image pushed by the buildimage job. The image should appear in the “test” repository with the tag “1”, and it does:
This demonstrates the successful execution of our Jenkins jobs in our CI/CD pipeline. The buildimage job creates and pushes the Docker image, while the updatemanifest job updates the Kubernetes deployment configuration 😎.
Section 6: Setting Up GitHub Webhooks
To automate the execution of the buildimage job whenever there’s a new push to your GitHub repository, we’ll set up GitHub Webhooks.
- Start by copying your Jenkins Dashboard URL. This is typically your AWS EC2 instance’s public DNS name or public IPv4 address combined with the Jenkins port (e.g., http://ec2-54-167-112-217.compute-1.amazonaws.com:8080/).
- Next, navigate to the GitHub repository where your Flask application is hosted, in my case, the “flask-kubernetes-app” repository.
- Click on the “Settings” tab within your repository.
- On the left-hand sidebar, select “Webhooks”.
- Click the “Add webhook” button to set up a new webhook.
- In the “Payload URL” field, paste the Jenkins Dashboard URL. Ensure you add github-webhook/ at the end of the URL to create the correct webhook endpoint.
- For the “Content type”, select “application/json”.
- Under the section labeled “Which events would you like to trigger this webhook?” choose the option “Just the push event”.
- Finally, click the “Add webhook” button to save your webhook configuration.
With GitHub Webhooks configured, your Jenkins pipeline will now automatically trigger the buildimage job whenever new code changes are pushed to your repository.
To enable this webhook to trigger the buildimage job, return to your Jenkins Dashboard:
1. Select the buildimage job from your list of Jenkins jobs.
2. Click on “Configure” to access the job’s configuration settings.
3. In the “Build Triggers” section, select the option “GitHub hook trigger for GITScm polling”:
4. Click “Save” to apply these changes.
Section 7: Setting Up GitOps with ArgoCD
In this section, we’ll dive into the world of GitOps and explore how to set up ArgoCD, a powerful tool for managing Kubernetes applications. GitOps is a modern approach to continuous delivery that leverages the principles of Git for managing and automating deployments. Before we proceed with the setup, let’s understand the key concepts and significance of GitOps.
7.2 What is GitOps?
GitOps is a methodology that extends the principles of Infrastructure as Code (IaC) to the entire continuous delivery process. At its core, GitOps relies on using Git repositories as the source of truth for both infrastructure and application code. Here’s a brief overview of its core principles:
- Declarative Configuration: GitOps relies on declarative configuration files (e.g., Kubernetes YAML manifests) stored in Git repositories to define the desired state of applications and infrastructure.
- Version Control: All configuration files are versioned in a Git repository, providing an audit trail of changes and enabling rollbacks to previous states.
- Automation: GitOps automation continuously monitors the Git repository for changes. When changes are detected, it automatically applies those changes to the target environment.
- Self-Healing: If the actual state of the target environment diverges from the declared state in Git, GitOps tools can automatically reconcile the differences, ensuring the desired state is maintained.
7.3 The Role of ArgoCD
ArgoCD is a popular GitOps tool that provides a powerful and intuitive way to manage Kubernetes applications. It excels at:
- Git Synchronization: ArgoCD continuously syncs with Git repositories, ensuring that the actual state of your Kubernetes cluster matches the desired state defined in your Git repository.
- Automated Deployments: It automates application deployments, allowing you to roll out changes with confidence.
- Rollback Capabilities: ArgoCD makes it easy to roll back to a previous version of your application in case of issues or errors.
7.4 Why GitOps with ArgoCD Matters
The GitOps approach with ArgoCD offers several benefits:
- Consistency: It ensures consistency between development, testing, and production environments, reducing the risk of configuration drift.
- Visibility: Git repositories provide a transparent view of all changes and configurations, enhancing collaboration and troubleshooting.
- Security: Role-based access control (RBAC) ensures that only authorized personnel can make changes, enhancing security.
7.5 Installing ArgoCD
In this section, we’ll walk through the steps mentioned on this page, primarily for Minikube, but you can adapt these steps to your preferred Kubernetes setup; for example, you could use the EKS service from AWS.
1. Installing kubectl:
The first step is to ensure you have the kubectl command-line tool installed. In my case, I’m on macOS (Intel).
- Visit the Kubernetes documentation to install kubectl; I use curl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
- Validate the binary installation (optional).
- Make the kubectl binary executable and move it to a directory included in your system PATH.
Here’s how you can make kubectl executable and move it:
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl
Verify the installation with:
kubectl version --client
2. Setting Up Minikube (Local Kubernetes):
For a local Kubernetes cluster, you can use Minikube. Follow these steps:
- Install Minikube. Visit the Minikube documentation for installation instructions.
- Start the Minikube cluster with:
minikube start
Ensure that Docker Desktop is running on your local machine for this to work.
- Simplify your terminal commands by creating an alias for kubectl:
alias kubectl="minikube kubectl --"
You can then use kubectl commands seamlessly with your Minikube cluster.
- Access the Minikube dashboard:
minikube dashboard
With Minikube up and running, we’re ready to install ArgoCD. Here’s how:
- Create a new Kubernetes namespace for ArgoCD:
kubectl create namespace argocd
- Apply the ArgoCD installation manifest:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
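It can take a minute or two for all ArgoCD components to start. Before moving on, you can optionally confirm that the pods in the argocd namespace are running:

kubectl get pods -n argocd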
7.6 Installing ArgoCD CLI
To interact with ArgoCD, you’ll need the ArgoCD CLI. If you’re on macOS, you can install it using Homebrew:
brew install argocd
Check ArgoCD CLI installation instructions to install it on other machines.
7.7 Accessing ArgoCD
ArgoCD offers multiple ways to access its API server, and we’ll use port forwarding for simplicity. Run the following command:
kubectl port-forward svc/argocd-server -n argocd 8080:443
This command will keep running, so open a new terminal tab for the next steps.
Now, access the ArgoCD API server by visiting https://localhost:8080/. Your browser might warn about the self-signed certificate; you can ignore it as we are running on localhost.
Use the following credentials to log in:
- Username: admin
- Password: Generate the password with argocd admin initial-password -n argocd, following the instructions on the ArgoCD webpage. If you're on a Windows machine, you may need to decode the base64 code.
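If you prefer working from the terminal, you can also read the initial admin password from the Kubernetes secret and log in with the ArgoCD CLI. This is an optional alternative: the port-forward from the previous step must still be running, and the --insecure flag is needed because of the self-signed certificate.

# Print the initial admin password (alternative to the argocd admin command above)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

# Log in with the CLI through the local port-forward (you will be prompted for the password)
argocd login localhost:8080 --username admin --insecure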
7.8 Setting up ArgoCD
Now that we’ve successfully logged into the ArgoCD web API and have Docker Desktop running our local Kubernetes cluster using Minikube, it’s time to create a new application within the ArgoCD interface.
1. Create a New Application in ArgoCD:
- Click the “New app” button.
In the General section, specify the following:
- In the “Application Name” field, provide a name for your application. I’ve named it “flaskdemo” to match the deployment name.
- Under “Project Name,” select “default.”
- For “SYNC POLICY,” choose Automatic. This option ensures that ArgoCD checks the GitHub repository (kubernetesmanifest-flask-app) every 3 minutes. If it detects any differences between the desired and actual states, it will automatically apply the changes.
In the Source section, specify the following:
- Repository URL: The HTTPS URL of the GitHub repository containing the deployment.yaml file. For example: https://github.com/TadeopCreator/kubernetesmanifest-flask-app.git
- Path: The path to the deployment.yaml file within the GitHub repository. In this case, “./” points to the root directory.
In the Destination section, configure the deployment target:
- Cluster URL: Select https://kubernetes.default.svc.
- Namespace: Set it to “default”.
- Click on “CREATE” to create the application.
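If you would rather create the application from the command line instead of the web UI, the ArgoCD CLI can do the same thing. This assumes you logged in with argocd login as shown earlier; swap in your own repository URL if you forked the manifest repo.

argocd app create flaskdemo \
  --repo https://github.com/TadeopCreator/kubernetesmanifest-flask-app.git \
  --path . \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated

You can then check its status with argocd app get flaskdemo.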
Section 8: Verify Deployment
Once the application is deployed, you can view its details in the ArgoCD interface:
In a new terminal, check that the three pods associated with your application are running (we’ll look at the load balancer service in a moment):
kubectl get pods
8.2 Access the Application
Before obtaining the load balancer URL, it’s important to note that if you are using a different load balancer provider like AWS, you won’t need to perform this specific step. However, when working with Minikube, a tool for running a local Kubernetes cluster, there’s an additional step required.
With Minikube, we need to run the command minikube tunnel. This command establishes a tunnel that connects our local machine to the Kubernetes cluster at the address 127.0.0.1. Once this tunnel is in place, we can execute the command kubectl get svc to retrieve information about the services within our cluster.
minikube tunnel
kubectl get svc
When we run kubectl get svc, it provides us with the URL to access the load balancer. In the case of Minikube, because of the tunneling, the external IP is automatically set to 127.0.0.1 on our local machine. This configuration allows us to conveniently access the services as if they were hosted locally on our computer.
Copy and paste the URL into your browser, and you should see your Flask application running:
8.3 Testing the GitOps Workflow
To test the GitOps workflow, make a change to your GitHub repository (e.g., modify the HTML text). This change should trigger the buildimage Jenkins job automatically via the webhook:
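Any small commit works. For example, after editing the message in app.py in your local clone of flask-kubernetes-app, a push like this is enough to fire the webhook (the file name and commit message here are just illustrative):

git add app.py
git commit -m "Update greeting text"
git push origin main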
The buildimage job will build a new Docker image with a tag, such as “2”, and push it to your Docker Hub repository:
Subsequently, the buildimage job will trigger the updatemanifest job:
The updatemanifest job will update the deployment.yaml file in the kubernetesmanifest-flask-app GitHub repository:
ArgoCD will detect the change, terminate the old pods, and create new pods with the updated Docker image:
Now you can refresh the application page in your browser, and you’ll see the updated content, confirming that the GitOps workflow has successfully deployed changes!
Congratulations! You’ve achieved full-cycle deployment using GitOps principles.
8.4 Cleaning Up the Environment
To ensure you don’t incur any unnecessary charges or resource usage, it’s essential to clean up the environment after you’ve completed your GitOps setup. Here are the steps to follow:
- Terminate the EC2 Instance: If you launched an EC2 instance to host Jenkins, it’s important to terminate it when you’re done. This can be done through the AWS Management Console by selecting your EC2 instance and choosing the “Terminate” option:
2. Stop Minikube: If you used Minikube to run a local Kubernetes cluster for testing, you should stop it to release system resources. Open a terminal and execute the following command:
minikube stop
3. Close Running Terminals: Ensure that all terminals or command-line sessions running ArgoCD and Minikube dashboard are closed. These should be terminated to prevent any background processes from consuming resources.
Section 9: Conclusion
This tutorial has taken you through the process of setting up a robust and automated continuous deployment pipeline using Jenkins, GitOps, and ArgoCD. Let’s recap the key takeaways from this tutorial:
- GitOps Simplifies Deployment: GitOps is a powerful methodology that simplifies and streamlines the deployment process. By managing your infrastructure and application deployments through Git repositories, you achieve greater consistency, traceability, and control over your deployments.
- Jenkins for CI/CD: Jenkins serves as the core of our continuous integration and continuous deployment (CI/CD) pipeline. It allows us to automate the building and pushing of Docker images, as well as updating Kubernetes manifests.
- Docker for Containerization: Docker is an essential tool for containerizing applications, making them easily portable and scalable. It enables us to package our Flask application into a Docker container for consistent deployment.
- ArgoCD for GitOps: ArgoCD acts as the GitOps engine, continuously monitoring our Git repositories for changes and ensuring that the Kubernetes cluster’s state matches the desired state defined in our manifests.
- GitHub Webhooks: Leveraging GitHub Webhooks, we set up automatic triggers for Jenkins jobs upon code changes in our repository. This automation streamlines the development and deployment process.
- Local Kubernetes Cluster with Minikube: We used Minikube to set up a local Kubernetes cluster for testing and development purposes. This approach allows us to simulate a real production environment locally.
I encourage you to take the knowledge gained from this tutorial and apply it to your projects. Whether you’re working on a personal project or within a team, GitOps principles can help you deliver software more effectively and with greater confidence. Happy deploying!
Section 10: Acknowledgments
I would like to express my gratitude to:
- The Cloud With Raj YouTube channel by Saha Rajdeep, which inspired this tutorial and where I learned the fundamentals of DevOps, Jenkins, and GitOps practices. The clear explanations and practical demonstrations provided valuable insights for this tutorial. See the video on this link.
- The open-source communities behind tools like Jenkins, Docker, Kubernetes, ArgoCD, and Minikube, of course.
- Last but not least, I would like to thank you for taking the time to explore this tutorial.