Creating a Full CI/CD Pipeline on AWS with Jenkins, Slack, and GitHub: Part 5— Jenkins and Web App User Data

Rapidcode Technologies
9 min read · Nov 23, 2023


In this fifth part, we’re diving into the Web App and Jenkins setup. We’ll ensure everything’s good to go by setting up user data. For Jenkins, we’ll create some scripts. These scripts will be stored in the appropriate S3 bucket, so we won’t run into any character limits when they’re pulled down and executed as user data.

Part 1 (here) → We’ll kick things off by setting up our project. We’ll download a Web App to test our infrastructure and pipeline. We’ll also create and test some Dockerfiles for the project and upload it all to GitHub.

Part 2 (here) → We’ll get Slack in on the action. We’ll create a Bot for Jenkins to keep us posted on how the pipeline’s doing.

Part 3 (here) → It’s time to build the AWS Infrastructure with Terraform. We’ll whip up some EC2 instances, set up SSH keys, create the network infrastructure, and lay the foundation for IAM roles.

Part 4 (here) → We’re not done with AWS yet. In this step, we’ll make S3 buckets, and ECR repositories, and finish defining the IAM roles with the right policies.

Part 5 (Right now) → We’ll fine-tune our Jenkins and Web App instances by making sure the user data is just right.

Part 6 (here) → We’ll put the icing on the cake by implementing the pipeline in a Jenkinsfile. We’ll run the pipeline and see everything come together smoothly. Then, we’ll wrap things up with some final thoughts.

Let’s get started!

Set Up User Data

1. Web App User Data

We have already created a user_data.sh in Terraform/application-server/

So, in that user_data.sh we are going to put the following:

#! /bin/bash

sudo yum update -y

# Install Docker
sudo amazon-linux-extras install -y docker

# Start Docker
sudo systemctl start docker
sudo systemctl enable docker

# Create a shell script that logs in to ECR and runs the image tagged simple-web-app:release.
# The delimiter is quoted ('EOT') so the $(aws ecr get-login-password ...) substitution is
# copied literally and runs at each boot, instead of baking in a password that expires.
cat << 'EOT' > start-website
/bin/sh -e -c 'echo $(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin ${repository_url}'
sudo docker pull ${repository_url}:release
# Run detached (-d) so the per-boot script does not block cloud-init
sudo docker run -d -p 80:8000 ${repository_url}:release
EOT

# Move the script into the cloud-init per-boot folder so it runs after every boot
sudo mv start-website /var/lib/cloud/scripts/per-boot/start-website

# Mark the script as executable
sudo chmod +x /var/lib/cloud/scripts/per-boot/start-website

# Run the script
/var/lib/cloud/scripts/per-boot/start-website
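A quick aside on the heredoc above: its behaviour depends on whether the delimiter is quoted. Unquoted, the `$(aws ecr get-login-password …)` substitution is expanded once, when user data writes the file, baking in a password that ECR expires after 12 hours; quoted, the text is copied literally and the command runs at every boot. A minimal, locally runnable sketch of the difference:

```shell
# Unquoted delimiter: $(...) is expanded when the file is written
cat << EOT > /tmp/demo-unquoted
value=$(echo resolved-at-write-time)
EOT

# Quoted delimiter: the $(...) text is copied literally and only runs
# when the generated script itself is executed later
cat << 'EOT' > /tmp/demo-quoted
value=$(echo resolved-at-run-time)
EOT

cat /tmp/demo-unquoted   # value=resolved-at-write-time
cat /tmp/demo-quoted     # value=$(echo resolved-at-run-time)
```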

2. Jenkins User Data

Let’s modify jenkins-server/main.tf; in the user_data argument we are going to put the following:

user_data = templatefile(
  "${path.module}/user_data.sh",
  {
    repository_url         = var.repository-url,
    repository_test_url    = var.repository-test-url,
    repository_staging_url = var.repository-staging-url,
    instance_id            = var.instance-id,
    bucket_logs_name       = var.bucket-logs-name,
    public_dns             = var.public-dns,
    admin_username         = var.admin-username,
    admin_password         = var.admin-password,
    admin_fullname         = var.admin-fullname,
    admin_email            = var.admin-email,
    remote_repo            = var.remote-repo,
    job_name               = var.job-name,
    job_id                 = var.job-id,
    bucket_config_name     = var.bucket-config-name
  }
)

Our complete main.tf file now looks like this:

resource "aws_instance" "default" {
  ami                  = var.ami-id
  iam_instance_profile = var.iam-instance-profile
  instance_type        = var.instance-type
  key_name             = var.key-pair

  network_interface {
    device_index         = var.device-index
    network_interface_id = var.network-interface-id
  }

  user_data = templatefile(
    "${path.module}/user_data.sh",
    {
      repository_url         = var.repository-url,
      repository_test_url    = var.repository-test-url,
      repository_staging_url = var.repository-staging-url,
      instance_id            = var.instance-id,
      bucket_logs_name       = var.bucket-logs-name,
      public_dns             = var.public-dns,
      admin_username         = var.admin-username,
      admin_password         = var.admin-password,
      admin_fullname         = var.admin-fullname,
      admin_email            = var.admin-email,
      remote_repo            = var.remote-repo,
      job_name               = var.job-name,
      job_id                 = var.job-id,
      bucket_config_name     = var.bucket-config-name
    }
  )

  tags = {
    Name = var.name
  }
}

And now configure user_data.sh file, open it, and paste the following code:

#! /bin/bash

sudo yum update -y

# Install Git
sudo yum install -y git

# Install Jenkins

sudo wget -O /etc/yum.repos.d/jenkins.repo \
https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
sudo yum upgrade -y
sudo dnf install java-11-amazon-corretto -y
sudo yum install jenkins -y


# Enable jenkins to run on boot
sudo systemctl enable jenkins

# Start Jenkins
sudo systemctl start jenkins

sudo systemctl daemon-reload



# Install Docker
sudo yum install docker -y

# Start Docker
sudo systemctl start docker

# Enable Docker to run on boot
sudo systemctl enable docker

# Let Jenkins and the current user use docker
sudo usermod -a -G docker ec2-user
sudo usermod -a -G docker jenkins

# Create the opt folder in the jenkins home
sudo mkdir /var/lib/jenkins/opt
sudo chown jenkins:jenkins /var/lib/jenkins/opt

# Download and install arachni as jenkins user
wget https://github.com/Arachni/arachni/releases/download/v1.5.1/arachni-1.5.1-0.5.12-linux-x86_64.tar.gz
tar -zxf arachni-1.5.1-0.5.12-linux-x86_64.tar.gz
rm arachni-1.5.1-0.5.12-linux-x86_64.tar.gz
sudo chown -R jenkins:jenkins arachni-1.5.1-0.5.12/
sudo mv arachni-1.5.1-0.5.12 /var/lib/jenkins/opt

# Save the instance_id, repositories urls and bucket name to use in the pipeline
sudo /bin/bash -c "echo ${repository_url} > /var/lib/jenkins/opt/repository_url"
sudo /bin/bash -c "echo ${repository_test_url} > /var/lib/jenkins/opt/repository_test_url"
sudo /bin/bash -c "echo ${repository_staging_url} > /var/lib/jenkins/opt/repository_staging_url"
sudo /bin/bash -c "echo ${instance_id} > /var/lib/jenkins/opt/instance_id"
sudo /bin/bash -c "echo ${bucket_logs_name} > /var/lib/jenkins/opt/bucket_name"

# Change ownership and group of these files
sudo chown -R jenkins:jenkins /var/lib/jenkins/opt/

# Wait for Jenkins to boot up
sudo sleep 60


#####################################################
####### SET UP JENKINS #######
#####################################################

#---------------------------------------------#
#------> DEFINE THE GLOBAL VARIABLES <--------#
#---------------------------------------------#

export url="http://${public_dns}:8080"
export user="${admin_username}"
export password="${admin_password}"
export admin_fullname="${admin_fullname}"
export admin_email="${admin_email}"
export remote="${remote_repo}"
export jobName="${job_name}"
export jobID="${job_id}"

#---------------------------------------------#
#-----> COPY THE CONFIG FILES FROM S3 <-------#
#---------------------------------------------#

sudo aws s3 cp s3://${bucket_config_name}/ ./ --recursive
sudo chmod +x *.sh

#---------------------------------------------#
#----------> RUN THE CONFIG FILES <----------#
#---------------------------------------------#

./create_admin_user.sh
./download_install_plugins.sh
sudo sleep 120
./confirm_url.sh
./create_credentials.sh

# Output the credentials id in a credentials_id file
python3 -c "import sys;import json;print(json.loads(sys.stdin.read())['credentials'][0]['id'])" <<< $(./get_credentials_id.sh) > credentials_id

./create_multibranch_pipeline.sh

#---------------------------------------------#
#---------> DELETE THE CONFIG FILES <---------#
#---------------------------------------------#

sudo rm *.sh credentials_id

reboot

Now, the script create_credentials.sh will create the Jenkins credentials needed to access GitHub (we will store the GitHub private key there). These credentials have an ID, which is needed in the next script, create_multibranch_pipeline.sh .

To retrieve this ID we use the script get_credentials_id.sh . This script makes a GET request to the Jenkins API, which returns a JSON document describing the credentials. The code python -c "import sys;import json;print(json.loads(......)) > credentials_id parses that JSON, extracts the ID, and writes it to a file named credentials_id , so that create_multibranch_pipeline.sh can easily read it back.
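To make the parsing step concrete, here is the same extraction run against a hypothetical sample of the JSON the credentials endpoint returns (the credential id and `_class` value below are invented; python3 is assumed on the PATH):

```shell
# Hypothetical example of the JSON returned by get_credentials_id.sh
sample='{"_class":"SomeCredentialsWrapper","credentials":[{"id":"a1b2c3d4-e5f6"}]}'

# Extract the first credential id, as the user data does before
# running create_multibranch_pipeline.sh
echo "$sample" | python3 -c "import sys,json;print(json.loads(sys.stdin.read())['credentials'][0]['id'])"
# → a1b2c3d4-e5f6
```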

3. Create Jenkins Configuration Files

Since the user data above references several scripts, we need to create them as well.

In the Terraform root directory, create a jenkins-config folder.

And create the following 6 scripts:

confirm_url.sh
create_admin_user.sh
create_credentials.sh
create_multibranch_pipeline.sh
download_install_plugins.sh
get_credentials_id.sh

We need to upload all these files to the jenkins-config bucket. To do that, we add a new resource of type aws_s3_object that uploads every file in the jenkins-config folder to the bucket. In the s3.tf file, add the following resource:

resource "aws_s3_object" "jenkins-config" {
  bucket   = aws_s3_bucket.jenkins-config.id
  for_each = fileset("jenkins-config/", "*")
  key      = each.value
  source   = "jenkins-config/${each.value}"
  etag     = filemd5("jenkins-config/${each.value}")
}

And now let’s create the scripts.

1. create_admin_user.sh

With this script, our Jenkins admin account will be created.

#! /bin/bash
old_password=$(sudo cat /var/lib/jenkins/secrets/initialAdminPassword)

# NEW ADMIN CREDENTIALS URL ENCODED USING PYTHON
# Note: these one-liners use Python 3 syntax, so python3 must be the interpreter
password_URLEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$password")
username_URLEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$user")
fullname_URLEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$admin_fullname")
email_URLEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$admin_email")

# GET THE CRUMB AND COOKIE
cookie_jar="$(mktemp)"
full_crumb=$(curl -u "admin:$old_password" --cookie-jar "$cookie_jar" $url/crumbIssuer/api/xml?xpath=concat\(//crumbRequestField,%22:%22,//crumb\))
arr_crumb=(${full_crumb//:/ })
only_crumb=$(echo ${arr_crumb[1]})

# MAKE THE REQUEST TO CREATE AN ADMIN USER
curl -X POST -u "admin:$old_password" $url/setupWizard/createAdminUser \
-H "Accept: application/json, text/javascript" \
-H "X-Requested-With: XMLHttpRequest" \
-H "$full_crumb" \
-H "Content-Type: application/x-www-form-urlencoded" \
--cookie $cookie_jar \
--data-raw "username=$username_URLEncoded&password1=$password_URLEncoded&password2=$password_URLEncoded&fullname=$fullname_URLEncoded&email=$email_URLEncoded&Jenkins-Crumb=$only_crumb&json=%7B%22username%22%3A%20%22$username_URLEncoded%22%2C%20%22password1%22%3A%20%22$password_URLEncoded%22%2C%20%22%24redact%22%3A%20%5B%22password1%22%2C%20%22password2%22%5D%2C%20%22password2%22%3A%20%22$password_URLEncoded%22%2C%20%22fullname%22%3A%20%22$fullname_URLEncoded%22%2C%20%22email%22%3A%20%22$email_URLEncoded%22%2C%20%22Jenkins-Crumb%22%3A%20%22$only_crumb%22%7D&core%3Aapply=&Submit=Save&json=%7B%22username%22%3A%20%22$username_URLEncoded%22%2C%20%22password1%22%3A%20%22$password_URLEncoded%22%2C%20%22%24redact%22%3A%20%5B%22password1%22%2C%20%22password2%22%5D%2C%20%22password2%22%3A%20%22$password_URLEncoded%22%2C%20%22fullname%22%3A%20%22$fullname_URLEncoded%22%2C%20%22email%22%3A%20%22$email_URLEncoded%22%2C%20%22Jenkins-Crumb%22%3A%20%22$only_crumb%22%7D"
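The crumb handling above is reused by every script that follows, so it is worth seeing in isolation. The crumb issuer endpoint returns a string shaped like `Jenkins-Crumb:<value>`; the bash parameter expansion replaces the colon with a space and lets word splitting build an array (the crumb value below is invented):

```shell
# Shape of the crumbIssuer response (the hex value is a made-up example)
full_crumb='Jenkins-Crumb:7a1b2c3d4e5f'

# Replace ':' with a space; unquoted expansion word-splits into an array
arr_crumb=(${full_crumb//:/ })
only_crumb=${arr_crumb[1]}

echo "$only_crumb"   # 7a1b2c3d4e5f
```

`$full_crumb` itself is passed as the `-H` header, while `$only_crumb` is the bare value embedded in form payloads.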

2. download_install_plugins.sh

In this script, we specify the plugins to install. The defaults would be:

'plugins':['cloudbees-folder','antisamy-markup-formatter','build-timeout','credentials-binding','timestamper','ws-cleanup','ant','gradle','workflow-aggregator','github-branch-source','pipeline-github-lib','pipeline-stage-view','git','ssh-slaves','matrix-auth','pam-auth','ldap','email-ext','mailer']

#! /bin/bash
cookie_jar="$(mktemp)"
full_crumb=$(curl -u "$user:$password" --cookie-jar "$cookie_jar" $url/crumbIssuer/api/xml?xpath=concat\(//crumbRequestField,%22:%22,//crumb\))
arr_crumb=(${full_crumb//:/ })
only_crumb=$(echo ${arr_crumb[1]})

# MAKE THE REQUEST TO DOWNLOAD AND INSTALL REQUIRED MODULES
curl -X POST -u "$user:$password" $url/pluginManager/installPlugins \
-H 'Accept: application/json, text/javascript, */*; q=0.01' \
-H 'X-Requested-With: XMLHttpRequest' \
-H "$full_crumb" \
-H 'Content-Type: application/json' \
-H 'Accept-Language: en,en-US;q=0.9,it;q=0.8' \
--cookie $cookie_jar \
--data-raw "{'dynamicLoad':true,'plugins':['cloudbees-folder','antisamy-markup-formatter','build-timeout','credentials-binding','timestamper','ws-cleanup','ant','gradle','workflow-aggregator','github-branch-source','pipeline-github-lib','pipeline-stage-view','git','ssh-slaves','matrix-auth','pam-auth','ldap','email-ext','mailer','bitbucket','docker-workflow','blueocean'],'Jenkins-Crumb':'$only_crumb'}"

3. confirm_url.sh

This script sets up the Jenkins URL that is used to access the Jenkins server from the web.

#! /bin/bash
url_urlEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$url")

cookie_jar="$(mktemp)"
full_crumb=$(curl -u "$user:$password" --cookie-jar "$cookie_jar" $url/crumbIssuer/api/xml?xpath=concat\(//crumbRequestField,%22:%22,//crumb\))
arr_crumb=(${full_crumb//:/ })
only_crumb=$(echo ${arr_crumb[1]})

curl -X POST -u "$user:$password" $url/setupWizard/configureInstance \
-H 'Accept: application/json, text/javascript, */*; q=0.01' \
-H 'X-Requested-With: XMLHttpRequest' \
-H "$full_crumb" \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Accept-Language: en,en-US;q=0.9,it;q=0.8' \
--cookie $cookie_jar \
--data-raw "rootUrl=$url_urlEncoded%2F&Jenkins-Crumb=$only_crumb&json=%7B%22rootUrl%22%3A%20%22$url_urlEncoded%2F%22%2C%20%22Jenkins-Crumb%22%3A%20%22$only_crumb%22%7D&core%3Aapply=&Submit=Save&json=%7B%22rootUrl%22%3A%20%22$url_urlEncoded%2F%22%2C%20%22Jenkins-Crumb%22%3A%20%22$only_crumb%22%7D"
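The python3 one-liner that URL-encodes values appears in most of these scripts; it can be sanity-checked locally with any string (python3 assumed on the PATH):

```shell
# URL-encode a sample value exactly as the scripts do
value='http://example.com:8080/'
encoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$value")
echo "$encoded"   # http%3A%2F%2Fexample.com%3A8080%2F
```

`safe=''` matters: by default `quote` leaves `/` unescaped, but form fields like `rootUrl` need every reserved character percent-encoded.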

4. create_credentials.sh

At this point, we need to create the credentials that allow Jenkins to access the GitHub repo. We will create Jenkins credentials storing the SSH private key pulled down from AWS Secrets Manager.

#! /bin/bash

# Retrieve Secrets and Extract the Private key using a python command
aws secretsmanager get-secret-value --secret-id nodejs-web-app5 --region us-east-1 | python3 -c "import sys;import json;print(json.loads(json.loads(sys.stdin.read())['SecretString'])['private'])" > ssh_tmp

# Correctly parse the new line characters and store the key in a variable
ssh_private_key=$(awk -v ORS='\\n' '1' ssh_tmp)

rm ssh_tmp

cookie_jar="$(mktemp)"
full_crumb=$(curl -u "$user:$password" --cookie-jar "$cookie_jar" $url/crumbIssuer/api/xml?xpath=concat\(//crumbRequestField,%22:%22,//crumb\))
arr_crumb=(${full_crumb//:/ })
only_crumb=$(echo ${arr_crumb[1]})

curl -u "$user:$password" -X POST "$url/credentials/store/system/domain/_/createCredentials" \
-H "$full_crumb" \
--cookie $cookie_jar \
--data-urlencode "json={
'': '2',
'credentials': {
'scope': 'GLOBAL',
'id': '',
'username': 'Git',
'password': '',
'description': '',
'privateKeySource': {
'value': '0',
'stapler-class': 'com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey\$DirectEntryPrivateKeySource',
'privateKey': \"$ssh_private_key\"
},
'stapler-class': 'com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey',
},
'Jenkins-Crumb': '$only_crumb'
}"
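The `awk -v ORS='\\n' '1'` trick above replaces each real newline with the literal two characters `\n`, which is how a multi-line private key must be embedded in the JSON payload. A small local check (demo file, not a real key):

```shell
# Write a fake multi-line "key" (placeholder content, not a real private key)
printf 'line-one\nline-two\n' > /tmp/fake_key

# Join the lines with literal \n sequences, as create_credentials.sh does
joined=$(awk -v ORS='\\n' '1' /tmp/fake_key)
echo "$joined"   # line-one\nline-two\n  (a single line, with literal backslash-n)
```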

5. get_credentials_id.sh

This script returns a JSON document with a known structure. That JSON is parsed by the code in jenkins-server/user_data.sh (just before the ./create_multibranch_pipeline.sh line):

#! /bin/bash

cookie_jar="$(mktemp)"
full_crumb=$(curl -u "$user:$password" --cookie-jar "$cookie_jar" $url/crumbIssuer/api/xml?xpath=concat\(//crumbRequestField,%22:%22,//crumb\))

curl -u "$user:$password" -X GET "$url/credentials/store/system/domain/_/api/json?tree=credentials[id]" \
-H "$full_crumb" \
--cookie $cookie_jar

6. create_multibranch_pipeline.sh

  • First, we grab the ID of the credentials stored in the credentials_id file.
  • We URL-encode the job name and the remote URL of the GitHub repo.
  • We grab the crumb token and the cookie.
  • We create the job with a POST request to /createItem , passing the content of the jobName variable as the job name and specifying that it must be a multibranch pipeline.
  • Finally, we configure the actual pipeline by making a POST request to /job/$jobName_URLEncoded/configSubmit .
#! /bin/bash
credentials=$(cat credentials_id) # output from get_credentials_id.sh

jobName_URLEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$jobName")
remote_URLEncoded=$(python3 -c "import urllib.parse;print(urllib.parse.quote(input(), safe=''))" <<< "$remote")

cookie_jar="$(mktemp)"
full_crumb=$(curl -u "$user:$password" --cookie-jar "$cookie_jar" $url/crumbIssuer/api/xml?xpath=concat\(//crumbRequestField,%22:%22,//crumb\))
arr_crumb=(${full_crumb//:/ })
only_crumb=$(echo ${arr_crumb[1]})

# Create Job

curl -u "$user:$password" -X POST "$url/createItem" \
-H "$full_crumb" \
--cookie $cookie_jar \
--data-urlencode "name=$jobName" \
--data-urlencode "mode=org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject" \
--data-urlencode "Jenkins-Crumb=$only_crumb"

# Config

curl -u "$user:$password" -X POST "$url/job/$jobName_URLEncoded/configSubmit" \
-H "$full_crumb" \
--cookie $cookie_jar \
--data-raw "_.displayNameOrNull=&_.description=&stapler-class=jenkins.plugins.git.GitSCMSource&id=$jobID&_.remote=$remote_URLEncoded&includeUser=false&_.credentialsId=$credentials&stapler-class=jenkins.plugins.git.traits.BranchDiscoveryTrait&%24class=jenkins.plugins.git.traits.BranchDiscoveryTrait&stapler-class=jenkins.branch.DefaultBranchPropertyStrategy&%24class=jenkins.branch.DefaultBranchPropertyStrategy&stapler-class=jenkins.branch.NamedExceptionsBranchPropertyStrategy&%24class=jenkins.branch.NamedExceptionsBranchPropertyStrategy&stapler-class=jenkins.branch.BranchSource&kind=jenkins.branch.BranchSource&_.scriptPath=Jenkinsfile&stapler-class=org.jenkinsci.plugins.workflow.multibranch.WorkflowBranchProjectFactory&%24class=org.jenkinsci.plugins.workflow.multibranch.WorkflowBranchProjectFactory&_.interval=1d&stapler-class=com.cloudbees.hudson.plugins.folder.computed.DefaultOrphanedItemStrategy&%24class=com.cloudbees.hudson.plugins.folder.computed.DefaultOrphanedItemStrategy&_.pruneDeadBranches=on&_.daysToKeepStr=&_.numToKeepStr=&stapler-class=com.cloudbees.hudson.plugins.folder.icons.StockFolderIcon&%24class=com.cloudbees.hudson.plugins.folder.icons.StockFolderIcon&stapler-class=jenkins.branch.MetadataActionFolderIcon&%24class=jenkins.branch.MetadataActionFolderIcon&_.dockerLabel=&_.url=&includeUser=false&_.credentialsId=&core%3Aapply=&Jenkins-Crumb=$only_crumb&json=%7B%22displayNameOrNull%22%3A+%22%22%2C+%22description%22%3A+%22%22%2C+%22disable%22%3A+false%2C+%22sources%22%3A+%7B%22source%22%3A+%7B%22stapler-class%22%3A+%22jenkins.plugins.git.GitSCMSource%22%2C+%22id%22%3A+%22$jobID%22%2C+%22remote%22%3A+%22$remote_URLEncoded%22%2C+%22includeUser%22%3A+%22false%22%2C+%22credentialsId%22%3A+%22$credentials%22%2C+%22traits%22%3A+%7B%22stapler-class%22%3A+%22jenkins.plugins.git.traits.BranchDiscoveryTrait%22%2C+%22%24class%22%3A+%22jenkins.plugins.git.traits.BranchDiscoveryTrait%22%7D%7D%2C+%22%22%3A+%220%22%2C+%22strategy%22%3A+%7B%22stapler-class%22%3A+%22jenkins.branch.DefaultBranchPropertyStrategy%22%2C+%22%24class%22%3A+%22jenkins.branch.DefaultBranchPropertyStrategy%22%7D%2C+%22stapler-class%22%3A+%22jenkins.branch.BranchSource%22%2C+%22kind%22%3A+%22jenkins.branch.BranchSource%22%7D%2C+%22%22%3A+%5B%220%22%2C+%221%22%5D%2C+%22projectFactory%22%3A+%7B%22scriptPath%22%3A+%22Jenkinsfile%22%2C+%22stapler-class%22%3A+%22org.jenkinsci.plugins.workflow.multibranch.WorkflowBranchProjectFactory%22%2C+%22%24class%22%3A+%22org.jenkinsci.plugins.workflow.multibranch.WorkflowBranchProjectFactory%22%7D%2C+%22orphanedItemStrategy%22%3A+%7B%22stapler-class%22%3A+%22com.cloudbees.hudson.plugins.folder.computed.DefaultOrphanedItemStrategy%22%2C+%22%24class%22%3A+%22com.cloudbees.hudson.plugins.folder.computed.DefaultOrphanedItemStrategy%22%2C+%22pruneDeadBranches%22%3A+true%2C+%22daysToKeepStr%22%3A+%22%22%2C+%22numToKeepStr%22%3A+%22%22%7D%2C+%22icon%22%3A+%7B%22stapler-class%22%3A+%22jenkins.branch.MetadataActionFolderIcon%22%2C+%22%24class%22%3A+%22jenkins.branch.MetadataActionFolderIcon%22%7D%2C+%22org-jenkinsci-plugins-docker-workflow-declarative-FolderConfig%22%3A+%7B%22dockerLabel%22%3A+%22%22%2C+%22registry%22%3A+%7B%22url%22%3A+%22%22%2C+%22includeUser%22%3A+%22false%22%2C+%22credentialsId%22%3A+%22%22%7D%7D%2C+%22core%3Aapply%22%3A+%22%22%2C+%22Jenkins-Crumb%22%3A+%22$only_crumb%22%7D&Submit=Save"

Alright, we have completed the Jenkins configuration and the application user data. Let’s see whether everything works!

And then:

terraform apply

Nice, at this point we can add, commit, and push our changes to GitHub from the nodejs-web-app folder.

Wrapping up part five of our journey! We’ve tackled user data for both the Web App and Jenkins instances, making sure they’re all set to run smoothly. For Jenkins, we’ve even crafted special scripts and stored them in the right place to avoid any character limit issues.

Part six is the grand finale! In this last leg of our adventure, we’re diving into the heart of the action. We’ll configure and create the Jenkins pipeline. This is where all the pieces come together, and we’ll see the magic happen.

It’s been quite a journey, and we’re excited to see everything we’ve built in action. Stay tuned for the grand finale! 🚀🛠️👋
