Continuous Deployment on a Kubernetes Cluster

Paul Zhao
Published in Paul Zhao Projects
17 min read · Jul 10, 2020

A step-by-step hands-on project

Before jumping into our project, let us discuss CI/CD and Kubernetes respectively, so that we have a clear picture of the philosophy behind this project.

CI/CD

Continuous integration (CI) and continuous delivery (CD) embody a culture, set of operating principles, and collection of practices that enable application development teams to deliver code changes more frequently and reliably. The implementation is also known as the CI/CD pipeline.

CI/CD is one of the best practices for devops teams to implement. It is also an agile methodology best practice, as it enables software development teams to focus on meeting business requirements, code quality, and security because deployment steps are automated.

Continuous integration is a coding philosophy and set of practices that drive development teams to implement small changes and check in code to version control repositories frequently. Because most modern applications require developing code in different platforms and tools, the team needs a mechanism to integrate and validate its changes.

The technical goal of CI is to establish a consistent and automated way to build, package, and test applications. With consistency in the integration process in place, teams are more likely to commit code changes more frequently, which leads to better collaboration and software quality.

Continuous deployment, on the other hand, makes sure every change that passes all stages of your production pipeline is released to your customers. There’s no human intervention, and only a failed test will prevent a new change from being deployed to production.

The technical goal of CD is to accelerate the feedback loop with your customers and take pressure off the team, as there is no longer a Release Day. Developers can focus on building software, and they see their work go live minutes after they’ve finished working on it.
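To make the idea concrete, here is a minimal, hypothetical declarative Jenkinsfile sketch of such a pipeline. The stage names and commands are illustrative only (not the ones used later in this project): any failing test stage stops the pipeline before the deploy stage runs.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // compile and package the application
                sh './mvnw package'
            }
        }
        stage('Test') {
            steps {
                // a failed test here stops the pipeline before deployment
                sh './mvnw test'
            }
        }
        stage('Deploy') {
            steps {
                // ship the change straight to the cluster -- no human gate
                sh 'kubectl apply -f deploy.yaml'
            }
        }
    }
}
```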

Why Kubernetes?

Kubernetes is a powerful container management tool that automates the deployment and management of containers. Kubernetes (k8s) is the next big wave in cloud computing, and it’s easy to see why as businesses migrate their infrastructure and architecture to reflect a cloud-native, data-driven era.

Here are five reasons to adopt Kubernetes:

Container orchestration

Containers are great. They provide you with an easy way to package and deploy services, allow for process isolation, immutability, efficient resource utilization, and are lightweight in creation.

But when it comes to actually running containers in production, you can end up with dozens, even thousands, of containers over time. These containers need to be deployed, managed, connected, and updated; if you were to do this manually, you’d need an entire team dedicated to the task.

It’s not enough to run containers; you need to be able to:

  • Integrate and orchestrate these modular parts
  • Scale up and scale down based on the demand
  • Make them fault tolerant
  • Provide communication across a cluster

You might ask: aren’t containers supposed to do all that? The answer is that containers are only a low-level piece of the puzzle. The real benefits are obtained with tools that sit on top of containers — like Kubernetes. These tools are today known as container schedulers.

Great for multi-cloud adoption

With many of today’s businesses gearing towards microservice architecture, it’s no surprise that containers and the tools used to manage them have become so popular. Microservice architecture makes it easy to split your application into smaller components with containers that can then be run on different cloud environments, giving you the option to choose the best host for your needs. What’s great about Kubernetes is that it’s built to be used anywhere so you can deploy to public/private/hybrid clouds, enabling you to reach users where they’re at, with greater availability and security. You can see how Kubernetes can help you avoid potential hazards with “vendor lock-in”.

Deploy and update applications at scale for faster time-to-market

Kubernetes allows teams to keep pace with the requirements of modern software development. Without Kubernetes, large teams would have to manually script their own deployment workflows. Containers, combined with an orchestration tool, provide management of machines and services for you — improving the reliability of your application while reducing the amount of time and resources spent on DevOps.

Better management of your applications

Containers allow applications to be broken down into smaller parts which can then be managed through an orchestration tool like Kubernetes. This makes it easy to manage codebases and test specific inputs and outputs.

As mentioned earlier, Kubernetes has built-in features like self-healing and automated rollouts/rollbacks, effectively managing the containers for you.

To go even further, Kubernetes allows for declarative expressions of the desired state as opposed to an execution of a deployment script, meaning that a scheduler can monitor a cluster and perform actions whenever the actual state does not match the desired. You can think of schedulers as operators who are continually monitoring the system and fixing discrepancies between the desired and actual state.
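As an illustration of this declarative style, here is a minimal, hypothetical Deployment manifest (the names and image are placeholders, not from this project). You declare the desired state of three replicas, and the Kubernetes control loop continually reconciles the cluster toward it, restarting or rescheduling pods as needed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical name, for illustration only
spec:
  replicas: 3                # desired state: Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0   # placeholder image
```

If a pod dies, the scheduler notices the actual state (2 replicas) no longer matches the desired state (3) and starts a replacement; no deployment script has to run.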

Overview/additional benefits

  • You can use it to deploy your services, to roll out new releases without downtime, and to scale (or de-scale) those services.
  • It is portable.
  • It can run on a public or private cloud.
  • It can run on-premise or in a hybrid environment.
  • You can move a Kubernetes cluster from one hosting vendor to another without changing (almost) any of the deployment and management processes.
  • Kubernetes can be easily extended to serve nearly any needs. You can choose which modules you’ll use, and you can develop additional features yourself and plug them in.
  • Kubernetes will decide where to run something and how to maintain the state you specify.
  • Kubernetes can place replicas of service on the most appropriate server, restart them when needed, replicate them, and scale them.
  • Self-healing is a feature included in its design from the start, and self-adaptation is on the way as well.
  • Zero-downtime deployments, fault tolerance, high availability, scaling, scheduling, and self-healing add significant value in Kubernetes.
  • You can use it to mount volumes for stateful applications.
  • It allows you to store confidential information as secrets.
  • You can use it to validate the health of your services.
  • It can load balance requests and monitor resources.
  • It provides service discovery and easy access to logs.

Enough theoretical concepts, let us work on this project now!

Establishing a Github Organization

If you don’t have a GitHub account, please register one here.

Then you may login as shown below.

As shown below, you may locate New organization under the plus menu at the upper-right corner.

Find new project

Then you may choose the free option

Choose free option

After that, you need to provide an Organization account name (it must be unique, otherwise it will be flagged as already existing), a contact email, and whether the organization belongs to a personal account or a business/institution.

Info provided

Successfully created your organization account

Successful creation of organization

Locate your organization under your account name at the upper-left corner, as shown below

Find your organization

Then click your organization to access it

Your organization page

Here you may find the resources that we will fork from.

Find 7 resources

At the upper-right corner, click Fork and select your organization account so that the repo will be forked into your organization

Fork the repo

Verify 7 repos under your organization

7 repos confirmed

Now we will focus on the installation process

To speed up future installations, we recommend Homebrew here. If you are using Windows, you may consider installing Chocolatey.

Since I’m using a Mac, we’ll discuss the Homebrew installation here in detail.

Installing Homebrew

To install Linuxbrew on your Linux distribution, first install the following dependencies as shown. (This Linuxbrew installation applies to macOS as well; adjust the package names accordingly.)

--------- On Debian/Ubuntu ---------
$ sudo apt-get install build-essential curl file git

--------- On Fedora 22+ ---------
$ sudo dnf groupinstall 'Development Tools' && sudo dnf install curl file git

--------- On CentOS/RHEL ---------
$ sudo yum groupinstall 'Development Tools' && sudo yum install curl file git

Once the dependencies are installed, you can use the following script to install the Linuxbrew package in /home/linuxbrew/.linuxbrew (or in your home directory at ~/.linuxbrew) as shown.

$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"

Next, you need to add the directories /home/linuxbrew/.linuxbrew/bin (or ~/.linuxbrew/bin) and /home/linuxbrew/.linuxbrew/sbin (or ~/.linuxbrew/sbin) to your PATH and to your bash shell initialization script ~/.bashrc as shown.

$ echo 'export PATH="/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin/:$PATH"' >>~/.bashrc
$ echo 'export MANPATH="/home/linuxbrew/.linuxbrew/share/man:$MANPATH"' >>~/.bashrc
$ echo 'export INFOPATH="/home/linuxbrew/.linuxbrew/share/info:$INFOPATH"' >>~/.bashrc

Then source the ~/.bashrc file for the recent changes to take effect.

$ source  ~/.bashrc

Check the version to confirm if it is installed correctly.

$ brew --version
Homebrew 2.2.16
Homebrew/homebrew-core (git revision a59d5e; last commit 2020-05-13)
Homebrew/homebrew-cask (git revision d25f8; last commit 2020-05-13)

With Homebrew installed, we are able to install tons of tools with ease. Don’t forget to verify each installation as well.

Installing Minikube

$ brew cask install minikube
$ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd

Installing Docker

$ brew cask install docker
$ docker --version
Docker version 19.03.8, build afacb8b
### To confirm Docker is properly installed, please also verify the following
$ docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

$ docker-compose version
docker-py version: 2.5.1
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j 26 Sep 2016

$ docker-machine --version
docker-machine version 0.16.0, build 702c267f

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}

Prior to installing kubectl, we need to initialize our Minikube cluster

$ minikube start
😄 minikube v1.11.0 on Darwin 10.15.5
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
🎉 minikube 1.12.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.12.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"

Installing kubectl

$ brew install kubectl
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
### for the server version to be reported, you need to run minikube start first

After installation, we now move on to our project

We first make a directory to store our project files

$ mkdir ci-cd-demo
$ cd ci-cd-demo/

Then, we will clone the jenkins repo from GitHub to our local environment

As shown below, under the jenkins repo, click Code and copy the URL from the dropdown

URL for jenkins

Then use “git clone” to clone it to our local environment

$ git clone https://github.com/Kubernetes-Projects-2020/jenkins.git
Cloning into 'jenkins'...
remote: Enumerating objects: 7, done.
remote: Total 7 (delta 0), reused 0 (delta 0), pack-reused 7
Receiving objects: 100% (7/7), done.
Resolving deltas: 100% (1/1), done.

To list the Docker images visible to our cluster, point your shell at Minikube’s Docker daemon with the commands below

$ ls
jenkins
$ cd jenkins
$ ls
Dockerfile jenkins.yaml
$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.64.3:2376"
export DOCKER_CERT_PATH="/Users/paulzhao/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
$ eval $(minikube -p minikube docker-env)
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.18.3 3439b7546f29 7 weeks ago 117MB
k8s.gcr.io/kube-apiserver v1.18.3 7e28efa976bd 7 weeks ago 173MB
k8s.gcr.io/kube-scheduler v1.18.3 76216c34ed0c 7 weeks ago 95.3MB
k8s.gcr.io/kube-controller-manager v1.18.3 da26705ccb4b 7 weeks ago 162MB
kubernetesui/dashboard v2.0.0 8b32422733b3 2 months ago 222MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 4 months ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 5 months ago 43.8MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 8 months ago 288MB
kubernetesui/metrics-scraper v1.0.2 3b08661dc379 8 months ago 40.1MB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 2 years ago 80.8MB

Then let us build our Docker image in the current directory. Since the shell now points at Minikube’s Docker daemon, the image will be available to the cluster directly, without pushing it to a remote registry.

$ docker image build -t myjenkins .
.
.
.
### After a few minutes, it should be done.

Here we will verify that the myjenkins image was built

$ docker image ls
myjenkins latest 5f73373f46e5 42 seconds ago 676MB

Now let us apply our YAML file

$ kubectl apply -f jenkins.yaml 
serviceaccount/jenkins created
role.rbac.authorization.k8s.io/jenkins created
rolebinding.rbac.authorization.k8s.io/jenkins created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-crb created
clusterrole.rbac.authorization.k8s.io/jenkinsclusterrole created
deployment.apps/jenkins created
service/jenkins created
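The exact contents of jenkins.yaml live in the forked repo, but judging from the NodePorts reported by kubectl in this walkthrough (8080 on 31000 and 50000 on 30559), its Service section presumably looks something like this sketch; the selector label is an assumption, so check the actual manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort            # exposes Jenkins on ports of the Minikube VM
  selector:
    app: jenkins            # assumed label; verify against the real manifest
  ports:
  - name: http
    port: 8080
    nodePort: 31000         # Jenkins UI, reachable at <minikube ip>:31000
  - name: agent
    port: 50000
    nodePort: 30559         # JNLP port used by Jenkins build agents
```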

Verify what was created

$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/jenkins-5f7b7f58dc-bzdfw 1/1 Running 0 27s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins NodePort 10.102.41.80 <none> 8080:31000/TCP,50000:30559/TCP 27s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 28s

NAME DESIRED CURRENT READY AGE
replicaset.apps/jenkins-5f7b7f58dc 1 1 1 28s

Find the cluster IP serving our Jenkins

$ minikube ip
192.168.64.3

Now you may open a browser, paste in this IP address, and append :31000 to access our Jenkins

e.g. 192.168.64.3:31000

Jenkins

Do it the modern way: go automation!

Obviously, you could accomplish our pipeline by simply doing everything manually. But think of tens of thousands of pipelines; would doing them one by one be practical?

Therefore automation is the way to go!

Let’s go!

Under Manage Jenkins, we’ll find configure system

Configure system

Under Configure System, we locate the Environment variables section as shown below

Environment variables

Under our GitHub organization, we locate the repo named fleetman-api-gateway. Then, in this repo, we click Jenkinsfile.

Jenkinsfile

Find the two following environment variables in the Jenkinsfile and fill in both; the Docker username is optional unless you intend to push the local image to a registry

Environment variables

Here we will create a new item

Create a new item

On the creation page, we need to provide an item name and choose Multibranch Pipeline, then click OK (click it only once, or you may create duplicate copies of this item)

Creation

Under Branch Sources, choose GitHub. Then provide the credential shown below: the Username is your GitHub username and the Password is your GitHub password. The ID must be “GitHub”, since the Jenkinsfile references the credential by that ID.

Credentials provided
GitHub id
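As a hypothetical sketch of why the ID matters, a checkout step in the Jenkinsfile would reference the credential by that exact string; the URL and step here are illustrative, not copied from the project’s Jenkinsfile:

```groovy
// illustrative checkout step -- credentialsId must exactly match the
// ID "GitHub" configured in Jenkins, or the fetch will fail to authenticate
git url: 'https://github.com/your-org/fleetman-api-gateway.git',
    credentialsId: 'GitHub'
```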

After configuration, choose the credentials you just set up. Then, under Repository Scan, provide the owner, which is your GitHub organization name, and pick the repository you would like to use from the dropdown menu.

Repo and organization name

After saving the item, it will start building. A successful build is shown below

Success

Note: keep in mind that your organization name cannot contain uppercase letters. Otherwise, the build will fail as shown below.

Failed buildup

After I reconfigured the credentials in Jenkins, the build succeeded.

To verify our new pod, we will go back to our terminal

$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/api-gateway-764d985c66-hh6zk 1/1 Running 0 23s
pod/jenkins-5f7b7f58dc-bzdfw 1/1 Running 0 103m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fleetman-api-gateway NodePort 10.99.80.140 <none> 8080:30020/TCP 22s
service/jenkins NodePort 10.102.41.80 <none> 8080:31000/TCP,50000:30559/TCP 103m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/api-gateway 1/1 1 1 23s
deployment.apps/jenkins 1/1 1 1 103m

NAME DESIRED CURRENT READY AGE
replicaset.apps/api-gateway-764d985c66 1 1 1 23s
replicaset.apps/jenkins-5f7b7f58dc 1 1 1 103m

With one pod complete, we will move on to our whole organization!

First, the Multibranch Pipeline should be deleted

Deletion of Multibranch Pipeline

Create a new item, this time a GitHub Organization item, which builds every applicable repository in the organization

GitHub organization buildup

Fill in a display name, choose the GitHub credentials created earlier, and set the owner to your GitHub organization name

Info provided

Under Build History, we can verify that Jenkins went through every repo containing a Jenkinsfile and built the resources.

Build history

Below is the build process for one of the repos

Process

Then we can verify our builds in the terminal as well

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-gateway-6dbf8d5969-gv24n 1/1 Running 0 69s
jenkins-5f7b7f58dc-bzdfw 1/1 Running 0 139m
mongodb-65784d9f9d-77xvt 1/1 Running 0 69s
position-simulator-6f49889666-79lnt 1/1 Running 0 2m2s
position-tracker-6845b5b5b6-8z2kz 1/1 Running 0 2m2s
queue-7b76fb77f-rqgbs 1/1 Running 0 96s
webapp-7944cf574f-5w5wh 1/1 Running 0 74s

After verification, we check our services so that we can tell which port to use to reach our app

$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fleetman-api-gateway NodePort 10.99.80.140 <none> 8080:30020/TCP 38m
fleetman-mongodb ClusterIP 10.102.239.154 <none> 27017/TCP 2m54s
fleetman-position-tracker ClusterIP 10.101.152.136 <none> 8080/TCP 3m47s
fleetman-queue NodePort 10.102.53.6 <none> 8161:30010/TCP,61616:31520/TCP 3m21s
fleetman-webapp NodePort 10.107.8.160 <none> 80:30080/TCP 2m59s
jenkins NodePort 10.102.41.80 <none> 8080:31000/TCP,50000:30559/TCP 141m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d

As we can see, our fleetman-webapp is exposed on port 30080; with the IP address shown below, we can access our app

$ minikube ip
192.168.64.3

Below is the app when accessing 192.168.64.3:30080

App shown

At last, we will look into how CI/CD is accomplished

As we can see in the app below, the speed is shown as 0; let us try to update it

App speed

First, we copy the URL of the repo named fleetman-position-tracker, since it directly controls the speed shown above

Copy url

Then we git clone it into our local environment

$ git clone https://github.com/kubernetes-projects-2020/fleetman-position-tracker.git
Cloning into 'fleetman-position-tracker'...
remote: Enumerating objects: 81, done.
remote: Total 81 (delta 0), reused 0 (delta 0), pack-reused 81
Receiving objects: 100% (81/81), 23.58 KiB | 689.00 KiB/s, done.
Resolving deltas: 100% (19/19), done.
$ ls
Dockerfile jenkins.yaml
fleetman-position-tracker
$ cd fleetman-position-tracker/
$ ls
Dockerfile LICENSE deploy.yaml mvnw.cmd src
Jenkinsfile README.md mvnw pom.xml

Here we showcase how I made the change with Atom; you are free to choose another editor.

Open Atom from the project directory with the command shown below

$ atom .

Then you will be directed to Atom. As shown below, navigate to line 36 of the file in the folder tree on the left and type in “.withSpeed(45.4)”. This will generate a compile error, since a decimal value must be written as a BigDecimal in this Java file.

Diagram of change
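To illustrate the type error in isolation (the real builder code belongs to the project, so this is only a sketch with a hypothetical stand-in method): passing a bare double where the method expects a java.math.BigDecimal does not compile, and the conventional fix is to construct the BigDecimal from a string so no floating-point precision is lost.

```java
import java.math.BigDecimal;

public class SpeedExample {
    // hypothetical stand-in for the builder method in MessageProcessor.java,
    // which takes a BigDecimal rather than a double
    static BigDecimal withSpeed(BigDecimal speed) {
        return speed;
    }

    public static void main(String[] args) {
        // withSpeed(45.4);  // would not compile: double is not a BigDecimal
        BigDecimal speed = withSpeed(new BigDecimal("45.4"));  // compiles
        System.out.println(speed);
    }
}
```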

Then jump back to our terminal, run git status, and add and commit the change

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: src/main/java/com/virtualpairprogrammers/tracker/messaging/MessageProcessor.java

no changes added to commit (use "git add" and/or "git commit -a")
$ git commit -am "deliberate break"
[master e65f756] deliberate break
1 file changed, 7 insertions(+), 6 deletions(-)
localhost:fleetman-position-tracker paulzhao$ git push
Username for 'https://github.com/kubernetes-projects-2020/fleetman-position-tracker.git': lightninglife ### Provide with your Github username
Password for 'https://lightninglife@github.com/kubernetes-projects-2020/fleetman-po ### Provide with your Github password
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Delta compression using up to 8 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (10/10), 741 bytes | 741.00 KiB/s, done.
Total 10 (delta 4), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
To https://github.com/kubernetes-projects-2020/fleetman-position-tracker.git
8495bb5..e65f756 master -> master

Here the change would be picked up by Jenkins automatically if our app were on a public domain, via a GitHub webhook. But Jenkins is running on our local server, which GitHub cannot reach, so the webhook would not work.

If you had the app built on, say, AWS, the change would be detected and applied accordingly. We can find the webhook source IP as shown below.

Settings of your organization
Webhooks

Well, in our case, we need to click “Build Now” to push our change through. Since we deliberately introduced an error, the build will not succeed.

Unsuccessful buildup

We jump back to our terminal and open Atom

$ atom .

Fix our java file as shown below

Line 36 fixed

Then we go through git status and git add/commit again

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: src/main/java/com/virtualpairprogrammers/tracker/messaging/MessageProcessor.java

no changes added to commit (use "git add" and/or "git commit -a")
$ git commit -am "Fix it"
[master 5088557] Fix it
1 file changed, 1 insertion(+), 1 deletion(-)
localhost:fleetman-position-tracker paulzhao$ git push
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Delta compression using up to 8 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (10/10), 717 bytes | 717.00 KiB/s, done.
Total 10 (delta 4), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
To https://github.com/kubernetes-projects-2020/fleetman-position-tracker.git
e65f756..5088557 master -> master

Now we run the command below in our terminal to watch the old pod terminate and the new one build, while monitoring the change in the browser

Change occurred
Complete

In the end, it completes successfully.

$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
api-gateway-6dbf8d5969-gv24n 1/1 Running 0 88m
jenkins-5f7b7f58dc-bzdfw 1/1 Running 0 3h47m
mongodb-65784d9f9d-77xvt 1/1 Running 0 88m
position-simulator-6f49889666-79lnt 1/1 Running 0 89m
position-tracker-79db74b99b-q92zl 1/1 Running 0 2m44s
queue-7b76fb77f-rqgbs 1/1 Running 0 88m
webapp-7944cf574f-5w5wh 1/1 Running 0 88m

Conclusion:

Throughout this project, we deployed multiple Kubernetes resources via Jenkins.

Kubernetes is the most popular orchestration tool in the DevOps industry for a number of reasons: container orchestration, suitability for multi-cloud adoption, the ability to deploy and update applications at scale for faster time-to-market, and better management of your applications, among other merits.

Jenkins, on the other hand, makes CI/CD seamless.

In this project, we took advantage of GitHub as a repo host to build our infrastructure in the first place, then utilized a GitHub organization to build multiple resources simultaneously. Ultimately, we also tested changes in an automated manner. Since we worked in a local environment, we couldn’t accomplish 100% automation, but the project provides a guideline for complete automation if a similar structure is deployed on a cloud platform such as AWS.
