Jenkins CI - Flux CD demo with a Kind K8s cluster locally: Part 1

Mohitverma
12 min read · Aug 15, 2024


This is the first part of a two-part series.

Part 1: Bootstrap the infrastructure with Ansible playbook and Flux CD.

Part 2: Deploying the application with Flux CD and demonstrating the complete CI-CD flow using the Flux image-automation-controller.

Flow to manage application with CI-CD

In 2022, while I was studying CI-CD concepts, I was asked to put together a CI-CD demo with well-known tools like Flux CD for an interview. I didn't know much at the time, so it was no surprise that I bombed the interview. Two years later, I can connect all the dots and prepare a functional CI-CD demo. This article outlines the steps to demonstrate CI-CD in a local environment.

To prepare the local environment, we will use Ansible automation. Ansible will execute the following steps:

  1. Install-VM: On a MacBook, use the multipass utility to deploy a virtual machine.
  2. Provision-VM: Provision the VM with all the required packages/tools and deploy the K8s cluster with kind.
  3. Flux-Bootstrap: Connect to the remote GitHub repository and bootstrap the infra components required for this demo with Flux.

All the code used in this demo is available in my GitHub repository; please feel free to check it out.

https://github.com/mvtech88/fluxcd-demo

Step 1: Install VM

For VM deployment and provisioning, I use Ansible roles to organize my Ansible code. Under the hood, the install-vm role uses the multipass tool to create the virtual machine (VM). This step also automatically generates the Ansible playbook YAML used to provision the VM.

Ansible role structure
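For illustration, the heart of the install-vm role is a task that shells out to multipass with the flags read from vm.yml. A minimal sketch only, under assumptions; the actual task names, files, and variables in the repository may differ:

# roles/install-vm/tasks/main.yml (illustrative sketch, not copied from the repo)
- name: Launch the Ubuntu VM with multipass
  tags: install
  # item.value expands to "--mem 4G --cpus 2 --disk 30G 22.04" as read from vm.yml
  # the cloud-init filename below is illustrative
  shell: "multipass launch --name {{ item.key }} {{ item.value }} --cloud-init cloud-init.yaml"
  with_dict: "{{ ubuntu }}"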

The playbook is organized with tags, and these tags can be used to trigger a specific operation. In this part we will use the install tag to create and deploy the VM.
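At the top level, bootstrap.yml mainly wires the roles together, while the tags live on the tasks inside each role. Roughly (a trimmed sketch; see the repository for the real playbook):

# bootstrap.yml (sketch)
- hosts: localhost
  connection: local
  roles:
    - install-vm      # tasks tagged "install"

- hosts: ubuntu
  become: true
  roles:
    - provision-vm    # tasks tagged "provision" and "flux-bootstrap"

Running with --tags install therefore only touches the localhost play; --tags provision and --tags flux-bootstrap run against the VM once the hosts inventory exists.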

ansible-playbook -v bootstrap.yml --tags install --vault-password-file pass.txt

➜  gitops ansible-playbook -v bootstrap.yml --tags install --vault-password-file pass.txt
No config file found; using defaults
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *********************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************
ok: [localhost]

TASK [install-vm : Read K8S cluster node VM configuration from file] *****************************************************************************************************
ok: [localhost] => {"ansible_facts": {"ubuntu": {"ubuntu": "--mem 4G --cpus 2 --disk 30G 22.04"}}, "ansible_included_var_files": ["/Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/install-vm/tasks/vm.yml"], "changed": false}

TASK [install-vm : Create keypair and cloud-init] ************************************************************************************************************************
included: /Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/install-vm/tasks/create-keys-and-cloudinit.yml for localhost

TASK [install-vm : Delete any existing public and private key] ***********************************************************************************************************
changed: [localhost] => (item=./roles/install-vm/tasks/user_key) => {"ansible_loop_var": "item", "changed": true, "item": "./roles/install-vm/tasks/user_key", "path": "./roles/install-vm/tasks/user_key", "state": "absent"}
changed: [localhost] => (item=./roles/install-vm/tasks/user_key.pub) => {"ansible_loop_var": "item", "changed": true, "item": "./roles/install-vm/tasks/user_key.pub", "path": "./roles/install-vm/tasks/user_key.pub", "state": "absent"}
.
.
.
.
TASK [install-vm : Create openssl file] *********************************************************************************************************************************
changed: [localhost] => {"changed": true, "checksum": "695604eb74b3c167a5d75f9ba490a6ea8d48427f", "dest": "./roles/provision-vm/tasks/docker-reg/openssl.conf", "gid": 20, "group": "staff", "md5sum": "ee18314ecdf33984ebb21708bb28314d", "mode": "0700", "owner": "mohitverma", "size": 484, "src": "/Users/mohitverma/.ansible/tmp/ansible-tmp-1723529562.6244411-98010-20453853906747/.source.conf", "state": "file", "uid": 501}

TASK [install-vm : Create kind. yaml file] ******************************************************************************************************************************
changed: [localhost] => {"changed": true, "checksum": "9464222e784a0ea5fdeaf9cb71d1aa7a0c4d7161", "dest": "./roles/provision-vm/tasks/kind.yaml", "gid": 20, "group": "staff", "md5sum": "9fae132e262d2fa458a7e7a2aa4577a0", "mode": "0644", "owner": "mohitverma", "size": 382, "src": "/Users/mohitverma/.ansible/tmp/ansible-tmp-1723529562.835535-98040-121908743736406/.source.yaml", "state": "file", "uid": 501}

TASK [install-vm : Generate new cert] ************************************************************************************************************************************
changed: [localhost] => {"changed": true, "cmd": "cd ./roles/provision-vm/tasks/docker-reg \nopenssl req -x509 -newkey rsa:4096 -days 365 -config openssl.conf -keyout certs/domain.key -out certs/domain.crt \n", "delta": "0:00:00.216072", "end": "2024-08-13 11:42:43.341260", "msg": "", "rc": 0, "start": "2024-08-13 11:42:43.125188", "stderr": ".+...................+..+.+...+..+.........+.+.....+.......+......+++++++++++++++++++++++++++++++++++++++++++++*...+..+.............+...+.....+...+.......+++++++++++++++++++++++++++++++++++++++++++++*..+.......+........+......+....+......+.....+......+.......+.....+...+....+........+......+............+.......+...+............+...............+......+......+.............................+....+..+.+..+......+.........+.+.........+++++\n........+.....+.+.....+.........+......+.+........+......+.+.....+.............+.................+...+.+...+......+......+++++++++++++++++++++++++++++++++++++++++++++*....+.+........+...+....+++++++++++++++++++++++++++++++++++++++++++++*.....+.+.....+....+..+........................+.........+....+...+........+....+...........+.......+...+..............+...+......................+..+...+.......+.....+.............+.....+.+..+...+....+.....+.+...........+.+.....+.......+...+..+...+.............+..............+.+......+..............+.+.........+.....+.......+.........+......+........+......+..........+...+...........................+...+......+............+.........+...............+...............+.....+..........+......+.....+...+...............................+......+.........+......+..+...............+.+..+.......+.....+.......+++++\n-----", "stderr_lines": [".+...................+..+.+...+..+.........+.+.....+.......+......+++++++++++++++++++++++++++++++++++++++++++++*...+..+.............+...+.....+...+.......+++++++++++++++++++++++++++++++++++++++++++++*..+.......+........+......+....+......+.....+......+.......+.....+...+....+........+......+............+.......+...+............+...............+......+......+.............................+....+..+.+..+......+.........+.+.........+++++", "........+.....+.+.....+.........+......+.+........+......+.+.....+.............+.................+...+.+...+......+......+++++++++++++++++++++++++++++++++++++++++++++*....+.+........+...+....+++++++++++++++++++++++++++++++++++++++++++++*.....+.+.....+....+..+........................+.........+....+...+........+....+...........+.......+...+..............+...+......................+..+...+.......+.....+.............+.....+.+..+...+....+.....+.+...........+.+.....+.......+...+..+...+.............+..............+.+......+..............+.+.........+.....+.......+.........+......+........+......+..........+...+...........................+...+......+............+.........+...............+...............+.....+..........+......+.....+...+...............................+......+.........+......+..+...............+.+..+.......+.....+.......+++++", "-----"], "stdout": "", "stdout_lines": []}
[WARNING]: Could not match supplied host pattern, ignoring: ubuntu

PLAY [ubuntu] ************************************************************************************************************************************************************
skipping: no hosts matched

PLAY RECAP ***************************************************************************************************************************************************************
localhost : ok=20 changed=12 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

After the playbook runs successfully, we can try to log in to the VM. The private key to use for the login is generated by this playbook at roles/install-vm/tasks/user_key. The default username for the login is vmadmin.

➜  gitops multipass ls
Name State IPv4 Image
ubuntu Running 192.168.64.14 Ubuntu 22.04 LTS

➜ gitops ssh -i /Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/install-vm/tasks/user_key vmadmin@192.168.64.14
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-118-generic aarch64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro

System information as of Tue Aug 13 11:47:05 IST 2024

System load: 0.05
Usage of /: 5.5% of 28.90GB
Memory usage: 4%
Swap usage: 0%
Processes: 98
Users logged in: 0
IPv4 address for enp0s1: 192.168.64.14
IPv6 address for enp0s1: fd93:ad2d:c5a:ec7:5054:ff:fe0e:dbea


Expanded Security Maintenance for Applications is not enabled.

3 updates can be applied immediately.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


Last login: Tue Aug 13 11:47:05 2024 from 192.168.64.1
vmadmin@ubuntu:~$

This step also generates two files, "kind.yaml" and "playbook.yml", at the path "roles/provision-vm/tasks", which are used in step 2 during the provisioning of the VM. kind.yaml provides the configuration to deploy the K8s cluster using kind, and playbook.yml contains all the steps needed to provision the VM.

➜  gitops cat roles/provision-vm/tasks/kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  serviceSubnet: "192.168.5.70/28"
  apiServerAddress: "192.168.64.14"
nodes:
- role: control-plane
  image: kindest/node:v1.28.9@sha256:dca54bc6a6079dd34699d53d7d4ffa2e853e46a20cd12d619a09207e35300bd0
- role: worker
  image: kindest/node:v1.28.9@sha256:dca54bc6a6079dd34699d53d7d4ffa2e853e46a20cd12d619a09207e35300bd0


➜ gitops less roles/provision-vm/tasks/playbook.yml

- name: Update apt repo and cache on all Debian/Ubuntu boxes
  apt: update_cache=yes force_apt_get=yes cache_valid_time=3600
  tags: provision

- name: Upgrade all packages on servers
  apt: upgrade=dist force_apt_get=yes
  tags: provision

- name: Check if a reboot is needed on all servers
  tags: provision
  register: reboot_required_file
  stat: path=/var/run/reboot-required get_checksum=no
.
.
.
.
- name: bootstrap the flux
  tags: flux-bootstrap
  become_user: vmadmin
  environment:
    GITHUB_TOKEN: gh*_QHHFq*********
  shell: "flux bootstrap github --components-extra=image-reflector-controller,image-automation-controller --owner=mvtech88 --token-auth=true --repository=fluxcd-demo --path=clusters/fluxbootstrap --personal"
  when: "flux.rc != 0"

- name: create the flux secret to access docker registry
  tags: flux-bootstrap
  become_user: vmadmin
  command: "{{ item }}"
  with_items:
    - kubectl -n flux-system create secret docker-registry docker-credentials --docker-username=mohitverma1688 --docker-password=***** --docker-email=mohitverma160288@gmail.com

Step 2: Provision VM

Provisioning the VM is also handled by Ansible, using a role called provision-vm. The major focus of this stage is installing Docker, a local Docker registry, Helm, kubectl, kind, kubeseal, Flux, and the other necessary packages and tools. Kind is the preferred method for deploying the K8s cluster on the virtual machine.
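To give an idea of what that looks like inside the role, the kind cluster (named k8s-cluster, as the container names later show) is created from the generated kind.yaml with a task roughly like the one below. This is only a sketch under assumptions; the exact task in the repository may differ:

- name: Create the kind cluster from the generated config
  tags: provision
  become_user: vmadmin
  shell: "kind create cluster --name k8s-cluster --config /home/vmadmin/kind/kind.yaml"
  args:
    creates: /home/vmadmin/.kube/config   # illustrative idempotency guard, not necessarily in the repo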

The previous install step also generates a "hosts" inventory file in the parent directory, which is used to provision the VM.

➜  gitops cat hosts
[ubuntu]
192.168.64.14


[ubuntu:vars]
ansible_ssh_user=vmadmin
ansible_ssh_private_key_file=/Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/install-vm/tasks/user_key
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

We can now provision the VM using the command below.

ansible-playbook -v bootstrap.yml --tags provision -i hosts

➜  gitops ansible-playbook -v bootstrap.yml --tags provision  -i hosts
No config file found; using defaults

PLAY [localhost] *********************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************
ok: [localhost]

PLAY [ubuntu] ************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************
[WARNING]: Platform linux on host 192.168.64.14 is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter
could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
ok: [192.168.64.14]

TASK [provision-vm : provision the VM with ansible] **********************************************************************************************************************
included: /Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/provision-vm/tasks/playbook.yml for 192.168.64.14

TASK [provision-vm : Update apt repo and cache on all Debian/Ubuntu boxes] ***********************************************************************************************
ok: [192.168.64.14] => {"cache_update_time": 1723529557, "cache_updated": false, "changed": false}

TASK [provision-vm : Upgrade all packages on servers] ********************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "msg": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nCalculating upgrade...\nThe following packages have been kept back:\n ubuntu-minimal ubuntu-serv.
.
.
.
TASK [provision-vm : Allow everything and enable UFW] ********************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "commands": ["/usr/sbin/ufw status verbose", "/usr/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw -f enable", "/usr/sbin/ufw default allow", "/usr/sbin/ufw status verbose", "/usr/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules"], "msg": "Status: active\nLogging: on (low)\nDefault: allow (incoming), allow (outgoing), disabled (routed)\nNew profiles: skip"}

TASK [provision-vm : Add Docker GPG apt Key] *****************************************************************************************************************************
changed: [192.168.64.14] => {"after": ["8D81803C0EBFCD88", "7EA0A9C3F273FCD8", "D94AA3F0EFE21092", "871920D1991BC93C"], "before": ["D94AA3F0EFE21092", "871920D1991BC93C"], "changed": true, "fp": "8D81803C0EBFCD88", "id": "8D81803C0EBFCD88", "key_id": "8D81803C0EBFCD88", "short_id": "0EBFCD88"}

TASK [provision-vm : Add Docker Repository] ******************************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "repo": "deb https://download.docker.com/linux/ubuntu bionic stable", "sources_added": ["/etc/apt/sources.list.d/download_docker_com_linux_ubuntu.list"], "sources_removed": [], "state": "present"}
.
.
.
.
TASK [provision-vm : Copying a kind file to directory] *******************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "checksum": "9464222e784a0ea5fdeaf9cb71d1aa7a0c4d7161", "dest": "/home/vmadmin/kind/kind.yaml", "gid": 0, "group": "root", "md5sum": "9fae132e262d2fa458a7e7a2aa4577a0", "mode": "0644", "owner": "root", "size": 382, "src": "/home/vmadmin/.ansible/tmp/ansible-tmp-1723531325.313077-99639-63850253284567/.source.yaml", "state": "file", "uid": 0}

TASK [provision-vm : installing KIND for Ubuntu] *************************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "cmd": "cd /home/vmadmin/kind/\nsudo curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-linux-arm64\nsudo chmod +x ./kind\nsudo mv ./kind /bin/kind\n", "delta": "0:00:05.191320", "end": "2024-08-13 12:12:10.967734", "msg": "", "rc": 0, "start": "2024-08-13 12:12:05.776414", "stderr": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r 0 0 0 0 0 0
.
.
TASK [provision-vm : checking for kubeseal installation] *****************************************************************************************************************
ok: [192.168.64.14] => {"changed": false, "stat": {"exists": false}}

TASK [provision-vm : install kubeseal in the cluster] ********************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "cmd": "sudo wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.23.1/kubeseal-0.23.1-linux-arm64.tar.gz --no-check-certificate\nsudo tar xvfz kubeseal-0.23.1-linux-arm64.tar.gz\nsudo mv ./kubeseal /bin\n", "delta": "0:00:29.013672", "end": "2024-08-13 12:35:00.291829", "msg": "", "rc": 0, "start": "2024-08-13 12:34:31.278157", "stderr": "\nRedirecting output to ‘wget-log.1’.", "stderr_lines": ["", "Redirecting output to ‘wget-log.1’."], "stdout": "LICENSE\nREADME.md\nkubeseal", "stdout_lines": ["LICENSE", "README.md", "kubeseal"]}

PLAY RECAP ***************************************************************************************************************************************************************
192.168.64.14 : ok=34 changed=26 unreachable=0 failed=0 skipped=2 rescued=0 ignored=1
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

For simplicity, I have not copied the complete log output of the playbook. To verify that everything was successful, log in to the VM and run some basic checks.

gitops ssh -i /Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/install-vm/tasks/user_key vmadmin@192.168.64.14
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-118-generic aarch64)

vmadmin@ubuntu:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39eb69565bb1 registry:2 "/entrypoint.sh /etc…" 5 minutes ago Up 5 minutes 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp docker-registry
e1bb289aed3b kindest/node:v1.28.9 "/usr/local/bin/entr…" 8 minutes ago Up 8 minutes k8s-cluster-worker
d23cd6f60034 kindest/node:v1.28.9 "/usr/local/bin/entr…" 8 minutes ago Up 8 minutes 192.168.64.14:45129->6443/tcp k8s-cluster-control-plane

vmadmin@ubuntu:~$ kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
k8s-cluster-control-plane Ready control-plane 13m v1.28.9
k8s-cluster-worker Ready <none> 13m v1.28.9

vmadmin@ubuntu:~$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5dd5756b68-gpg5n 1/1 Running 0 8m27s
kube-system coredns-5dd5756b68-wbn2h 1/1 Running 0 8m27s
kube-system etcd-k8s-cluster-control-plane 1/1 Running 0 8m41s
kube-system kindnet-c4kp8 1/1 Running 0 8m28s
kube-system kindnet-jhgcs 1/1 Running 0 8m22s
kube-system kube-apiserver-k8s-cluster-control-plane 1/1 Running 0 8m42s
kube-system kube-controller-manager-k8s-cluster-control-plane 1/1 Running 0 8m41s
kube-system kube-proxy-69bl2 1/1 Running 0 8m22s
kube-system kube-proxy-8l8gk 1/1 Running 0 8m28s
kube-system kube-scheduler-k8s-cluster-control-plane 1/1 Running 0 8m41s
local-path-storage local-path-provisioner-75b59d495-z6qzp 1/1 Running 0 8m27s
vmadmin@ubuntu:~$
vmadmin@ubuntu:~$ docker login 192.168.64.14:5000
Username: mohit
Password:
WARNING! Your password will be stored unencrypted in /home/vmadmin/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
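The TLS login to the local registry works because the self-signed certificate generated in step 1 is trusted by the Docker daemon on the VM. The provision role presumably places the certificate in Docker's per-registry trust directory, along these lines (a sketch; the actual task and paths may differ):

- name: Trust the self-signed certificate for the local registry
  tags: provision
  become: true
  copy:
    src: docker-reg/certs/domain.crt                       # cert generated in step 1
    dest: /etc/docker/certs.d/192.168.64.14:5000/ca.crt    # Docker's per-registry CA path
    mode: "0644"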

Now we are ready to proceed to step 3.

Step 3: Flux-Bootstrap

In this stage, the Flux system components will be installed on the cluster and linked to a Git repository. Once Flux has been successfully installed, it will scan the Git repository for the additional custom resources that need to be deployed on the cluster. After this stage, the cluster will be ready to move forward with the application deployment, since all the components required for the demo will have been deployed.

ansible-playbook -v bootstrap.yml --tags flux-bootstrap -i hosts

➜  gitops ansible-playbook -v bootstrap.yml --tags flux-bootstrap -i hosts
No config file found; using defaults

PLAY [localhost] *********************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************
ok: [localhost]

PLAY [ubuntu] ************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************
[WARNING]: Platform linux on host 192.168.64.14 is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter
could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
ok: [192.168.64.14]

TASK [provision-vm : provision the VM with ansible] **********************************************************************************************************************
included: /Users/mohitverma/Documents/WR-Backup/STUDY-Mac/gitops/roles/provision-vm/tasks/playbook.yml for 192.168.64.14

TASK [provision-vm : checking for flux installation] *********************************************************************************************************************
ok: [192.168.64.14] => {"changed": false, "stat": {"exists": false}}

TASK [provision-vm : install the flux] ***********************************************************************************************************************************
changed: [192.168.64.14] => {"changed": true, "cmd": "sudo curl -s https://fluxcd.io/install.sh | sudo bash\n", "delta": "0:00:29.185792", "end": "2024-08-13 12:52:26.977469", "msg": "", "rc": 0, "start": "2024-08-13 12:51:57.791677", "stderr": "", "stderr_lines": [], "stdout": "[INFO] Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest\n[INFO] Using 2.3.0 as release\n[INFO] Downloading hash https://github.com/fluxcd/flux2/releases/download/v2.3.0/flux_2.3.0_checksums.txt\n[INFO] Downloading binary https://github.com/fluxcd/flux2/releases/download/v2.3.0/flux_2.3.0_linux_arm64.tar.gz\n[INFO] Verifying binary download\n[INFO] Installing flux to /usr/local/bin/flux", "stdout_lines": ["[INFO] Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest", "[INFO] Using 2.3.0 as release", "[INFO] Downloading hash https://github.com/fluxcd/flux2/releases/download/v2.3.0/flux_2.3.0_checksums.txt", "[INFO] Downloading binary https://github.com/fluxcd/flux2/releases/download/v2.3.0/flux_2.3.0_linux_arm64.tar.gz",


PLAY RECAP ***************************************************************************************************************************************************************
192.168.64.14 : ok=5 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=1
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

The playbook executes the command below to deploy the Flux components on the cluster and connect it to GitHub.

flux bootstrap github --components-extra=image-reflector-controller,image-automation-controller --owner=mvtech88 --token-auth=true --repository=fluxcd-demo --path=clusters/fluxbootstrap --personal
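Under the hood, flux bootstrap commits the controller manifests plus a sync configuration to clusters/fluxbootstrap/flux-system in the repository. The sync configuration is essentially a GitRepository source and a Kustomization pointing Flux back at that path; it looks roughly like this (a sketch of the generated gotk-sync.yaml, not copied from the repo):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system      # token-auth credentials created by the bootstrap
  url: https://github.com/mvtech88/fluxcd-demo.git
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./clusters/fluxbootstrap
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system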

Log in to the VM and check the flux-system components that have been deployed.

vmadmin@ubuntu:~$ kubectl get pods -n flux-system
NAME READY STATUS RESTARTS AGE
helm-controller-5f7457c9dd-k5fmd 1/1 Running 0 13m
image-automation-controller-79447887bb-bdv5w 1/1 Running 0 13m
image-reflector-controller-65df777f5c-cc4tm 1/1 Running 0 13m
kustomize-controller-5f58d55f76-rkwsm 1/1 Running 0 13m
notification-controller-685bdc466d-l5x4m 1/1 Running 0 13m
source-controller-86b8b57796-qk6ch 1/1 Running 0 13m

Flux CD will now take charge and automatically deploy the infrastructure defined in Git. The infra-components folder in the repository contains all the infrastructure deployment manifests needed by Flux.

➜  fluxcd-demo git:(main) tree  clusters/fluxbootstrap/infra-components
clusters/fluxbootstrap/infra-components

├── helm_repo_jenkins.yaml
├── helm_repo_nginxingress.yaml
├── helm_repo_sealed-secrets.yaml
├── kustomization_ingressnginx.yaml
├── kustomization_jenkins.yaml
├── kustomization_metallb.yaml
└── kustomization_sealed-secrets.yaml

➜ fluxcd-demo git:(main) cat clusters/fluxbootstrap/infra-components/kustomization_jenkins.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: jenkins
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./infra/jenkins    # <-- here you can see the manifest path for this infra component
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: client

➜ fluxcd-demo git:(main) tree infra
infra
├── ingress
│ ├── helm_release.yaml
│ ├── kustomization.yaml
│ ├── kustomizeconfig.yaml
│ └── my-nginxingress.yaml
├── jenkins
│ ├── helm_release.yaml
│ ├── jenkins_values.yaml
│ ├── kustomization.yaml
│ └── kustomizeconfig.yaml
├── metallb
│ ├── L2adv.yaml
│ ├── configmap.yaml
│ ├── kustomization.yaml
│ ├── metallb.yaml
│ └── secret.yaml
└── sealed-secrets
├── helm_release.yaml
└── kustomization.yaml
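To connect the two trees: infra-components holds the Flux Kustomizations and HelmRepository sources, while infra holds the actual releases. As a rough sketch of the Jenkins pieces (illustrative only; the real helm_repo_jenkins.yaml and infra/jenkins/helm_release.yaml are in the repository, and the API versions there may differ):

# clusters/fluxbootstrap/infra-components/helm_repo_jenkins.yaml (sketch)
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: helmrepo-jenkins
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.jenkins.io
---
# infra/jenkins/helm_release.yaml (sketch)
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
spec:
  interval: 10m
  chart:
    spec:
      chart: jenkins
      version: "5.4.3"
      sourceRef:
        kind: HelmRepository
        name: helmrepo-jenkins
        namespace: flux-system
  valuesFrom:
    - kind: ConfigMap
      name: jenkins-values    # illustrative; the repo wires jenkins_values.yaml in via kustomizeconfig.yaml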

Once Flux reconciles the Git repository, all the infra components are deployed and up and running. This will take some time, depending on your internet speed.


vmadmin@ubuntu:~$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-8gmx7 1/1 Running 0 27m
ingress-nginx-controller-nn7ql 1/1 Running 0 27m

vmadmin@ubuntu:~$ kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-67d9f4b5bc-fqgpk 1/1 Running 0 55m
speaker-59rgg 1/1 Running 0 55m
speaker-l7jl4 1/1 Running 0 55m

vmadmin@ubuntu:~$ kubectl get pods -n sealed-secrets
NAME READY STATUS RESTARTS AGE
sealed-secrets-68b7d8c5f4-mfwkb 1/1 Running 0 55m

vmadmin@ubuntu:~$ kubectl get pods -n jenkins
NAME READY STATUS RESTARTS AGE
jenkins-0 1/1 Running 0 55m

We can also check the status of the artifacts monitored by Flux using the flux CLI installed during the provisioning step.

vmadmin@ubuntu:~$ flux get sources git
NAME REVISION SUSPENDED READY MESSAGE
flux-system main@sha1:c318ecc5 False True stored artifact for revision 'main@sha1:c318ecc5'

vmadmin@ubuntu:~$ flux get sources helm
NAME REVISION SUSPENDED READY MESSAGE
helmrepo-jenkins sha256:d338f7da False True stored artifact: revision 'sha256:d338f7da'
helmrepo-nginxingress sha256:a03f6d02 False True stored artifact: revision 'sha256:a03f6d02'
helmrepo-sealed-secrets sha256:26320ce7 False True stored artifact: revision 'sha256:26320ce7'

vmadmin@ubuntu:~$ flux get kustomization
NAME REVISION SUSPENDED READY MESSAGE
flux-system main@sha1:c318ecc5 False True Applied revision: main@sha1:c318ecc5
jenkins main@sha1:c318ecc5 False True Applied revision: main@sha1:c318ecc5
metallb main@sha1:c318ecc5 False True Applied revision: main@sha1:c318ecc5
nginx-ingress main@sha1:c318ecc5 False True Applied revision: main@sha1:c318ecc5
sealed-secrets main@sha1:c318ecc5 False True Applied revision: main@sha1:c318ecc5

vmadmin@ubuntu:~$ flux get helmrelease
NAME REVISION SUSPENDED READY MESSAGE
jenkins 5.4.3 False True Helm install succeeded for release jenkins/jenkins.v1 with chart jenkins@5.4.3
nginx-ingress 4.0.18 False True Helm upgrade succeeded for release ingress-nginx/ingress-nginx.v2 with chart ingress-nginx@4.0.18
sealed-secrets 2.16.1 False True Helm install succeeded for release sealed-secrets/sealed-secrets.v1 with chart sealed-secrets@2.16.1

Now we are ready to configure Jenkins, deploy the application, and test the workflow with the Flux image-automation-controller.
