
Manage your Kubernetes clusters with Flux2

9 min read · Sep 2, 2021

In this article, we will look at how to manage Kubernetes clusters using Flux 2 and the GitOps methodology.

GitOps: a method based on synchronization

GitOps is a set of good practices for automating the deployment of your containerized applications and infrastructure. Rather than pushing changes, you pull and sync code changes into your Kubernetes cluster.

In simple words:
GitOps is a way of doing DevOps using synchronization mechanisms and Git repositories as the source of truth for all configuration and code.

In the classic model, developers use a CI/CD pipeline (GitLab CI, for example) to build and deploy their applications, while Ops manage the clusters and set up the infrastructure services.

In the new GitOps model, no manual actions or changes are allowed directly in the cluster.

Instead of making changes directly, the cluster state is now synchronized against a Git code repository.

Infrastructure services and applications are installed and updated automatically when changes are detected in the repository. This is done by controllers running in the clusters, which detect changes in the source code or in the Docker image tags.

Flux 2

Flux is a tool for syncing Kubernetes clusters with a code repository and automating updates when there is new code or a modification to deploy.

It’s important to note that every action is pull-based: Flux, and the operations Kubernetes performs to keep its state up to date, connect to the resources directly. This is really great for security: you no longer need to expose your Kubernetes API to external CI/CD tools, SaaS ones for example.

Flux version 2 (“v2”) is built from the ground up to use Kubernetes’ API extension system, and to integrate with Prometheus and other core components of the Kubernetes ecosystem. In version 2, Flux supports multi-tenancy and syncing an arbitrary number of Git repositories, among other long-requested features. Flux v2 is constructed with the GitOps Toolkit, a set of composable APIs and specialized tools for building Continuous Delivery on top of Kubernetes.

Flux 2 is built around controllers that manage the lifecycle of your sources and deployments, whether they are Helm packages or standard Kubernetes YAML files. The core controllers are source-controller, kustomize-controller, helm-controller and notification-controller, plus optional image-reflector and image-automation controllers.

Why is it cool?

The major thing is that at any time you know that your Kubernetes clusters are set up in the desired state defined in a code repository. This means that you don’t have to manipulate the cluster on your own; thanks to the sync mechanism, everything is fully automated.
Traditionally, when you do classic cluster manipulations with CI/CD tools, between the last delivery and the next one you do not know the state of your cluster.

Imagine, for example, having many Kubernetes clusters to manage in your organization and needing to change a common resource. With this approach, the clusters themselves will notice that they have a resource to modify in their configuration. You no longer have to connect to the Kube API of each cluster to order it to make the changes.

Manipulating resources on clusters is therefore easier and faster, but also more industrialized and controlled.

Manipulating configuration with Kustomize

Kustomize is used to build the configuration code and also to manipulate Helm charts and their values. For example, you can set custom Helm values for different Kubernetes clusters using Kustomize templating, without duplicating code. This keeps as much common configuration code as possible in one place; only the cluster-specific parameters live in separate files.

Automatic image update

Automatic image update is done by Flux 2 by scanning the container image registry and triggering a deployment update based on a tag policy.

Three types of policies are supported:

  • Regex
  • Calendar Versioning
  • Semantic Versioning

This feature is optional, but semantic versioning is probably the best way to manipulate and control image updates.
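
As a sketch, here is what a semver policy could look like with the Flux image automation API (the resource names and image URL below are hypothetical):

apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  image: registry.example.com/my-app # hypothetical image to scan
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: my-app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-app
  policy:
    semver:
      range: ">=1.0.0" # only select stable releases >= 1.0.0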

If a new tag allowed by the policy is detected when Flux scans the image registry, Flux automatically replaces the tag in the YAML files of the deployment and pushes the change to the remote Git repository. This is called Git reconciliation in Flux terminology.

Then the synchronization triggers the deployment update with the new image. This is really useful for doing continuous delivery and ensuring the repository matches the current K8s cluster state.
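
The Git write-back itself is configured with an ImageUpdateAutomation resource. A minimal sketch, assuming the default flux-system GitRepository created by the bootstrap, and deployment YAML annotated with the usual # {"$imagepolicy": "flux-system:my-app"} marker next to the image tag:

apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  git:
    checkout:
      ref:
        branch: master
    commit:
      author:
        name: fluxcdbot
        email: fluxcdbot@users.noreply.github.com
      messageTemplate: "chore: automated image update"
    push:
      branch: master
  update:
    path: ./clusters # where Flux rewrites the image tags
    strategy: Setters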

Simple use case: set up infrastructure services on 2 clusters

To get started easily and quickly illustrate the power of Flux 2, we’ll take the following use case: manage the infrastructure services configuration of two Kubernetes clusters called “kube-prod” and “kube-dev”, a production cluster and a development cluster. We want to automatically deploy the Helm charts of Ingress-nginx first, with the default configuration, then Prometheus-operator with a different configuration for each cluster.

Flux Setup and credentials

Install the Flux CLI and configure your local kubeconfig contexts (prod & dev):

$ curl -s https://toolkit.fluxcd.io/install.sh | sudo bash

Now that the Flux CLI binary is set up in your local environment, you can set up Flux on each Kubernetes cluster.

$ export GITHUB_TOKEN=xxxxxxmysupertokenxxxxx
$ flux bootstrap github --owner=cyrilbkr --repository=flux-multicluster-example --branch=master --token-auth --personal --path=clusters/kube-prod

The nice thing here is that integrating a cluster with Flux is as simple as a single command. You can of course integrate this installation process into an infrastructure-as-code tool like Terraform very easily.
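
Then do the same for the dev cluster: switch your kubeconfig context to kube-dev and run the same bootstrap command, pointing --path at the dev directory:

$ flux bootstrap github --owner=cyrilbkr --repository=flux-multicluster-example --branch=master --token-auth --personal --path=clusters/kube-dev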

Code Repository

Let’s now take a look at the Flux configuration we want to set up on our Kubernetes clusters, using Kustomize to keep duplicated declarations to a minimum.

You can find the source code of this complete example here on my public GitHub and try it yourself on your clusters.

Here is the final file hierarchy:
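
The exact file names below are a sketch based on the resources we create in the rest of this article; the flux-system folders are generated by the bootstrap command.

.
├── clusters
│   ├── kube-prod
│   │   ├── flux-system/
│   │   ├── common.yaml
│   │   └── monitoring.yaml
│   └── kube-dev
│       ├── flux-system/
│       ├── common.yaml
│       └── monitoring.yaml
└── infrastructure
    ├── common
    │   ├── kustomization.yaml
    │   └── ingress-nginx.yaml
    └── monitoring
        ├── base
        │   ├── kustomization.yaml
        │   └── prometheus-operator.yaml
        ├── kube-prod
        │   ├── kustomization.yaml
        │   └── prometheus-operator-values.yaml
        └── kube-dev
            ├── kustomization.yaml
            └── prometheus-operator-values.yaml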

As you can see, we have two main folders at the top, called “clusters” and “infrastructure”.
The clusters directory contains the list of resources for each cluster; this is related to the --path option of the Flux CLI bootstrap command shown earlier (example: --path=clusters/kube-prod). Flux scans this sub-directory to determine which resource definitions it needs to apply.
The infrastructure directory contains our resource definitions using Kustomize. It’s here that we build the code to set up the Nginx-ingress and Prometheus-operator Helm charts.

Sources

Before writing our resource definitions, we need to declare where the Helm sources are available. To do this, we create a HelmRepository definition specifying the Helm repository URL and the Flux monitoring interval. Flux uses these parameters to know where the Helm package can be downloaded and how frequently it needs to check the repository.

In my example, I stored all my source definitions in a single resource called “common”, but you can also split your source definitions across each resource directory.

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: ingress-nginx
spec:
  interval: 30m
  url: https://kubernetes.github.io/ingress-nginx

In this declaration we specify the Nginx-ingress Helm package URL and the check interval (30m). Don’t forget to add this to your Kustomize definition:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
resources:
- ingress-nginx.yaml

Nginx-ingress

Let’s start with the simple part of our use case: setting up the Nginx-ingress Helm package with the same Kubernetes configuration on each cluster.

Add a Kustomize definition including the namespace used and the chart definition:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
resources:
- namespace.yaml
- nginx-ingress.yaml

Then create the Nginx-ingress Helm resource definition:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: nginx-ingress
spec:
  releaseName: nginx-ingress
  chart:
    spec:
      chart: ingress-nginx
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: flux-system
      version: "3.23.0"
  interval: 1h0m0s
  install:
    remediation:
      retries: 3
  values:
    controller:
      kind: DaemonSet

The Nginx-ingress HelmRelease definition references the Nginx-ingress Helm source we created previously. We also customize the Helm chart a little by overriding values from the chart’s traditional values.yaml file; in this case we specify that Nginx-ingress runs as a DaemonSet (by default, it’s a Deployment).

Last step: we now need to assign this definition to our clusters.

Here is the common.yaml file of the dev cluster (kube-dev):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: common
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/common
  prune: true
  validation: client
  healthChecks:
    - apiVersion: apps/v1
      kind: DaemonSet
      name: nginx-ingress-ingress-nginx-controller
      namespace: ingress-nginx

The path parameter defines where the Nginx-ingress resource definitions are available (./infrastructure/common).

We also set up a health check, allowing Flux to verify that Nginx-ingress is running successfully and to consider the managed resource operational.

You can now commit your change to the repository and check with the Flux CLI that Flux applies the modification automatically:

$ flux get sources helm
NAME            READY   MESSAGE
ingress-nginx   True    Fetched revision: a0a1a2a0a1a2a0a1a2

$ flux get helmreleases -n ingress-nginx
NAME            READY   MESSAGE                            REVISION
nginx-ingress   True    Release reconciliation succeeded   3.23.0
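
If you don’t want to wait for the next sync interval, the Flux CLI can force an immediate reconciliation (assuming the default flux-system source name created by the bootstrap):

$ flux reconcile source git flux-system
$ flux reconcile kustomization common --with-source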

You can also check manually with kubectl that everything is correctly set up on both clusters:

$ kubectl get pod -n ingress-nginx
NAME                                           READY   STATUS
nginx-ingress-ingress-nginx-controller-6thsk   1/1     Running

Prometheus

Now that you know how to set up resources on your clusters, let’s take a more advanced example. We will create the configuration to manage Prometheus, but with different parameters for each cluster. On the production cluster we want 3 Prometheus replicas and a metrics retention of 1 month, while on the dev cluster we want only 1 replica and a lighter retention of 1 week. To do this, we’ll use the power of Kustomize to keep as much common code as possible and only define the parameters specific to each cluster.

As we did earlier, we must first add the source where the Helm chart is stored:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: prometheus-community
spec:
  interval: 30m
  url: https://prometheus-community.github.io/helm-charts

Then create a directory called monitoring with 3 sub-directories containing the base code and the cluster-specific code.

  • base: the common code definition

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: prometheus-operator
spec:
  releaseName: prometheus-operator
  chart:
    spec:
      chart: kube-prometheus-stack
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
      version: "13.13.1"
  interval: 1h0m0s
  install:
    remediation:
      retries: 3
  values:
    grafana:
      enabled: false
    alertmanager:
      enabled: false
    prometheus:
      ingress:
        enabled: false

  • kube-prod: specific values for the production cluster

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: prometheus-operator
  namespace: monitoring
spec:
  values:
    prometheus:
      prometheusSpec:
        replicas: 3
        retention: 30d

Don’t forget to add the Kustomize patchesStrategicMerge definition in your kustomization.yaml. This allows Kustomize to merge the common code with the cluster-specific values:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patchesStrategicMerge:
- prometheus-operator-values.yaml

  • kube-dev: specific values for the development cluster

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: prometheus-operator
  namespace: monitoring
spec:
  values:
    prometheus:
      prometheusSpec:
        replicas: 1
        retention: 7d

Again, don’t forget to add this to your Kustomize definition, as we did earlier for the production environment, and commit your code.
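
Each cluster also needs a Flux Kustomization pointing at its own overlay, following the same pattern as common.yaml earlier. A sketch for the dev cluster (the monitoring.yaml file name under clusters/kube-dev is an assumption):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: monitoring
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/monitoring/kube-dev
  prune: true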

Verify that everything is working on the Flux side on both clusters:

$ flux get sources chart
NAME                             READY   MESSAGE                     REVISION
monitoring-prometheus-operator   True    Fetched revision: 13.13.1   13.13.1

$ flux get helmreleases -n monitoring
NAME                  READY   MESSAGE                            REVISION
prometheus-operator   True    Release reconciliation succeeded   13.13.1

And finally, check manually with kubectl, on the production cluster for example, that we have 3 replicas of Prometheus:

$ kubectl get pod -n monitoring
NAME                               READY   STATUS    RESTARTS
prometheus-operator-prometheus-0   2/2     Running   1
prometheus-operator-prometheus-1   2/2     Running   1
prometheus-operator-prometheus-2   2/2     Running   1

Awesome! :)

You now know how to manage the deployments of your Kubernetes clusters in a synchronized and industrialized way, thanks to Flux and Kustomize!


Written by Cyril Becker

Head of Infrastructure @ XBTO
