Part 2: How to Improve Your Kubernetes CI/CD Pipelines with GitLab and Open Source
In my previous article, “Part 1: How Containerized CI/CD Pipelines Work with Kubernetes and GitLab”, I wrote about Kubernetes’ popularity and importance in 2019. I also described the advantages that containerized pipelines with GitLab CI/CD and Kaniko offer. In this post, I would like to introduce more open source projects and GitLab features that help you deploy and run your cloud native applications.
Enhance Application Deployments
Now let’s get back to application deployment and introduce the open source project Kustomize. Kustomize, which is part of the Kubernetes project and sponsored by SIG CLI, lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. Kustomize is a standalone CLI tool that is also integrated into kubectl by default (since v1.14).
For me, Kustomize is the perfect tool for deploying containerized applications into a Kubernetes cluster using continuous delivery (CD). It lets us define customizations declaratively, so we can deploy our applications to different environments without duplicating our code. Unlike other deployment tools, Kustomize introduces minimal overhead and focuses only on the features needed in an automated CD pipeline.
To customize our existing YAML manifests, we only need to define our customizations in a kustomization.yaml file, which Kustomize then uses as a ruleset to build the resulting YAML definitions. Let me give you an example (you can review the whole example here):
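The exact contents depend on your project; a minimal sketch of such a kustomization.yaml, with assumed base paths and illustrative patch file names, could look like this:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Reference the original, untouched YAML manifests
bases:
  - ../../base

# Prefix all resource names with dev-
namePrefix: dev-

# Add an env=dev label to all resources
commonLabels:
  env: dev

# Patch the base manifests; the file names here are placeholders
patchesStrategicMerge:
  - replica-count.yaml
  - container-env.yaml
```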
This kustomization.yaml, for example, is used to deploy an application into a development environment. Therefore, it customizes our existing YAML manifest (linked via the bases parameter) and adds some specific configurations:
- adds an env=dev label to all resources.
- patches the existing YAML manifest based on the defined files; in this example, it updates the replica count and adds specific container environment variables.
- adds the name prefix dev- to all resources.
In a complex deployment, we can also define multiple customization definitions. The Kustomize documentation provides a good overview of possible use cases and further customization options.
Let’s have a look at how we can integrate Kustomize into a containerized pipeline (you can review the whole example here):
```yaml
- kubectl apply -k deploy/overlay/$ENV
```
Once again, we have a pipeline definition with only one job. The script section is executed in an Alpine-based container that provides kubectl. The job runs the kubectl CLI with the apply parameter; -k tells kubectl to use the Kustomize plugin. Both parameters are followed by the path where our deployment files are located. In this example, we use a pipeline variable to select the customizations we would like to deploy.
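Put together, the complete job could look like the following sketch. The job name, stage, and image are assumptions; any image that ships kubectl will work, and the entrypoint is cleared so GitLab can run the script section:

```yaml
deploy:
  stage: deploy
  # Assumption: a public image providing kubectl
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # Apply the manifests customized by the overlay selected via $ENV
    - kubectl apply -k deploy/overlay/$ENV
```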
Secure Your Application Ingress
Now that we can build and deploy our application using containerized pipelines, let’s look at how to secure the application workload running in our Kubernetes cluster.
When integrating an existing Kubernetes cluster with a GitLab project or group, we can opt in to install an Ingress controller. The deployed Ingress controller is called the GitLab Web Application Firewall (WAF). The GitLab WAF provides real-time security monitoring based on an NGINX proxy with the ModSecurity module enabled. The OWASP core rule set, enabled by default, is customized based on GitLab’s best practices and is configured in detection-only mode. Of course, it is possible to enable further security settings if needed. The Web Application Firewall helps you detect and prevent cross-site scripting as well as SQL injection attacks.
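As an illustration of one such setting, ModSecurity can be switched from detection-only to blocking mode per Ingress via annotations; the sketch below assumes the NGINX Ingress controller, and the resource and host names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app  # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
    # SecRuleEngine On switches ModSecurity from detection-only to blocking
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine On
spec:
  rules:
    - host: app.example.com  # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```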
As mentioned above, the GitLab WAF is configured in detection-only mode by default. The Web Application Firewall logs all security-related issues to an audit log (/var/log/modsec/audit.log) in the Ingress controller pod, which can be forwarded to any log management solution for further analysis or alerting. An example output of a security issue:
Why You Should Only Care About Your Business Logic
In the last part of this post, I cover serverless, another approach that has attracted a lot of attention over the past year. One way to describe serverless is that you stop caring about your servers and infrastructure and focus only on your business logic, or the problem you would like to solve. With GitLab Serverless you can do exactly that. GitLab Serverless bundles Knative, Kaniko and Istio, which are all open source projects built on top of Kubernetes and which abstract away the complex details to allow developers to focus on what matters.
GitLab Serverless automatically builds a container image without us providing a Dockerfile, deploys it to Kubernetes and automatically scales it based on user needs. This is done in a Function-as-a-Service (FaaS)-like manner, which also allows us to scale our application to zero to save resources and money on an as-needed basis.
Once we have configured GitLab Serverless on our Kubernetes cluster, we only need two files in our project: a GitLab CI definition, plus either a serverless.yaml describing our function or a Dockerfile describing our containerized application. In the example below, we deploy a Node.js-based function (you can review the whole example here).
The .gitlab-ci.yml that defines the pipeline to build and deploy our function:
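A minimal sketch of such a pipeline, assuming GitLab’s Serverless.gitlab-ci.yml template and the build and deploy job templates it provides, could look like this:

```yaml
# Pull in GitLab's predefined serverless jobs
include:
  - template: Serverless.gitlab-ci.yml

# Build the function image (via Kaniko, handled by the template)
functions:build:
  extends: .serverless:build:functions
  environment: production

# Deploy the function to Knative on the attached cluster
functions:deploy:
  extends: .serverless:deploy:functions
  environment: production
```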
A serverless.yaml that describes the functions and required runtime:
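As a sketch of such a file — the service, function, and directory names are placeholders, and the runtime URL assumes GitLab’s Node.js runtime repository:

```yaml
service: functions  # placeholder service name
description: "A Node.js function deployed with GitLab Serverless"
provider:
  name: triggermesh
functions:
  echo:                # placeholder function name
    handler: echo
    source: ./echo     # directory containing the function code
    runtime: https://gitlab.com/gitlab-org/serverless/runtimes/nodejs
    description: "Returns the request body"
```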
GitLab Serverless also provides us with detailed metrics on the scaling of our function:
All examples and code snippets are available here. My previous article “How Containerized CI/CD Pipelines Work with Kubernetes and GitLab” details how to create containerized pipelines with GitLab CI/CD and Kaniko. You can also see a live recording of my talk at GitLab Commit 2020 in San Francisco on containerized pipelines, Kubernetes and open source in general:
This article was first published on The New Stack.