Kustomize your Kubernetes deployment

Engineering at Zafin
Nov 28, 2023

By Robin Huiser, Chief Architect, Marc Charbonneau, Senior Architect, and Lloyd Chandran, Principal Architect

Zafin moved away from traditional on-premises deployments for its Billing & Fees engine and offers its services today as a hosted solution in the form of Software as a Service (SaaS).

Most customers have migrated to our hosted solution in Microsoft Azure as part of a lift-and-shift exercise covering every environment (SIT, UAT, and PROD). Building on that move, Zafin has optimized its delivery, deployment, and support model for cloud-native delivery combined with a developer-centric approach.

This article describes our journey — moving from a traditional continuous deployment model to full GitOps-enabled deliveries and deployments to address the challenges we had with traditional pipelines:

  • Complex Configuration Management: Traditional CD pipelines often rely on manually managing configuration files and scripts for deployment, which can become complex and error prone as the application grows.
  • Manual Intervention: Traditional CD pipelines may require manual intervention for tasks such as approval gates, which can slow down the release process and introduce human errors.
  • Difficult Rollbacks: In traditional CD pipelines, rolling back to a previous version can be challenging and time-consuming, especially if the rollback involves multiple components.
  • Inconsistent Environments: Maintaining consistent development, testing, and production environments can be difficult in traditional CD pipelines, leading to potential issues when promoting code to different stages.
  • Limited Visibility: It can be challenging to gain a comprehensive view of the deployment process and monitor changes in real-time, making it harder to detect and respond to issues quickly.
  • Scalability: As the complexity and scale of applications increase, traditional CD pipelines may struggle to handle the increased load efficiently.

Why GitOps?

To overcome the shortcomings of the traditional continuous deployment model described above and to fulfil the DevOps promise “You build it, you deploy it”, we needed a process which would enable developers to take ownership of the deployment in the highly regulated environment of finance where Zafin operates.

Before we dive into the details, let’s first understand what GitOps is all about and why Zafin embraced this methodology for its internal software delivery to the cloud engineering teams:

  • Infrastructure as Code (IaC) Alignment: GitOps treats infrastructure as code, just like application code. This means that both the application code and the infrastructure code are stored in the same version control system (e.g., Git). This alignment simplifies management, versioning, and auditing of both application and infrastructure changes.
  • Declarative Configuration: GitOps relies on declarative configurations stored in a Git repository. This means you specify the desired state of your infrastructure and applications rather than writing imperative scripts to perform deployments. Declarative configurations are typically easier to understand, maintain, and version.
  • Version Control: GitOps leverages the powerful version control capabilities of Git. Every change made to the infrastructure or application configuration is versioned, allowing you to track who made the change, when it was made, and why it was made. This audit trail is invaluable for troubleshooting and compliance purposes.
  • Infrastructure Automation: GitOps encourages automation through tools like Kubernetes operators or custom controllers. This ensures that infrastructure and application deployments are consistent, repeatable, and less error prone. Automation also helps reduce the manual effort required to manage and deploy changes.
  • Rollback and Rollforward: GitOps enables easy rollback to previous configurations by reverting changes in the Git repository. Additionally, it supports rollforward, which allows you to apply hotfixes or updates by simply pushing new configurations to the repository, making it easy to respond to issues quickly.
  • Observability and Monitoring: GitOps platforms often include built-in observability and monitoring capabilities. They can automatically monitor the state of your applications and infrastructure, trigger alerts, and even auto-remediate issues based on defined policies.
  • Policy Enforcement: GitOps allows you to enforce policies and best practices by defining them in your Git repository. These policies can include security, compliance, and deployment guidelines. When changes are made, the GitOps platform can automatically check them against these policies, preventing unauthorized or non-compliant changes.
  • Collaboration and Code Review: GitOps encourages collaboration among development, operations, and security teams. Changes go through the same code review and pull request process as application code, promoting better collaboration and visibility across teams.
  • Multi-Environment Support: GitOps is well-suited for managing multiple environments (e.g., development, staging, production) with ease. You can maintain separate branches or directories for different environments in your Git repository and manage their configurations independently.
  • Disaster Recovery and High Availability: GitOps practices can help you implement disaster recovery and high availability strategies effectively. By versioning and replicating infrastructure configurations, you can easily recreate your environments in case of failures or disasters.
  • Cross-Platform Compatibility: GitOps is not limited to a specific cloud provider or technology stack. It can be used with various infrastructure orchestration tools, including Kubernetes, Terraform, and cloud-specific services, making it flexible and adaptable to different environments.
Figure 1 — GitOps in a nutshell

In summary, GitOps offers a more streamlined, automated, and transparent approach to continuous delivery, aligning infrastructure and application code in a single version-controlled repository. This approach improves collaboration, repeatability, auditability, and agility in managing complex software systems. While traditional CD pipelines are still valuable in some scenarios, GitOps has emerged as a powerful alternative for modern, cloud-native applications, moduliths, and microservices architectures.

Source, configuration, and deployment manifest as code

Ownership is generally only felt when it impacts your day-to-day work as a developer. We therefore embraced the principle of “eating your own dog food”: developers are not only responsible for solving code updates and issues during development, but also use, maintain, and extend the same configuration and deployment manifests in their local dev environment. These manifests ship with the product and form the basis for the cloud team to extend and adapt for specific customer environments.

As changes to configuration, secrets management, and deployment manifests would become business as usual, a traditional pipeline simply could not provide this flexibility. GitOps was, for us, the way to enable flexibility, increase testability, stimulate ownership, and remove complexity from the chain.

So how do we enable flexibility and extensibility in such a way that developers can focus on writing business logic and only need to concern themselves with application configuration and basic, environment-agnostic deployment descriptors?

We started to look for “the glue”: deployment manifests that can be maintained by developers, yet extended wherever a specific customer configuration requires it. Customer- and environment-specific concerns such as encryption would still live in a (remote) Git repository separate from the main application repo, in line with the twelve-factor app’s single codebase principle.

The answer was to embrace a deployment framework that is natively supported by Kubernetes tooling and easy for developers to learn. Keep on reading; the next section dives deeper.

Why Kustomize?

Kustomize, Helm, and Jsonnet are three popular tools in the Kubernetes ecosystem that help manage and configure Kubernetes applications and resources. Each tool serves a slightly different purpose and has its own strengths and use cases.

Before we dive into “which one is better”, let’s quickly go over what Kustomize is: Kustomize is a configuration management tool for Kubernetes that helps you customise and manage your Kubernetes manifests (YAML files) in a declarative and scalable way. It allows you to create reusable templates and overlays for your Kubernetes resources, making it easier to manage configurations for different environments (e.g., development, staging, production) and share common configurations across multiple applications.

Here’s a high-level explanation of how Kustomize works:

  • Base Resources: Start with a set of base Kubernetes resource manifests. These are typically YAML files that define your application’s deployment, services, config maps, secrets, and other resources.
  • Kustomization Files: Create a Kustomization file (usually named kustomization.yaml) in the same directory as your base resources. This file is used to specify how you want to customize your resources and what overlays to apply.
  • Customization Rules: Inside the Kustomization file, you define customization rules to modify the base resources. Common customization operations include adding, modifying, or removing fields in the YAML files, changing resource names, and specifying additional resources to include.
  • Overlays: You can create overlays for different environments or scenarios. Overlays are directories containing their own Kustomization files, which inherit and extend the settings from the base resources. Overlays allow you to make environment-specific changes without modifying the base resources directly.
  • Kustomize Build: To generate the final Kubernetes manifests, you use the kustomize build command on the directory containing the Kustomization file. Kustomize combines the base resources with the specified customization rules and overlays to produce a customized set of Kubernetes YAML files.
  • Deployment and Management: You can apply the generated YAML files to your Kubernetes cluster using kubectl apply or other deployment tools. These customized manifests reflect the desired configuration for your specific environment or scenario.
Figure 2 — Applying generated YAML files to your Kubernetes clusters
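To make these steps concrete, here is a minimal sketch of a base and an overlay; the resource and directory names are illustrative rather than taken from our actual repositories:

```yaml
# base/kustomization.yaml -- the environment-agnostic base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml -- inherits the base and patches it
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  # Bump the replica count for production without touching the base files.
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
```

Running kustomize build overlays/production renders the final manifests, which can then be applied with kubectl apply.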

The list below covers the reasons why Kustomize was chosen over Helm (Jsonnet being the less popular option nowadays).

  1. Native Kubernetes Resources: Kustomize operates at the level of native Kubernetes resources (YAML files), making it more aligned with Kubernetes conventions. In contrast, Helm uses its own templating language and packaging format, which can introduce a learning curve and potential compatibility issues.
  2. Declarative Configuration: Kustomize follows a declarative approach, allowing you to define desired states without procedural logic. This makes it easier to understand and maintain configurations, especially for complex applications.
  3. Modularization: Kustomize encourages a modular approach to managing Kubernetes resources. You can break down your configurations into smaller, reusable components and compose them together, leading to cleaner and more maintainable code compared to Helm’s charts, which can become monolithic.
  4. Fine-grained Customization: Kustomize provides fine-grained customization through overlays. You can create overlay files that selectively modify specific parts of your base configuration, allowing for incremental adjustments without duplicating entire resource definitions.
  5. GitOps-Friendly: Kustomize aligns well with GitOps practices, where your configuration is stored in a Git repository and changes are applied automatically when there are commits to the repository. This approach simplifies the deployment and management of applications in a Kubernetes environment.
  6. Versioning and Dependency Management: Helm uses a package-centric approach with its own package manager (Helm charts). Managing dependencies between Helm charts can be challenging. Kustomize, on the other hand, does not have its own package manager and relies on native Kubernetes resource references, making it easier to manage dependencies.
  7. Simplicity: Kustomize is known for its simplicity and minimal learning curve. It does not require installing additional software or plugins, making it more accessible for developers and operators who are new to Kubernetes.
  8. Open Composition: Kustomize allows you to compose configurations from multiple sources, including remote URLs and local directories. This flexibility makes it easier to reuse configurations and integrate external resources into your application.
  9. Continuous Integration (CI) and Continuous Deployment (CD) Integration: Kustomize can be seamlessly integrated into CI/CD pipelines for automation. It simplifies the process of applying changes and updates to Kubernetes resources as part of your deployment workflow.
  10. Community and Ecosystem: Kustomize has gained popularity within the Kubernetes community and is actively maintained. It benefits from the broader ecosystem of Kubernetes tools and practices.

While Kustomize offers these advantages over Helm in certain use cases, it’s essential to note that the choice between Kustomize and Helm depends on specific requirements and preferences.

Zafin does not need to provide installable packages for third parties to deploy into remotely managed Kubernetes clusters, which removes the need for a high degree of configurability between internally hosted environments. Given this, Kustomize was selected after weighing Helm’s benefits (as a “package manager”) against its limitations (every possible switch and toggle an environment might need has to be pre-configured in the installer) and the additional skill set required of development teams to create such a package.

Developer optimized experience

With flexibility comes complexity. How do we shield developers from the extended workflows introduced on the dev workstation, while still ensuring that compiling the code, building the container, and deploying with the manifests happens on every change?

Figure 3 — Streamlined Developer-Centric Workflow

Tilt and Kustomize are both tools that can help developers optimize their workflows when working with Kubernetes, but they serve slightly different purposes. Tilt is designed to streamline the development and deployment process by automating tasks like building container images and deploying them to a Kubernetes cluster. Kustomize, on the other hand, is a tool for customizing Kubernetes configurations through overlays and patches.

To deploy applications to a local Kubernetes environment using Tilt and Kustomize together, the developers can follow these general steps:

  • Install Tilt: First, you will need to install Tilt on your local development machine. You can usually do this by downloading the binary from the Tilt website or by using a package manager like brew if you’re on macOS.
  • Initialize Your Project: Navigate to your project directory and create a Tiltfile. This file will define how Tilt should manage your project. You can create one from scratch or use a Tiltfile template if available.
  • Set Up Kustomize: Ensure that your project is using Kustomize for managing Kubernetes configurations. This typically involves organizing your Kubernetes manifests into a directory structure that Kustomize understands.
  • Define Tilt Configuration: In your Tiltfile, you will specify how Tilt should build and deploy your application. You’ll use Tilt’s DSL (Domain-Specific Language) to describe your project’s dependencies, build steps, and deployment targets; a simplified example follows this list.
  • Start Tilt: Run tilt up in your project directory. Tilt will start monitoring your project files and automatically trigger actions like building container images and deploying them to your local Kubernetes cluster.
  • View Logs and Status: Tilt provides a real-time dashboard and logs for your project, making it easy to monitor changes and see the status of your application.
  • Iterate and Develop: As you make changes to your code or Kubernetes configurations, Tilt will automatically rebuild and redeploy as needed, providing a streamlined development loop.
  • Clean Up: When you’re done, you can stop Tilt by running tilt down.
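The simplified Tiltfile referenced above could look like the sketch below; the image name, Kustomize directory, and port are hypothetical assumptions rather than our actual configuration:

```python
# Tiltfile (Starlark) -- illustrative sketch, names are hypothetical

# Rebuild the container image whenever files in the build context change.
docker_build('registry.example.com/my-app', '.')

# Render the Kustomize base and hand the resulting manifests to Tilt.
k8s_yaml(kustomize('deploy/base'))

# Group the workload as a Tilt resource and forward a local port for testing.
k8s_resource('my-app', port_forwards=8080)
```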

By combining Tilt and Kustomize, you can create a development workflow that automates many repetitive tasks in building and deploying Kubernetes applications. This allows you to focus on writing code and configurations while ensuring your changes are quickly reflected in your local Kubernetes environment. Remember that the specific details and features of these tools may have evolved since we wrote this blog, so it’s a good idea to consult the official documentation for the most up-to-date information and leading practices.

End to end overview & hands-on

The whole process, from local development with Tilt to deployment in a higher environment, is executed in four main steps:

Figure 4 — Tilt Local Development to Deployment Overview
  1. The developer working with Tilt takes full ownership of: the binary (code compiled via Maven), the configuration of the app (Kustomize base resources describing configmaps and secrets), and the deployment of the app (Kustomize base resources using replicasets for example). Changes are committed to the “app repo”.
  2. The Git repo is tagged with a version and a pipeline (CI) builds from the “app repo” and pushes the image to the Docker registry with the same version as the tag.
  3. The cloud team creates an overlay in their “deploy repo”, referring to the base configuration provided in the “app repo” (using the tag), and adds environment specifics such as nodeSelectors, encrypted secrets, serviceAccounts, etc.
  4. A GitOps agent, Argo CD, polls for changes on the “deploy repo” and, during sync, pulls the Kustomize base configuration for our application from the “app repo” and overlays it with the YAML retrieved from the “deploy repo”. A sketch of such an Argo CD Application follows this list.
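The Argo CD Application mentioned in step 4 could be declared along the lines of the sketch below; the repository URL, paths, names, and namespaces are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                      # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    # The cloud team's deploy repo containing the customer/environment overlay.
    repoURL: https://git.example.com/cloud-team/deployrepo.git
    targetRevision: main
    path: overlays/customer-a
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```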

At Zafin, this flexible combination allows developers to focus on the application and enables the cloud team to configure what is needed to operationalize the deployment in Kubernetes. Now, our code, configuration, and deployment manifests are in sync and under version control!

So… what does the magic look like when creating a Kustomize overlay based on a remote Git repository at a specific tag?

Assume a directory structure for the repository apprepo along the lines of the following sketch (the layout and names are illustrative):
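```text
apprepo/
├── src/                      # application source code, built with Maven
├── Tiltfile                  # drives the local dev loop
└── deploy/
    └── base/                 # environment-agnostic Kustomize base, maintained by dev
        ├── kustomization.yaml
        ├── deployment.yaml
        ├── service.yaml
        ├── configmap.yaml
        └── secret.yaml
```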

…and the file kustomization.yaml in the repository deployrepo, which could look along the lines of the sketch below (the repository URL, tag, and patch are illustrative):
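```yaml
# kustomization.yaml in the deploy repo -- illustrative sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # Kustomize base maintained by developers in the app repo, pinned to a release tag
  # (URL and tag are hypothetical).
  - https://github.com/example-org/apprepo//deploy/base?ref=v1.4.2
  # Environment-specific resources owned by the cloud team.
  - service-account.yaml

patches:
  # Example of an environment-specific adjustment: pin the workload to a node pool.
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: add
        path: /spec/template/spec/nodeSelector
        value:
          agentpool: my-app-pool
```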

As you can see, the deploy repository’s kustomization.yaml refers to the app’s source code repository, which contains the base deployment descriptors (maintained by dev), inheriting all resources for a base deployment such as ConfigMaps, Secrets, ReplicaSets, and Services. The cloud team has the option to patch or add configuration (such as service accounts or ingress) in their repository. Roughly 90% of the configuration and deployment manifests remain under the control of the development team; only the customer and environment specifics are maintained by the cloud team.

Mission accomplished.

Conclusion

Using the process described in this article, we could follow the concept “You build it, you deploy it” in a version-controlled, flexible, secure, and simple way for all parties involved. Developers only need to be concerned with the base configuration and deployment required to run the app in a local (kind) Kubernetes cluster, while the cloud team can extend the deployment manifests on an as-needed basis.

To kickstart this process, we emphasized the importance of educating both development and operations teams, given the introduction of numerous new concepts. The exciting part is that it is possible to demonstrate all these concepts on a local laptop by running kind Kubernetes with a local Docker Registry combined with Tilt and Argo CD.
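As a rough sketch of such a local setup (assuming kind, kubectl, and Tilt are already installed; wiring up the local Docker registry follows the kind documentation and is omitted here):

```sh
# Create a throw-away local Kubernetes cluster (the cluster name is arbitrary).
kind create cluster --name local-demo

# Install Argo CD from its standard install manifests.
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Start the Tilt dev loop: build images and apply the Kustomize base on every change.
tilt up
```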

Unleashing the power of the combination of frameworks opens the doors for the next steps — such as using Argo Waves, Sealed Secrets, and middleware container initialization services — topics our team will cover in future blogs. Stay tuned.

References

  1. SV, S. (2023) Gitops — A Begineer Guide, Medium. Available at: https://medium.com/@surendar.grind/gitops-a-begineer-guide-2ca1fded446e (Accessed: 21 November 2023).
  2. Tilt. Available at: https://tilt.dev/ (Accessed: 11 October 2023).
  3. Argo CD — Declarative GitOps CD for Kubernetes. Available at: https://argo-cd.readthedocs.io/en/stable/ (Accessed: 11 October 2023).
  4. GitHub. Available at: https://github.com/ (Accessed: 11 October 2023).
  5. Kind. Available at: https://kind.sigs.k8s.io/ (Accessed: 11 October 2023).
  6. Porter, B., Zyl, J. van and Lamy, O. (no date) Welcome to Apache Maven, Maven. Available at: https://maven.apache.org/ (Accessed: 11 October 2023).
  7. Kubernetes Native Configuration Management, Kustomize. Available at: https://kustomize.io/ (Accessed: 11 October 2023).
  8. Spring Modulith. Available at: https://spring.io/projects/spring-modulith (Accessed: 11 October 2023).
  9. Wiggins, A. (no date) The twelve-factor app. Available at: https://12factor.net/codebase (Accessed: 11 October 2023).
