From Chaos to Consistency and Simplicity: Using Centralized Helm Charts with Externalized Values

Sava Simic
Published in project44 TechBlog
Jul 17, 2023

Adopting Kubernetes as a container orchestration system has become a common choice for managing and deploying applications. However, templating and deploying the Kubernetes resources for these applications can be a daunting task. This is where Helm, a templating tool used to create Kubernetes resources and often referred to as the "Kubernetes package manager", comes into play: it enables users to manage, package, and deploy applications in a Kubernetes environment with ease. This blog post explores the advantages of a centralized Helm charts system, in which common Helm charts are centrally managed, with the actual values externalized and used to render the manifest.

There are several methods to manage the source files of Helm charts, such as:

  • keep all chart sources in a single, dedicated Git repository (centralized Helm charts)
  • keep each application's chart sources alongside its application code, in the same Git repository.

At project44, we had been using the second method, and we were producing a new manifest version on every application change even when there were no changes in the chart files. Additionally, most of our applications of the same type had no differences between their template files other than application-specific values and secrets files, so we decided to implement a variation on the centralized Helm charts method: we centralized archetypal Helm charts (different sets of templates for different types of applications) and externalized the application-specific files.

Why Centralized Archetypal Helm Charts?

When working with Kubernetes applications, organizations often face challenges managing Helm charts for multiple applications or microservices. This can lead to duplication and inconsistency, making it harder to scale and maintain applications in the long run. A centralized Helm charts system aims to solve these problems by providing a single source of truth for Helm charts, ensuring consistency across applications and simplifying maintenance.

We decided to build our centralized Helm charts repo (CHC) on features that are already incorporated into Helm as first-class citizens. When rendering a manifest from a chart, Helm accepts one or more values files, whose values override any chart values with the same name. Helm also accepts values on the command line, which take precedence over both the chart's defaults and any values files, merged in the same way. Using this built-in layering, we can create a single centralized Helm chart for each manifest archetype. The example below shows how this works.
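
As a quick illustration of that precedence (a hypothetical one-off invocation, not part of our pipeline), a value passed with --set wins over every values file:

helm template foo-service helm-build/foo-service \
  --values helm-build/foo-service/values.yaml \
  --set autoscale.maxReplicas=10
# renders with maxReplicas: 10, regardless of what the values files say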

For example, we added a horizontal pod autoscaling template to an archetype and defined the autoscale object in its values.yaml like this:

autoscale:
  minReplicas: 1
  maxReplicas: 2

but in production we want these values to be higher by default for anyone using this specific archetype, so we add the following to the archetype's values.production.yaml:

autoscale:
  minReplicas: 3
  maxReplicas: 5

also, for our foo-service we want to set maxReplicas to 6, so we define an externalized values.production.yaml:

autoscale:
  maxReplicas: 6

then, when we want to render the manifest, a command like this is executed:

helm template foo-service \
  --values helm-build/foo-service/values.yaml \
  --values helm-build/foo-service/values.production.yaml \
  --values helm-build/foo-service/project-externalized-values/values.yaml \
  --values helm-build/foo-service/project-externalized-values/values.production.yaml \
  helm-build/foo-service

and the produced manifest will contain a horizontal pod autoscaler defined like this:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: foo-service
  namespace: production
  labels:
    ....
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: foo-service
  minReplicas: 3
  maxReplicas: 6
  targetCPUUtilizationPercentage: ...
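
For reference, the archetype template that renders this object might look roughly like the sketch below (a simplified illustration, not the exact template from our CHC repo; the default of 80 for CPU utilization is an assumption):

# templates/hpa.yaml in the archetype chart (illustrative sketch)
{{- if .Values.autoscale }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}   # e.g. set via --namespace production
  labels:
    app: {{ .Release.Name }}            # common labels omitted for brevity
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}
  minReplicas: {{ .Values.autoscale.minReplicas }}
  maxReplicas: {{ .Values.autoscale.maxReplicas }}
  targetCPUUtilizationPercentage: {{ .Values.autoscale.targetCPUUtilizationPercentage | default 80 }}
{{- end }}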

One key aspect of our method is externalizing the values for each application or environment. This approach ensures that the actual values used during deployment are decoupled from the common chart templates; managing values separately from the charts streamlines the process of updating and deploying applications.
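
To make this concrete, the repository layout could look something like this (the directory names are illustrative, not our exact structure):

centralized-helm-charts/            # CHC repo: archetype charts only
  archetypes/
    web-service/
      Chart.yaml
      templates/
      values.yaml                   # archetype defaults
      values.production.yaml        # archetype production defaults

foo-service/                        # application repo: code + externalized values
  src/
  deploy/
    values.yaml                     # app-specific overrides
    values.production.yaml          # app-specific production overrides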

With this approach we wanted to:

  • implement shared Helm charts with defaults, so engineers only need to specify the values they explicitly want to override
  • remove duplicated chart content that had been copy-pasted from one repo to another simply because that was how it had always been done
  • provide a way to update configurations efficiently
  • provide an easier way to upgrade Kubernetes and replace deprecated APIs
  • provide an easier way to upgrade Helm to newer versions
  • provide a clean separation of application code vs. application config
  • fix incorrectly configured objects such as Pod Disruption Budgets, Ingresses, etc.

Implementing a Centralized Helm Charts System

There are a few key steps to successfully implementing a centralized helm charts system:

  • Create a central repository: Establish a central repository to store and manage all common Helm charts. This could be a private Git repository or a dedicated Helm chart repository, and it should contain the common chart templates.
  • Define common chart templates: Create base Helm chart templates that are applicable across multiple applications or environments and include common configurations, such as resources, deployments, and services.
  • Externalize values: For each application or environment, create separate values files that include all the specific configurations required for that application or environment.
  • Integrate with CI/CD pipelines: Integrate your centralized Helm charts system with your CI/CD pipeline so that the manifest is rendered at deploy time with the appropriate values files for each application or environment. The pipeline takes the templates from the centralized Helm chart repository and injects the application-specific values files to produce the fully rendered manifest for the application.

An example CI/CD pipeline works like this:

  • The CI pipeline checks out the centralized Helm chart repository and the application repository, then copies the application-specific archetype chart and the application's externalized values files into a helm-build folder.
  • Helm commands are then executed to produce the manifest file, which is stored in Google Cloud Storage (GCS) for use by our CD pipeline. Persisting manifest files in GCS allows developers to roll back quickly.
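
Sketched as shell steps, the CI side might look roughly like this (repository URLs, paths, the bucket name, and the GIT_SHA variable are placeholders, not our actual setup):

# Assemble the helm-build folder from both repositories
git clone https://example.com/centralized-helm-charts.git chc
git clone https://example.com/foo-service.git app

mkdir -p helm-build/foo-service/project-externalized-values
cp -r chc/archetypes/web-service/. helm-build/foo-service/
cp -r app/deploy/. helm-build/foo-service/project-externalized-values/

# Render the manifest and persist it for the CD pipeline and rollbacks
helm template foo-service \
  --values helm-build/foo-service/values.yaml \
  --values helm-build/foo-service/values.production.yaml \
  --values helm-build/foo-service/project-externalized-values/values.yaml \
  --values helm-build/foo-service/project-externalized-values/values.production.yaml \
  helm-build/foo-service > manifest.yaml

gsutil cp manifest.yaml "gs://example-manifests/foo-service/${GIT_SHA}.yaml"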

While there are a lot of benefits to CHC archetypes with externalized values, the approach also has some disadvantages. As we try to build a CHC repo with sane defaults that all of our teams can use, the templates get more and more packed with logic, which increases complexity. While this comes in handy, since developers can use the defaults or override them when needed, it also leaves our charts open to bugs and logic regressions. To mitigate this, we are looking into unit testing the charts the way we would application code, but that's a topic for another blog!
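
As a taste of what such tests could look like, here is a hypothetical sketch using the community helm-unittest plugin (one option among several, not necessarily the one we will adopt):

# tests/hpa_test.yaml in the archetype chart (hypothetical, helm-unittest format)
suite: autoscale defaults
templates:
  - hpa.yaml
tests:
  - it: applies the archetype production defaults
    values:
      - ../values.production.yaml
    asserts:
      - equal:
          path: spec.minReplicas
          value: 3
      - equal:
          path: spec.maxReplicas
          value: 5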

Conclusion

By implementing a centralized Helm charts system, organizations can improve the consistency, reusability, and security of their Kubernetes applications while simplifying management and maintenance. Externalizing values further enhances the flexibility of Kubernetes deployments, enabling organizations to scale their applications more effectively. Embracing centralized Helm charts is a crucial step toward optimizing your Kubernetes applications and streamlining your deployment process. This approach has been helpful for our DevOps, Developer Experience, and Site Reliability Engineering teams, making it easy to upgrade Kubernetes and API versions and to add new capabilities.
