Built-in Quality for Helm Charts: unit testing to the rescue!

As we dive deeper and deeper into helm charts and become true bulwarks of YAML writing (YamlOps engineers!), sometimes we need some template logic to introduce different behaviors in our helm charts. When that happens, we gain more options to customize helm charts (and often reuse them), but we also gain another problem: with more power comes more bugs!

To avoid some bugs in the template logic, or even to guarantee that the behavior will not be changed by accident (regression) in the future, we can leverage unit tests just as we do in software (a good point here: as stated in the SRE book from the Google team, "solving operations problems with software has always been central to DevOps" comes to mind as I type).

But wait! You are talking about unit tests and test cases, but some of us are not developers per se; we just use YAML to describe our infrastructure!

In this case, my dear fellow, trust me: when you put even a simple if somewhere in your code or in your YAML template, brace yourself: eventually, at some point in the future, it will bite you back!


To leverage the concept of testing, we are going to hypothesize a scenario: a company wants to add support for a CNCF service mesh called Linkerd. Linkerd itself is not the topic discussed here, but the conditionals and template logic that we are going to use are part of our day-to-day routine of maintaining helm charts: nested ifs, ranges, ORs, and so on.

Taking the above scenario as our starting point, it is stated that Linkerd support is achieved by making the following changes:

  1. the ServiceAccount must have the attribute automountServiceAccountToken set to true, which is the default in Kubernetes, but in our pseudo-company we set it to false by default for security reasons (you really should do it :D);
  2. the deployment that will be part of the service mesh must have the annotation linkerd.io/inject: enabled, which we don't have in any of our resources yet.

With that in mind, we go to:

  1. Create a new helm chart;
  2. Change the Service account to accommodate scenarios with and without linkerd support.

To follow the hands-on topic, you are going to need the following software:

  1. helm 3 (the helm-unittest plugin will be added later).
Well, we start by creating a new helm chart with the command helm create medium, which creates a medium folder with the following structure:
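For recent helm 3 versions, the generated scaffold looks roughly like this (exact files may vary slightly between helm releases):

```
medium/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml
```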

Thinking about item 1, we need to make changes to our templates/serviceaccount.yaml, which is untouched/vanilla at this point:
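The default template generated by helm create looks roughly like this (the helper names are derived from our chart name, medium; your helm version may generate slightly different boilerplate):

```yaml
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "medium.serviceAccountName" . }}
  labels:
    {{- include "medium.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
```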

The basic logic that comes in the default template is very simple: it renders the resource only if .Values.serviceAccount.create evaluates to true.

As stated before, we need to do something to maintain the expected behavior (automountServiceAccountToken: false) and provide a means to activate the token when using Linkerd. We can achieve the behavior above with the following YAML:
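A minimal sketch of the modified template, assuming a top-level linkerd_support value in values.yaml (the value name and helper names are choices for this article, not a Linkerd convention):

```yaml
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "medium.serviceAccountName" . }}
  labels:
    {{- include "medium.labels" . | nindent 4 }}
{{- if .Values.linkerd_support }}
automountServiceAccountToken: true
{{- else }}
automountServiceAccountToken: false
{{- end }}
{{- end }}
```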

Now we have inserted a linkerd_support value that acts as a switch for the configuration: if true, mount the service account token; otherwise, do not. Time to test, then:
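One way to check the rendering manually, from the chart root folder:

```shell
helm template . \
  --set serviceAccount.create=true \
  --set linkerd_support=true
```

The rendered ServiceAccount should contain automountServiceAccountToken: true; dropping the second --set flag should render it as false.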

Very nice, it works as intended: if serviceAccount.create and linkerd_support are true, then a service account with automountServiceAccountToken: true will be created. At this point, we could start changing the deployment to support item #2, but we are going to stop here and think about some points:

What guarantees that in the future the conditional expressed here will keep working as intended, or that a change will not inject some sort of regression into this behavior?

What guarantees that every person who changes the helm chart will run helm template and check for this (and other) results?

If you came this far in this article, maybe you're thinking the same as me: unit tests and continuous testing to the rescue!

Time to write some tests

Before we can dive into the tests, we need one more tool besides helm 3: the helm-unittest plugin.
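One way to install the plugin (the URL below assumes the current helm-unittest GitHub organization; check the project page for the up-to-date location):

```shell
helm plugin install https://github.com/helm-unittest/helm-unittest
```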

With all tooling installed, it's time to work.

First, create a folder called tests inside your helm chart root folder; then, inside this new folder, create a file called linkerd_support_test.yaml with the following content:
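A minimal starting point for the suite (the suite name is illustrative):

```yaml
suite: linkerd support
templates:
  - serviceaccount.yaml
tests: []
```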

We start by defining the suite name, which is the test case name, and the templates that will compose the test; in this case, serviceaccount.yaml, the template that we changed earlier.

Before we can test the changes linked with Linkerd (Uh-oh! Bad joke), we are going to take into consideration some knowledge spread by Uncle Bob and the Boy Scout rule: "Always leave the campground cleaner than you found it".

If we take a look at serviceaccount.yaml, the first if in the code can be tested, and that will be our first test.

For that, we are going to write tests using the AAA (Arrange, Act, Assert) pattern as much as possible. Let's see it in action:
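A sketch of the first test, using helm-unittest's hasDocuments, isKind, and equal assertions (the it description is an illustrative name):

```yaml
suite: linkerd support
templates:
  - serviceaccount.yaml
tests:
  - it: should render a vanilla service account when linkerd is disabled
    set:
      serviceAccount:
        create: true
    asserts:
      - hasDocuments:
          count: 1
      - isKind:
          of: ServiceAccount
      - equal:
          path: automountServiceAccountToken
          value: false
```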

With the test above, we are doing the following:

  1. Defining/Arranging some values for the test (which translates roughly to the helm template . --set serviceAccount.create=true command);
  2. Making some assertions that:

a. there is a document rendered;

b. the Kind of the resource is a ServiceAccount;

  3. Asserting that automountServiceAccountToken is false by default when serviceAccount is enabled but linkerd is not.

Let's run the tests with the helm unittest command:
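From the chart root folder (the output format below is approximate and varies between plugin versions):

```shell
helm unittest .
```

```
### Chart [ medium ] .

 PASS  linkerd support   tests/linkerd_support_test.yaml

Charts:      1 passed, 1 total
Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
```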

Good, our first test is working.

Time to create another test; this time we are going to test that automountServiceAccountToken is activated when linkerd_support is true, by adding another entry (right after the first one; the tests section expects a list of tests):
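A sketch of the second test, appended to the tests list (the description is again illustrative):

```yaml
  - it: should mount the service account token when linkerd_support is true
    set:
      serviceAccount:
        create: true
      linkerd_support: true
    asserts:
      - equal:
          path: automountServiceAccountToken
          value: true
```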

If we run the helm unittest . command again, it works:

A good point here: we need to set both serviceAccount.create and linkerd_support to true to comply with the scenario that we described earlier.

Some questions that occur at this time:

What happens if serviceAccount.create is false but linkerd_support is set to true?

Should we separate the logic for linkerd support in a way that it doesn't interact with other resource logic, like the one in the service account?

Should we separate a resource of each type, augmented or not with linkerd needs?

What if we add support for other service meshes, which often include their own labels and other changes on resources? More “elifs”?

Whatever the answer to the questions above, you can now use tests to prove each of these scenarios.

Additionally, if you have helm and helm-unittest installed in your CI environment, then you can run it continuously with your pipelines by using the same commands that we used during the article.

Final Considerations

As we need to add more checks or behaviors to our helm templates, we should do it more securely by writing unit tests, and maybe we can avoid some nasty bugs here and there.

The situation presented in the scenario can happen at any point of your helm template; in truth, the more checks you put into your YAML (nested ifs, ranges, and/or/xor, and some other Tony Hawk Moves ™️), the higher the chance that you'll hit unpredicted behavior.

Some final thoughts about this article:

  1. If you are nesting too many ifs or some other conditionals, maybe it's time to write a function inside your helpers.tpl file with Single Responsibility in mind, to take out complexity;
  2. What to test and what not? I think the same rule for code can be applied to helm charts: write tests for whatever makes you more comfortable, in terms of safety, to change or refactor a helm chart; avoid testing third-party code (like the other functions inside the helpers.tpl file) unless you make changes to them.
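As an illustration of point 1, the linkerd switch from our template could be extracted into a hypothetical named template in helpers.tpl (the template name medium.automountToken is an assumption for this sketch):

```yaml
{{/*
Hypothetical helper: decides whether the service account token
should be mounted, based on service mesh support.
*/}}
{{- define "medium.automountToken" -}}
{{- if .Values.linkerd_support -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}
```

The service account template would then use automountServiceAccountToken: {{ include "medium.automountToken" . }}, keeping the conditional in a single place with a single responsibility.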

That’s All Folks!

Home Exercise: can you implement item #2 of our scenario covered with unittests? Can you do it in a TDD way?

Father, CNCF Adopter, Python Developer, Lean Practitioner, Husband, Biker.
