How Tenable Uses Helm to Template a Microservice Stack

Jonathan Lynch
Tenable TechBlog
Oct 11, 2018

Tenable uses a complex Ansible-based deployment system to give its developers maximum flexibility in maintaining all the Kubernetes objects needed to run its cloud platform, Tenable.io. This high-touch, feedback-heavy model was becoming burdensome as our sites continued to increase in size and number, so we needed to find another way. Helm was the solution we chose.

Update: You can find the follow-up to this article here.

Quick intro to Helm

Because Kubernetes objects are largely boilerplate, using a templating language to produce them makes sense. Helm provides a templating language based on Go templates, with the added benefit of being able to directly insert the templated output into a running Kubernetes cluster. When you package a Helm template together with a values file to feed into that template, it’s called a Helm Chart. A simple Helm Chart might work like this:
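Something like the following, where the chart layout and names are hypothetical and the Deployment is trimmed for illustration:

```yaml
# values.yaml (hypothetical)
image:
  version: "2.4"
```

```yaml
# templates/deployment.yaml (hypothetical, trimmed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  template:
    spec:
      containers:
        - name: webapp
          image: "httpd:{{ .Values.image.version }}"
```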

As you can see, the template inserted the image version from the values.yaml. A typical chart has multiple templates reading in from a single top-level values.yaml. For example, a webapp chart might generate a Deployment with mysql and httpd containers, a couple ConfigMaps to configure them, and an Ingress for external access. Each one of those Kubernetes objects would have a corresponding template under the templates folder, reading in key configuration items from the values.yaml (ssl certificates, mysql credentials, listen port, etc).

Tenable’s usage of Helm Charts is far from typical, though. Helm actually contains a powerful templating language that allows for logic, looping, functions, and more. In order to manage a fleet of microservices without copious amounts of duplication, we had to template our templates. We did this by taking advantage of a Helm convention called helper templates and turning it up to 11.

Easy example: Horizontal Pod Autoscaling

At the top level of our “vuln-management” chart, we have the following templates:
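A representative set of files is sketched below; only _deployment.tpl and _helpers.tpl are names taken from this article, and the rest are hypothetical stand-ins:

```
templates/
  _deployment.tpl
  _hpa.tpl
  _service.tpl
  _configmap.tpl
  _ingress.tpl
  _helpers.tpl
```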

Each of these represents one of the Kubernetes objects we use, while _helpers.tpl contains other shared functions. The leading underscores instruct Helm to process these files without rendering them. To use one of the simplest of these as an example, we have the HPA template:
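A sketch of what that helper might look like, assuming a hypothetical values schema with minReplicas, maxReplicas, and a CPU target:

```yaml
{{/* Sketch of the shared HPA helper; the .Values.autoscale key names are assumptions */}}
{{- define "deployment.autoscale" -}}
{{- if .Values.autoscale }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Chart.Name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Chart.Name }}
  minReplicas: {{ .Values.autoscale.minReplicas }}
  maxReplicas: {{ .Values.autoscale.maxReplicas }}
  targetCPUUtilizationPercentage: {{ .Values.autoscale.targetCPU }}
{{- end }}
{{- end -}}
```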

Like in the intro, we’ve defined a Kubernetes object template that expects to be driven by a stanza in values.yaml, an example of which you can see below.
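A matching stanza, using the same assumed keys as the sketch above, could be as simple as:

```yaml
autoscale:
  minReplicas: 2
  maxReplicas: 10
  targetCPU: 75
```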

To really make this powerful, we take advantage of another Helm feature called subcharts. The top-level chart is just a meta-chart that contains other charts, all of which inherit the same set of templates. This way, the Helm portion of each microservice is reduced to little more than a values.yaml inheriting the shared structure of the design. Take a look at this sample directory structure, trimmed to include a couple of subcharts that use the autoscale function:
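Subchart names other than elasticsearch are hypothetical here, but the shape would be roughly:

```
vuln-management/
  Chart.yaml
  values.yaml
  templates/
    _deployment.tpl
    _hpa.tpl
    _helpers.tpl
  charts/
    webapp/
      Chart.yaml
      values.yaml
      templates/
        autoscale.yaml
    elasticsearch/
      Chart.yaml
      values.yaml
      templates/
        autoscale.yaml
        configMap.yaml
```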

The top-level Chart.yaml defines this chart as the “vuln-management” chart, which delivers and maintains all the configuration necessary for Tenable’s flagship SaaS product. Beneath the charts/ subdirectory is a flat hierarchy that defines each of the microservices as discrete units, each with their own Chart.yaml that provides metadata and values.yaml that provides configuration. The top-level values.yaml provides product-wide defaults that can be overridden on a per-microservice (or per-customer) basis. Finally, if you look at the contents of templates/autoscale.yaml, you’d see just a single line:
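Plausibly something like this (whether it uses include or the older template action isn’t critical):

```yaml
{{ include "deployment.autoscale" . }}
```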

When rendered, this instructs Helm to execute the deployment.autoscale template we defined in the top-level templates, passing in the values from this subchart.

Advanced example: Deployment volumeMounts

The HPA was the simplest example, but Deployments are where we really take advantage of Helm’s advanced features. Our actual _deployment.tpl is over 300 lines (and growing), but I want to share an example that shows off how much Helm can simplify writing Kubernetes YAML files.

The Elasticsearch Deployment we run in our stack needs two volumeMounts defined: a ConfigMap so we can establish its configuration, and a HostPath so we can persist its data across Pod restarts. The same section of Helm code handles both of these. First I’ll show you the code as a whole, and then break it down into parts.
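What follows is a sketch reconstructed from the description below rather than the verbatim template; the values keys (volumes, configMaps, secrets, mountPath, and so on) are assumed names:

```yaml
{{- if or .Values.volumes .Values.configMaps .Values.secrets }}
        volumeMounts:
        {{- if .Values.volumes }}
        {{- range $volume := .Values.volumes }}
        - name: {{ printf "%s%s" $.Chart.Name $volume.mountPath | replace "/" "-" | replace "." "-" }}
          mountPath: {{ $volume.mountPath }}
        {{- end }}
        {{- end }}
        {{- if .Values.configMaps }}
        {{- range $configMap := .Values.configMaps }}
        - name: {{ printf "%s%s" $.Chart.Name $configMap.mountPath | replace "/" "-" | replace "." "-" }}
          mountPath: {{ $configMap.mountPath }}
        {{- end }}
        {{- end }}
        {{- if .Values.secrets }}
        {{- range $secret := .Values.secrets }}
        - name: {{ printf "%s%s" $.Chart.Name $secret.mountPath | replace "/" "-" | replace "." "-" }}
          mountPath: {{ $secret.mountPath }}
        {{- end }}
        {{- end }}
{{- end }}
```

The matching volumes: section, shown further down, follows the same pattern.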

First, take a look at the very first line and see that we support mounting Volumes (directories), ConfigMaps, or Secrets. If a particular chart needs none of these, then this entire section is silently left out. Assuming at least one of them appears, however, processing begins. You may also have noticed in this line that Helm uses prefix notation. Like Lisp, comparators are just another function preceding the arguments being compared. The next interesting bit we have is:
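In sketch form, a check for the key followed by the loop:

```yaml
        {{- if .Values.volumes }}
        {{- range $volume := .Values.volumes }}
```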

Here, we check the values.yaml for a “volumes” key. If found, we use the range function to loop through it as a list. Assigning each list item to the $volume variable means we can access its keys like so:
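For example, pulling out an assumed mountPath key:

```yaml
          mountPath: {{ $volume.mountPath }}
```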

In the spirit of abstracting away as much boilerplate as possible, we got a bit fancy with the name. The name is merely a symbol for linking the volume to its matching volumeMount, so why even require the chart maintainer to specify it? By prepending the chart name and replacing all dots and slashes in the directory path with dashes, we are able to automatically generate a consistent, unique, Kubernetes-friendly name for the volume mount. We use this same trick to derive the name for the ConfigMaps and Secrets as well.
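One way to express that derivation as a sketch (the production version would more likely live in a named helper in _helpers.tpl):

```yaml
          name: {{ printf "%s%s" $.Chart.Name $volume.mountPath | replace "/" "-" | replace "." "-" }}
```

With a chart named elasticsearch and a mountPath of /usr/share/elasticsearch/data, this would come out as elasticsearch-usr-share-elasticsearch-data.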

Now that we have the Deployment looking to mount these internal paths, we have to create matching volume definitions to tell Kubernetes where to fetch the resources from.
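A sketch of that volumes section, including the hostPath-or-PVC behavior described next (it assumes a matching PersistentVolumeClaim is templated elsewhere in the chart):

```yaml
      volumes:
      {{- if .Values.volumes }}
      {{- range $volume := .Values.volumes }}
      - name: {{ printf "%s%s" $.Chart.Name $volume.mountPath | replace "/" "-" | replace "." "-" }}
        {{- if $volume.hostPath }}
        hostPath:
          path: {{ $volume.hostPath }}
        {{- else }}
        persistentVolumeClaim:
          claimName: {{ printf "%s%s" $.Chart.Name $volume.mountPath | replace "/" "-" | replace "." "-" }}
        {{- end }}
      {{- end }}
      {{- end }}
```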

We use the exact same function to create the name, and then insert the hostPath defined in the list item. The above section contains another interesting trick: If the hostPath is omitted, then this volume will automatically be considered a PVC instead, dynamically allocating persistent storage as EBS volumes and mounting it to the container. In this way, we provide a unified syntax for directory mounts rather than clutter the values files with nearly identical stanzas.

The subsequent ConfigMap section follows the same pattern, although we have to change the syntax slightly now that we’re extracting a filename from the chart’s ConfigMap:
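Again as a sketch, with an assumed files list under each configMaps entry:

```yaml
      {{- if .Values.configMaps }}
      {{- range $configMap := .Values.configMaps }}
      - name: {{ printf "%s%s" $.Chart.Name $configMap.mountPath | replace "/" "-" | replace "." "-" }}
        configMap:
          name: {{ $.Values.service.name }}
          items:
          {{- range $file := $configMap.files }}
          - key: {{ $file.name }}
            path: {{ $file.name }}
            mode: {{ default 0644 $file.mode }}
          {{- end }}
      {{- end }}
      {{- end }}
```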

Notice we use Helm’s {{ default }} function so that the chart maintainer doesn’t even need to specify file permissions unless they’re doing something unusual. We also set configMap.name to service.name so that all ConfigMaps for a given service can be grouped together under a single spec file. To match the above configMap declaration, our Elasticsearch chart’s templates/configMap.yaml contains merely:

All the ConfigMap preamble is pulled in by that one include, and the developer is free to specify as many files as they want. A convention built into this design is that the key in the ConfigMap must match the filename that will end up in the container, but we could always add a way to override this behavior if it ever becomes a problem. The YAGNI principle is well-applied here.

The Secret mounts behave slightly differently than the ConfigMap mounts. While our configuration files are always chart-specific, we have a few Secrets that need to be shared across several charts. As such, we require the secret name to be specified rather than having it be automatically generated.
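The corresponding volume sketch, with the Secret’s name read directly from values rather than derived:

```yaml
      {{- if .Values.secrets }}
      {{- range $secret := .Values.secrets }}
      - name: {{ printf "%s%s" $.Chart.Name $secret.mountPath | replace "/" "-" | replace "." "-" }}
        secret:
          secretName: {{ $secret.name }}
      {{- end }}
      {{- end }}
```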

Finally, all of this is driven by this simple little section in the chart’s values.yaml:
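Along the lines of the following, where the paths and names are illustrative:

```yaml
volumes:
  - mountPath: /usr/share/elasticsearch/data
  - mountPath: /var/log/elasticsearch
    hostPath: /var/log/elasticsearch
configMaps:
  - mountPath: /usr/share/elasticsearch/config
    files:
      - name: elasticsearch.yml
secrets:
  - name: shared-platform-certs
    mountPath: /etc/elasticsearch/certs
```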

Notice how we’ve defined two separate volumes, and that a hostPath is only specified for the second one. As discussed earlier, this means that the data directory will be persisted as dynamically generated storage in our default StorageClass, while the logs directory will be exported out to the EC2 instance where it can be picked up by other logging tools.

It gets better

As an application developer, all I really want to do is say “I want this directory to be persistent” or “I need this much RAM” or “run this command when my Pod stops”. Helm lets us do all that with the ease of Docker’s command-line arguments, while also providing the manageability of Kubernetes, the consistency of templates, and the safety of a format that can easily be checked into version control. To the extent we’ve taken Helm, it works like an advanced version of Docker Compose where we can define and use our own API.

There’s so much more to cover, like how we package and deliver the charts, how we take into account customer-level variations, and how we orchestrate intra-cluster dependencies and enforce installation and startup order… but that’ll have to wait. If you’ve made it this far, dear reader, I thank you for your attention, and I hope you feel inspired to see what Helm can do for you!
