Why I no longer use Terraform for Templating Kubernetes

Christopher Stobie
Jun 4

One of the most important principles of being an engineer is being able to admit when you are wrong. Well, folks, I was wrong. Some of you may have read my previous blog post about templating k8s with Terraform. Since then, I have come to understand the value of Helm. If you recall, this is a big transition from my earlier sentiment of “I have never understood the value of Helm”.

All that being said, I still have some big issues with Helm. Tiller is a mess and fails more often than not (which makes me even more excited for Tillerless Helm in the new versions). The main selling point of Helm, for me, was adding Helmfile on top of it. Helmfile is a “wrapper,” so to speak, that runs Helm and injects values into your charts by creating temporary values files out of your helmfile definitions (it does much more than this; this is oversimplified). The biggest thing about this tool, in my opinion, is the ability to pass environment variables into it. Since we are able to inject environment variables (something Helm doesn’t allow, for some odd reason), we can write a single helmfile to call our Helm charts and define overrides in the environment per deployment.


Let’s say you have a standard setup of three environments: dev, stage, prod. You could manage three values.yaml files and hard code all your relevant settings in there per env. But this gets annoying to manage, and it is not ideal once you start scaling past 2–3 environments. This is where the value of Helmfile’s environment variable injection comes into play. Instead of dev, stage, and prod values.yaml files, you write your Helm chart with a single values.yaml that defines some sane defaults, then in your helmfile you specify environment-specific overrides as environment variables. That looks something like this:

repositories:
  - name: "stable"
    url: "https://kubernetes-charts.storage.googleapis.com"
  - name: "incubator"
    url: "https://kubernetes-charts-incubator.storage.googleapis.com/"

releases:
  #---------------------------------------------------#
  # ALB Ingress Controller
  #---------------------------------------------------#
  - name: "alb"
    namespace: "kube-system"
    chart: "incubator/aws-alb-ingress-controller"
    version: "0.1.7"
    wait: true
    values:
      - autoDiscoverAwsVpcID: true
      - autoDiscoverAwsRegion: true
      - rbac:
          create: true
      - podAnnotations:
          iam.amazonaws.com/role: {{ requiredEnv "kube2iam_default_role" }}

In this example helmfile, we call the Helm chart “incubator/aws-alb-ingress-controller”. This deployment requires IAM permissions, and if you, like any sane person, are using something like Kube2iam or Kiam, you must pass the IAM role to the ingress controller. Instead of hardcoding values, we set them in the environment and they are dynamically injected into the helmfile per deployment. You can use a tool like aws-env or chamber to easily inject values into your environment from SSM. So your workflow may look something like this:

eval $(AWS_ENV_PATH=/dev/us-east-1/terraform/application1/ AWS_REGION=us-east-1 aws-env --recursive)
helmfile -f stable/alb-ingress.yaml sync

This will go to SSM, get all parameters stored at /dev/us-east-1/terraform/application1, and export them into your environment to be consumed by Helmfile. If you are using something like Terraform (which you should be), you can easily create SSM parameters when creating other resources in Terraform. Suppose you need to reference subnets for your ALB ingress controllers: when you create your networking layer in Terraform, you can create SSM parameters that contain the list of subnets Terraform created. Something like this:

resource "aws_ssm_parameter" "private_subnets" {
  name  = "${local.ssm_prefix}/private_subnets"
  type  = "String"
  value = "${join(",", module.vpc.private_subnets)}"
}resource "aws_ssm_parameter" "public_subnets" {
  name  = "${local.ssm_prefix}/public_subnets"
  type  = "String"
  value = "${join(",", module.vpc.public_subnets)}"
}resource "aws_ssm_parameter" "vpc_cidr" {
  name  = "${local.ssm_prefix}/vpc_cidr"
  type  = "String"
  value = "${module.vpc.vpc_cidr_block}"
}resource "aws_ssm_parameter" "vpc_id" {
  name  = "${local.ssm_prefix}/vpc_id"
  type  = "String"
  value = "${module.vpc.vpc_id}"
}
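
To close the loop, here is a minimal sketch of how a helmfile release might consume those parameters once aws-env has exported them into the environment. The chart name and value keys below are placeholders rather than a real chart, and I am assuming aws-env exposes the variables under the same names Terraform gave the SSM parameters:

releases:
  #---------------------------------------------------#
  # Hypothetical application chart consuming the
  # Terraform-created SSM parameters
  #---------------------------------------------------#
  - name: "my-app"
    namespace: "default"
    chart: "stable/my-app"   # placeholder chart name
    values:
      - vpcID: {{ requiredEnv "vpc_id" }}             # from aws_ssm_parameter.vpc_id
      - subnets: {{ requiredEnv "private_subnets" }}  # from aws_ssm_parameter.private_subnets
      - podAnnotations:
          iam.amazonaws.com/role: {{ requiredEnv "kube2iam_default_role" }}

Nothing in that snippet is environment specific; every value arrives from SSM at deploy time.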

Now you have dynamic, per-environment injection of the values Terraform creates into your Helm charts at deployment. The overall flow looks something like this.

terraform -> ssm <- aws-env -> helmfile -> helm -> k8s

Our end result is zero hard coding. Everything is a parameter. Terraform creates anything of relevance automatically when creating its resources, and Helmfile consumes anything it needs directly from SSM.
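
As a quick illustration, deploying the same helmfile to another environment should just be a matter of pointing aws-env at a different SSM path. The dev command is the one from earlier in the post; the stage path below is a hypothetical example of what the equivalent would look like:

# dev (the command from earlier in the post)
eval $(AWS_ENV_PATH=/dev/us-east-1/terraform/application1/ AWS_REGION=us-east-1 aws-env --recursive)
helmfile -f stable/alb-ingress.yaml sync

# stage just swaps the SSM path (hypothetical path)
eval $(AWS_ENV_PATH=/stage/us-east-1/terraform/application1/ AWS_REGION=us-east-1 aws-env --recursive)
helmfile -f stable/alb-ingress.yaml sync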

One complaint I have with this setup is the inclusion of something like aws-env or chamber. This is not overly fun to type out every time, so we submitted a PR to Helmfile to add native inline resolution of SSM parameters! You can view the PR here: https://github.com/roboll/helmfile/pull/573

In summary, this workflow keeps our setup very clean and DRY. We aren’t hard coding anything anywhere, we are dynamically injecting the proper variables into Helm from SSM, and we are getting automated deployments without much headache. Once Tiller is out of the picture, I may actually put all my eggs into the Helm basket.

I would also like to give a big shout-out to the folks over at CloudPosse, specifically Erik, who provided a good deal of the inspiration for this setup.
