A more secure way to call kubectl from Terraform

Ian Tivey · Synechron · Feb 1, 2020

I’ve been a heavy user of Terraform to manage Azure Kubernetes Service (AKS) deployments over the past 12 months, which hasn’t always been plain sailing. The stringent security requirements a bank places on running workloads have resulted in a rather complicated pattern for deploying a compliant Kubernetes cluster: configuring resources in both the Azure and Kubernetes control planes, with a little ARM templating and CLI call-outs to fill in the gaps in Terraform.

The Kubernetes Terraform provider has received some much needed attention recently, but there are still certain resources — in particular the custom Azure resources for deploying things like managed pod identities — which require kubectl with cluster-admin rights to deploy them. Additionally, transposing the Kubernetes YAML provided by Azure (and similarly by AWS for EKS) into Terraform resources is a painful exercise, and even more painful to maintain… it’s much better to just curl down the latest YAML at deployment time, apply it and be done with it.

So how can we make calls out to kubectl from Terraform? This uses AKS as an example, but EKS is similar…

Obtain the cluster credentials

The AKS resource in the Azure Resource Manager Terraform provider exposes the cluster-admin kubeconfig contents as an attribute. You can grab it like this:

kube_admin_config = azurerm_kubernetes_cluster.your_cluster_name.kube_admin_config_raw
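
The examples later in this post assume the kubectl call is wrapped in its own small module, which is where the var.command and var.kube_admin_config references come from. A minimal sketch of that module’s inputs might look like this; the variable names are placeholders of my own, and marking the kubeconfig as sensitive keeps it out of plan output:

# Illustrative module inputs; names are placeholders, not from the original setup.
variable "kube_admin_config" {
  description = "Raw cluster-admin kubeconfig contents"
  type        = string
  sensitive   = true # requires Terraform 0.14 or later
}

variable "command" {
  description = "Arguments to pass to kubectl, e.g. apply -f manifests/"
  type        = string
}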

Passing cluster credentials into kubectl

The kubectl command has a --kubeconfig flag, allowing you to point it at a specific Kubernetes configuration file, but it has no way to accept the credentials directly on the command line.

One option is to have Terraform write a file to a known location for kubectl to use, but leaving cluster-admin credentials lying around on disk isn’t particularly secure: to prevent credential leakage we’d have to clean up afterwards and handle every condition in which the file could be left behind. If we can keep the credentials in memory, we avoid those concerns entirely.

Luckily we can use a cool little feature in bash called process substitution to create a pseudo-file from our raw kubeconfig contents. We also need some in-line base64 encoding/decoding to work around the line breaks in the kubeconfig contents.

So our command looks like this:

kubectl ${var.command} --kubeconfig <(echo ${base64encode(var.kube_admin_config)} | base64 --decode)

Calling kubectl from Terraform

We can use the in-built local-exec provisioner in Terraform to run a command on the local machine. Obviously you’ll need kubectl installed and in the $PATH for this to work.

resource "null_resource" "kubectl" {
provisioner "local-exec" {
command = "kubectl ${var.command} --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = base64encode(var.kubeconfig)
}
}

A couple of things to note:

  • we have to set the interpreter to bash -c. The default interpreter on Linux runs commands via /bin/sh -c, which doesn’t support process substitution.
  • passing the kubeconfig in as an environment variable means its contents don’t get printed to stdout when running Terraform, which stops your cluster-admin credentials from leaking into log files.
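
To see how this hangs together end to end, here’s a hedged sketch of calling such a module; the module path, name and manifest URL are placeholders rather than anything from a real setup:

# Illustrative only: module path and manifest URL are placeholders.
module "deploy_pod_identity" {
  source            = "./modules/kubectl"
  kube_admin_config = azurerm_kubernetes_cluster.your_cluster_name.kube_admin_config_raw
  command           = "apply -f https://example.com/aad-pod-identity-deployment.yaml"
}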

Conclusion

And that’s it — a way to call kubectl from Terraform without leaking your credentials everywhere.

Calling kubectl from Terraform does mean that the Kubernetes resources it creates aren’t under Terraform’s control. For crafting your own deployments, persistent volumes, volume claims and so on, I recommend using the native Kubernetes provider in Terraform as far as you can, which gives you finer control over the resources it creates. Another alternative is to use Helm and the Helm Terraform provider, which allows Terraform to destroy the Kubernetes resources you’ve deployed via Helm.
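
As a rough sketch of the native-provider route, the same AKS attributes can feed the Kubernetes provider directly, so nothing touches disk here either; the volume claim below is purely illustrative:

# Configure the Kubernetes provider from the AKS admin credentials,
# keeping them in memory rather than in a kubeconfig file.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.your_cluster_name.kube_admin_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.your_cluster_name.kube_admin_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.your_cluster_name.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.your_cluster_name.kube_admin_config.0.cluster_ca_certificate)
}

# An example volume claim managed natively, so Terraform can plan,
# update and destroy it like any other resource.
resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "app-data"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "5Gi"
      }
    }
  }
}

Either way, the local-exec pattern above remains a useful escape hatch for the handful of things the providers can’t yet express.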
