Bootstrapping Google Kubernetes Engine after creating it

Daniel Megyesi
Google Cloud - Community
4 min read · Aug 18, 2018

I came across a rather annoying problem: after I create my managed Kubernetes clusters on Google Cloud with Terraform, I would like to provision some default settings to them in the same run. So if I redeploy the entire cluster, I don’t want to run kubectl apply by hand or a separate CI pipeline to create my namespaces, apply my RBAC rules, etc.

This turned out to be a rather frustrating task, experimenting with local-exec, a Terraform-in-Terraform provider, nesting gcloud commands to fetch GKE credentials, building custom kubeconfig files by hand, etc. Finally I found a pretty clean and nice solution; let me share it with you.

When you realize how simple it actually is! (Photo by Mubariz Mehdizadeh on Unsplash)

The solution was actually hidden in plain sight on the Terraform Google provider website, in a data source example.

Using the official K8s provider

In a nutshell:

  • you can dynamically reference your cluster’s IP address and CA certificate (and this beautifully handles resource dependencies, e.g. creating your cluster first, before trying to apply things on it)
  • you can query your gcloud OAuth token (this seemed to be the toughest part to figure out in all the other solutions)

# Query my Terraform service account from GCP
data "google_client_config" "current" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${module.gke_cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(module.gke_cluster.cluster_ca_certificate)}"
  token                  = "${data.google_client_config.current.access_token}"
}
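
For context, the gke_cluster module itself isn’t shown in this post; a minimal sketch of what such a module could expose, assuming a plain google_container_cluster resource (names and arguments below are illustrative, not from my actual setup):

# Hypothetical module internals: only the two outputs referenced above are needed.
resource "google_container_cluster" "this" {
  name               = "my-gke-cluster"
  zone               = "europe-west1-b"
  initial_node_count = 1
}

output "endpoint" {
  value = "${google_container_cluster.this.endpoint}"
}

output "cluster_ca_certificate" {
  value = "${google_container_cluster.this.master_auth.0.cluster_ca_certificate}"
}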

Then you’re ready to provision resources!

resource "kubernetes_namespace" "test" {
metadata {
name = "test"
}
}
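
The same pattern works for the handful of other resources the official provider did support at the time; for example, a purely illustrative config map placed into that namespace:

resource "kubernetes_config_map" "test_defaults" {
  metadata {
    name      = "cluster-defaults"
    namespace = "${kubernetes_namespace.test.metadata.0.name}"
  }

  data {
    environment = "staging"
  }
}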

Multiple clusters in the same Terraform workflow

So what if you have multiple GKE clusters created? Provider aliases to the rescue!

provider "kubernetes" {
alias = "gke1"
load_config_file = false host = "https://${module.gke_cluster1.endpoint}"
cluster_ca_certificate = "${base64decode(module.gke_cluster1.cluster_ca_certificate)}"
token = "${data.google_client_config.current.access_token}"
}
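
A second cluster simply gets its own alias; assuming a gke_cluster2 module with the same outputs (a sketch, not from my actual setup), the block is identical apart from the alias and the module references, and it can share the same access token data source:

provider "kubernetes" {
  alias                  = "gke2"
  load_config_file       = false
  host                   = "https://${module.gke_cluster2.endpoint}"
  cluster_ca_certificate = "${base64decode(module.gke_cluster2.cluster_ca_certificate)}"
  token                  = "${data.google_client_config.current.access_token}"
}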

And when you invoke a resource, you just add an extra provider line:

resource "kubernetes_namespace" "test" {
provider = "kubernetes.gke1"
metadata {
name = "test"
}
}

…and this is probably where you realize how unmaintained the official Kubernetes provider is.

It barely has any of the useful resources you might want to provision. Just to name a few missing ones: RoleBindings, ClusterRoles, Deployments, DaemonSets, Ingresses… basically only the important things are missing! :D

Using the community-maintained K8s provider

Luckily, I bumped into an amazing fork of the official plugin: https://github.com/sl1pm4t/terraform-provider-kubernetes

The setup is rather tricky because you have to include the provider binary by hand, while still downloading the other plugins from the Internet. This works pretty flawlessly when you use custom plugins. But!

When the custom plugin has the same name as the official one… Terraform will simply always download the official one and completely ignore your custom binary.

provider "kubernetes" {
# We use a custom plugin here because the official is very outdated. Please note the -custom suffix, it's important!
# Download from https://github.com/sl1pm4t/terraform-provider-kubernetes/releases and unzip to the current
# TF workspace under terraform.d/plugins/<your architecture> (darwin_amd64, linux_amd64)
version = "1.2.0-custom"

load_config_file = false
host = "https://${module.cluster_gke1.endpoint}"
cluster_ca_certificate = "${base64decode(module.cluster_gke1.cluster_ca_certificate)}"
token = "${data.google_client_config.current.access_token}"
}

The trick with plugin auto discovery

So the magic here is only to

  • download and unzip the custom binary
  • let Terraform auto-discover your plugin

If you follow the GitHub documentation and do manual discovery with terraform init -plugin-dir=<custom plugin folder>, it will break everything that needs to download official providers, e.g. the Google Cloud provider.

So what you can do instead: create a terraform.d/plugins/darwin_amd64/ (or linux_amd64) folder in your current Terraform working directory (don’t forget to add it to .gitignore) and extract the plugin there.

Now, initializing the Kubernetes provider will just completely ignore your folder and download the official one… this is because they have the same name. Even if you pin version ~> 1.12, Terraform will still default to the official one, as they both follow the same release versioning now.

One last touch remains: add the -custom suffix to your requested plugin version. This ensures Terraform falls back to your own binary. (The disadvantage, on the other hand, is that you will have to update the plugin by hand in the future.)

And you’re ready to do a lot more in Kubernetes, using Terraform:

resource "kubernetes_cluster_role_binding" "admin" {
# docs: https://github.com/sl1pm4t/terraform-provider-kubernetes/pull/39
metadata {
name = "tf-admin"
}
role_ref {
name = "cluster-admin"
kind = "ClusterRole"
}
subject {
kind = "User"
# this is actually my TF service account in GCP
name = "terraform@<my gcp project>.iam.gserviceaccount.com"
}
}

Well, there is one disadvantage with this provider at the time of writing this article: there is practically no documentation. You either read the code for the syntax or browse the GitHub issues/pull requests. It’s all there, you just need to do some extra rounds.

Hope this quick guide was at least as useful for you as the solution was for me! Let me know in the comments below if you found any other solutions for dynamically sourcing Kubernetes credentials in the same run that creates the cluster itself.
