Using destroy-time provisioners to make Terraform CLI call-outs more robust
Thanks to Atha Kouroussi for this tip in his response to my previous post — “A more secure way to call kubectl from Terraform”.
We use Azure Managed Disks to host data in our Kubernetes cluster, but Terraform's Kubernetes provider lacks support for them, which means we have to use the kubectl trick described in the previous post to create PersistentVolumes and PersistentVolumeClaims for them.
This was causing problems when destroying our cluster: the destruction of the managed disks would fail because Terraform wasn't deleting the associated PVs and PVCs it had created, so the disks were still bound to the PVs in the cluster.
This is where destroy-time provisioners have proven to be a useful tool.
Here's what our kubectl Terraform module now looks like with a destroy-time provisioner defined:
```hcl
resource "null_resource" "kubectl" {
  provisioner "local-exec" {
    command     = "kubectl ${var.command} --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(var.kubeconfig)
    }
  }

  provisioner "local-exec" {
    when        = "destroy"
    command     = "kubectl ${var.destroy_command} --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(var.kubeconfig)
    }
  }
}
```
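For completeness, the module's input variables would look something like this — a minimal sketch inferred from the references above (the descriptions are mine, not from the real module):

```hcl
# Sketch of the kubectl module's variables.tf, inferred from usage above.
variable "command" {
  description = "kubectl arguments to run on create, e.g. \"apply -f pv.yaml\""
  type        = string
}

variable "destroy_command" {
  description = "kubectl arguments to run on destroy"
  type        = string
}

variable "kubeconfig" {
  description = "Raw kubeconfig contents; base64-encoded before being passed to bash"
  type        = string
}
```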
…which we can call like this (in reality we use Terraform template files to create the Kubernetes YAML, which I’ve omitted here for clarity):
```hcl
module "kube_pv" {
  source          = "../modules/kubectl"
  command         = "apply -f pv.yaml"
  destroy_command = "delete -f pv.yaml"
  kubeconfig      = azurerm_kubernetes_cluster.k8s.kube_admin_config_raw
}
```
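The template-file step we actually use might be sketched like this — the template path, managed-disk resource, and variable names here are hypothetical, just to illustrate the pattern of rendering the manifest before handing it to the module:

```hcl
# Hypothetical sketch: render pv.yaml from a template, then apply it
# via the kubectl module. The template path and disk resource names
# are illustrative, not from the real configuration.
resource "local_file" "pv_manifest" {
  content = templatefile("${path.module}/templates/pv.yaml.tpl", {
    disk_name = azurerm_managed_disk.data.name
    disk_uri  = azurerm_managed_disk.data.id
  })
  filename = "${path.module}/pv.yaml"
}

module "kube_pv" {
  source          = "../modules/kubectl"
  command         = "apply -f ${local_file.pv_manifest.filename}"
  destroy_command = "delete -f ${local_file.pv_manifest.filename}"
  kubeconfig      = azurerm_kubernetes_cluster.k8s.kube_admin_config_raw
}
```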
We might not need a destroy-time command for every kubectl command we use, but if you pass an empty string in the destroy_command variable, kubectl will print error messages to Terraform's stdout and Terraform will finish with a non-zero exit code, interfering with your error handling. To suppress this you can use a conditional expression in the command field:
```hcl
resource "null_resource" "kubectl" {
  provisioner "local-exec" {
    command     = "kubectl ${var.command} --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(var.kubeconfig)
    }
  }

  provisioner "local-exec" {
    when        = "destroy"
    command     = var.destroy_command == "" ? "echo no_destroy_command" : "kubectl ${var.destroy_command} --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(var.kubeconfig)
    }
  }
}
```
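With the conditional in place, a call that needs no cleanup can simply pass an empty destroy command and fall through to the harmless echo. For example (a hypothetical invocation, not one of our real modules):

```hcl
# Hypothetical: no cleanup needed, so destroy_command is left empty and
# the destroy-time provisioner runs "echo no_destroy_command" instead.
module "kube_annotate" {
  source          = "../modules/kubectl"
  command         = "annotate node node-1 example.com/owner=platform"
  destroy_command = ""
  kubeconfig      = azurerm_kubernetes_cluster.k8s.kube_admin_config_raw
}
```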
Now our cluster deletion works perfectly, without leaving a trace!