Terraform Automation With Argo on Kubernetes: Part 1

Alexandre Le Mao
InsideBoard Tech Community
5 min read · Jul 1, 2020
AI Platform for change management

Background

At InsideBoard we aim to automate everything, from the infrastructure to the product. The infrastructure is fully cloud agnostic, and we deliver dedicated cloud resources for all our customers. We recently moved all our Kubernetes job and application workflows from StackStorm to Argo.

Why did we move?

Simply because Argo is Kubernetes native. It becomes very easy to manage all the workflows in a GitOps declarative way and to use them as part of our micro-services or engines configuration with a tool like Argo CD!

After migrating our app-level jobs and workflows, the infrastructure team asked itself: Why keep two workflow engine systems, one for infrastructure and another for micro-services and engines?
So yes! We moved every infrastructure workflow to Argo, from cloud resource provisioning to Sensu configuration, through SaltStack states and backups 😃

In this first part, we will show you the basics of how we automate Terraform with Argo. In the second part, we will introduce Consul and Vault for security and Terraform templating 😄

This article is not intended to be a tutorial. We especially want to share our use case with you by showing you some provisioning mechanisms related to our infrastructure.

References

We will not go deep into the concepts or installation of all these tools, as their documentation is very well written. You will find some references below:

Build a Terraform Docker Image

At the very first stage you will, of course, need a Terraform Docker image.

Please find below a sample Dockerfile:
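A minimal sketch of such an image, assuming an Alpine base (the base image and the helper packages are assumptions; the Terraform version matches the demo tag below):

```dockerfile
# Minimal Terraform image sketch; base image and helper packages are assumptions
FROM alpine:3.10

ENV TERRAFORM_VERSION=0.12.9

# git is installed so Terraform can fetch modules from Git repositories
RUN apk add --no-cache curl git unzip \
 && curl -sSLo /tmp/terraform.zip \
      "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
 && unzip /tmp/terraform.zip -d /usr/local/bin \
 && rm /tmp/terraform.zip

ENTRYPOINT ["/usr/local/bin/terraform"]
```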

You can easily adapt it to your use case or use it directly for testing:

docker pull descrepes/terraform:0.12.9-demo

Argo Workflow

Great !
Now we can start playing with Argo workflows and Terraform.

The terraform WorkflowTemplate contains four templates: plan, apply, approve and update.

  • The plan template takes as input a Git repository artifact containing all the Terraform scripts. The workflow then uses the Docker image previously built to generate a Terraform plan file. Notice that the output artifact is the entire working directory, containing everything.
  • The apply template takes as input an artifact containing a Terraform plan file.
  • The approve template is only here to suspend the workflow. This allows a user to verify what will be created, updated or destroyed by the plan.
  • The last template, update, combines everything: plan produces a plan artifact that is passed to apply, and approve suspends the workflow for user confirmation before apply runs.
Of course, do not put any secrets in any Git repository. That’s why you should read part 2, where we will introduce Vault 😄
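As a rough sketch, the four templates described above could look like this (the paths, artifact names and container commands are assumptions; the image and repository come from the demo mentioned earlier):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: terraform
spec:
  templates:
  # plan: pull the Terraform scripts from Git and produce a plan file
  - name: plan
    inputs:
      artifacts:
      - name: terraform
        path: /home/terraform
        git:
          repo: https://github.com/descrepes/terraform-argo-demo.git
    container:
      image: descrepes/terraform:0.12.9-demo
      command: [sh, -c]
      args: ["cd /home/terraform && terraform init && terraform plan -out=tfplan"]
    outputs:
      artifacts:
      # export the whole directory, including the generated plan
      - name: terraform-plan
        path: /home/terraform
  # approve: suspend the workflow until a human resumes it
  - name: approve
    suspend: {}
  # apply: consume the plan artifact and apply it
  - name: apply
    inputs:
      artifacts:
      - name: terraform-plan
        path: /home/terraform
    container:
      image: descrepes/terraform:0.12.9-demo
      command: [sh, -c]
      args: ["cd /home/terraform && terraform apply tfplan"]
  # update: chain plan -> approve -> apply
  - name: update
    steps:
    - - name: plan
        template: plan
    - - name: approve
        template: approve
    - - name: apply
        template: apply
        arguments:
          artifacts:
          - name: terraform-plan
            from: "{{steps.plan.outputs.artifacts.terraform-plan}}"
```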

Nice! Now you can submit Terraform workflows from the Argo UI or the Argo CLI.

Argo Events

Triggering a Terraform workflow from a CLI or a UI is nice, but you probably want to trigger your workflow when something has changed.

This is where Argo Events will help you. It is event-driven and will let you trigger workflows on events from a variety of sources.

What you need is an event source, a gateway and a sensor.

In our example, we will use Kafka as the event source. Let’s look at the Kubernetes YAML declarations.

  • Kafka EventSource:
  • Kafka Gateway:
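A sketch of these two resources, following the Argo Events v0.x API that was current at the time (field names may differ in later releases; the service names, namespace and port are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: kafka-event-source
spec:
  type: kafka
  kafka:
    # one named event per topic we listen on
    terraform:
      url: cp-kafka:9092
      topic: terraform
      partition: "0"
---
apiVersion: argoproj.io/v1alpha1
kind: Gateway
metadata:
  name: kafka-gateway
spec:
  type: kafka
  eventSourceRef:
    # refers to the Kafka EventSource above
    name: kafka-event-source
  template:
    serviceAccountName: argo-events-sa
  subscribers:
    http:
    # the URI of the sensor that receives the events
    - "http://kafka-sensor.argo-events.svc:9300/"
```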

In the gateway spec, kafka-event-source refers to the Kafka EventSource declared above, and the subscriber URI refers to the sensor below.

  • Kafka Terraform Sensor:

Here we have only one step in the sensor. This step uses the update template from the terraform WorkflowTemplate we declared previously.
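The sensor can be sketched like this (again following the v0.x Argo Events API; the dependency and trigger names are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: kafka-sensor
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
  # the "terraform" event exposed by the Kafka gateway above
  - name: terraform-dep
    gatewayName: kafka-gateway
    eventName: terraform
  triggers:
  - template:
      name: terraform-workflow-trigger
      k8s:
        group: argoproj.io
        version: v1alpha1
        resource: workflows
        operation: create
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: terraform-update-
            spec:
              entrypoint: main
              templates:
              - name: main
                steps:
                - - name: update
                    # reuse the update template from the terraform WorkflowTemplate
                    templateRef:
                      name: terraform
                      template: update
```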

Once everything is deployed in your k8s cluster, just produce the following message to your terraform topic:

{"action": "update"}

If your Kafka cluster is deployed in k8s as well, you can trigger your Terraform workflow with a simple echo :

echo '{"action": "update"}' | kubectl exec -i cp-kafka-0 -c cp-kafka-broker -- /bin/bash /usr/bin/kafka-console-producer --broker-list 127.0.0.1:9092 --topic terraform

Let’s get deeper

Below is the picture of what we’ve done so far:

Now we will see how we provision our customers’ dedicated virtual machines with Terraform, Argo and a Kubernetes CRD.

In our case, we use SaltStack to bootstrap our virtual machines. We will not detail why we do not use Terraform to provision them; it’s just simpler for us 😄

To do so, we will add new Argo Events resources (an EventSource, a Gateway and a Sensor), add a Kubernetes Custom Resource Definition and modify our Terraform script.

  • K8S EventSource:
  • K8S vmpool CRD:
  • K8S Resource Gateway:
  • K8S Resource vmpool Sensor:
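As an illustration, the vmpool CRD and its resource EventSource could be sketched as follows (the API group, resource names and namespace are assumptions; the apiextensions v1beta1 CRD API was current at the time):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: vmpools.insideboard.io
spec:
  group: insideboard.io
  version: v1
  scope: Namespaced
  names:
    plural: vmpools
    singular: vmpool
    kind: VMPool
---
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: vmpool-event-source
spec:
  type: resource
  resource:
    # fire an event whenever a vmpool object is created
    vmpool:
      namespace: default
      group: insideboard.io
      version: v1
      resource: vmpools
      eventTypes:
      - ADD
```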

Looking at this Sensor, you’ll notice that it has two triggers, one for web and one for db. Each of these triggers uses a saltstack WorkflowTemplate for the salt-cloud and salt-highstate steps.

The last thing we need to do now is update our Terraform code to add the creation of the vmpool Kubernetes resource. To do that, you can replace the repository https://github.com/descrepes/terraform-argo-demo.git with https://github.com/descrepes/terraform-argo-vmpool-demo.git in the terraform WorkflowTemplate declared at the very beginning. Or simply use the code below:
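If you prefer to write it yourself, a minimal sketch of that addition might look like the following. Since the Terraform 0.12-era kubernetes provider could not manage arbitrary custom resources, this sketch shells out to kubectl; the manifest file name is an assumption:

```hcl
# Create the vmpool custom resource from Terraform.
# The 0.12-era kubernetes provider cannot manage arbitrary CRs,
# so we apply a manifest with kubectl via local-exec.
resource "null_resource" "vmpool" {
  # re-run whenever the manifest changes
  triggers = {
    manifest = filemd5("${path.module}/vmpool.yaml")
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/vmpool.yaml"
  }
}
```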

This is where we are now:

All done !

Conclusion

In this first part we declared some very simple workflows and saw only a very small part of what you can do with Argo. In a production environment, you will probably want to add metrics, notifications, monitoring, etc.

This is just the beginning 😅

In a previous post, I talked about Zenko and how it helps us become cloud agnostic. In the second part, which will be published in the coming weeks, we will introduce Consul and Vault to show you how we use them with Argo and Terraform to provision Zenko resources.
