Terraform Automation With Argo on Kubernetes: Part 2 (Consul / Vault)

Alexandre Le Mao
InsideBoard Tech Community
7 min read · Sep 29, 2020

At the end of Part 1, we ended up with the following workflow:

In this second part, we will first introduce Vault and the agent sidecar injector, which we use to inject secrets into our pods. Then we will show how we use Consul to templatize our Terraform scripts.

References

As in Part 1, we will not go through the details of installing Consul or Vault. You will find some references below:

Vault and Agent Sidecar Injector

After two years running Kubernetes in production, we realized that one of the biggest management constraints is secrets. Vault has allowed us to manage all our secrets efficiently by letting us version and back them up.

The Vault Agent Injector is provided by the vault-k8s project, which is included in the Vault Helm chart. This functionality alters your pod specifications to include Vault Agent containers.

One very nice feature of this Vault injection is that the Vault Agent containers render Vault secrets using Consul Template markup.

We will see this templating feature later in our WorkflowTemplate resources.
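To make this concrete, here is a minimal sketch of what the injection looks like on a bare pod. The pod name, image and command are hypothetical; it assumes the argo namespace and service account from Part 1, plus the argo/zenko secret and the zenko Kubernetes role that we create below. The annotations are the ones documented for the Vault Agent Injector, and the inline template is plain Consul Template markup:

cat <<EOF | kubectl -n argo apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vault-injection-demo
  annotations:
    # Ask the injector to add a Vault Agent sidecar to this pod
    vault.hashicorp.com/agent-inject: "true"
    # Authenticate against Vault with the Kubernetes role "zenko" (created below)
    vault.hashicorp.com/role: "zenko"
    # Render the argo/zenko KV v2 secret to /vault/secrets/zenko
    vault.hashicorp.com/agent-inject-secret-zenko: "argo/data/zenko"
    # Consul Template markup controlling how the file is rendered
    vault.hashicorp.com/agent-inject-template-zenko: |
      {{- with secret "argo/data/zenko" -}}
      export ZENKO_INSTANCE_ID="{{ .Data.data.instance_id }}"
      export ZENKO_ACCESS_TOKEN="{{ .Data.data.access_token }}"
      {{- end }}
spec:
  serviceAccountName: argo
  containers:
  - name: demo
    image: alpine:3.12
    # The rendered file is shared with every container in the pod
    command: ["sh", "-c", ". /vault/secrets/zenko && sleep 3600"]
EOF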

Let’s start by adding what we need for the rest of this post:

1. Secret engines

In order to store our secrets, we need to set up two KV v2 secrets engines:

kubectl -n default port-forward svc/vault 8200:8200
vault login -tls-skip-verify
vault secrets enable -tls-skip-verify -path=argo kv-v2
vault secrets enable -tls-skip-verify -path=customers kv-v2

All that’s left to do is add your secrets to Vault.

Either via the Vault client:

kubectl -n default port-forward svc/vault 8200:8200
vault login -tls-skip-verify
vault kv put -tls-skip-verify argo/zenko \
access_token=my-orbit-x-auth-token \
instance_id=my-zenko-instance-id
vault kv put -tls-skip-verify argo/ssh \
id_rsa=my-id-private-rsa \
id_rsa.pub=my-id-public-rsa
vault kv put -tls-skip-verify argo/cloudflare \
api_token=my-cf-api-token \
zone_id=my-cf-zone-id
vault kv put -tls-skip-verify argo/pingdom \
username=my-pingdom-username \
password=my-pingdom-password \
api_key=my-pingdom-api-key
vault kv put -tls-skip-verify argo/azure \
ACCESS_KEY=my-azure-access-key \
CLIENT_ID=my-azure-client-id \
CLIENT_SECRET=my-azure-client-secret \
SUBSCRIPTION_ID=my-azure-subscription-id \
TENANT_ID=my-azure-tenant-id

Or via the web UI:

kubectl -n default port-forward svc/vault-ui 8200:8200

Then open https://127.0.0.1:8200/ui/

2. Policies

Let’s now add some policies for our secrets.

kubectl -n default port-forward svc/vault 8200:8200
vault login -tls-skip-verify
vault policy write -tls-skip-verify zenko - <<EOF
path "auth/token/create"
{
capabilities = ["update"]
}
path "argo/data/zenko"
{
capabilities = ["read", "list"]
}
path "customers/data/zenko/accounts/*"
{
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
path "customers/metadata/zenko/accounts/*"
{
capabilities = ["read", "list"]
}
path "customers/data/azure/accounts/*"
{
capabilities = ["read", "list"]
}
path "customers/metadata/azure/accounts/*"
{
capabilities = ["read", "list"]
}
EOF
vault policy write -tls-skip-verify saltstack - <<EOF
path "argo/data/ssh"
{
capabilities = ["read", "list"]
}
EOF

vault policy write -tls-skip-verify terraform - <<EOF
path "auth/token/create"
{
capabilities = ["update"]
}
path "argo/data/cloudflare"
{
capabilities = ["read", "list"]
}
path "argo/data/pingdom"
{
capabilities = ["read", "list"]
}
path "argo/data/azure"
{
capabilities = ["read", "list"]
}
path "customers/data/*"
{
capabilities = ["create", "read", "update", "delete", "list"]
}
EOF

3. Kubernetes authentication method

In order to be able to inject our Vault Agent containers into our pods, we need to set up the Kubernetes authentication method:

kubectl exec -n default -it vault-0 -- /bin/ash
vault login
vault auth enable kubernetes
vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

All that’s left is to add our Kubernetes roles 😄

kubectl -n default port-forward svc/vault 8200:8200
vault login -tls-skip-verify
vault write -tls-skip-verify auth/kubernetes/role/zenko \
bound_service_account_names=argo \
bound_service_account_namespaces=argo \
policies=zenko \
ttl=1h
vault write -tls-skip-verify auth/kubernetes/role/saltstack \
bound_service_account_names=argo \
bound_service_account_namespaces=argo \
policies=saltstack \
ttl=1h
vault write -tls-skip-verify auth/kubernetes/role/terraform \
bound_service_account_names=argo \
bound_service_account_namespaces=argo \
policies=terraform \
ttl=1h

That’s it for this brief introduction to Vault injection. You will see later in this post how to use what we have just set up.

Let’s introduce Consul first!

Consul

In Part 1 we provisioned cloud resources for a single client. But what if you have a thousand customers? You probably don’t want to put all of them in your Git repository.

You want to store them as parameters and not as code!

Let’s templatize our Terraform script 😃

To do so, we will store our customers in a Consul KV store and use consul-templaterb to render our Terraform scripts. We use consul-templaterb rather than consul-template mainly because it is based on Embedded Ruby (ERB), which makes it easy to work around the limitations of Go templating when it comes to JSON, YAML and so on.
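In practice, rendering a template comes down to pointing consul-templaterb at your Consul endpoint and giving it an ERB source file. The snippet below is only a sketch: main.tf.erb is a hypothetical file name, and the flag syntax may vary between versions, so check consul-templaterb --help.

export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
# Render main.tf.erb once into main.tf, then run Terraform as usual
consul-templaterb --once --template "main.tf.erb:main.tf"
terraform init
terraform plan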

Terraform + consul-templaterb Docker Image

We’re just going to add consul-templaterb to our Terraform Docker image from Part 1.

Line 1: ruby image

Line 14: consul-templaterb installation

You can easily adapt it for your use case or use it directly for testing:

docker pull descrepes/terraform:0.12.9-demo-consul-templaterb

Terraform ERB template

Let’s take a look at the main ERB template below:

Line 9: We retrieve all the keys under customers, then loop over them.

Line 33: We store the Azure Storage Account credentials in Vault. We will use them later in our Argo workflow when provisioning the Zenko accounts.

Line 67: zenko-account Kubernetes CRD

Line 116: vmpool Kubernetes CRD

As you can see, it’s quite simple to add conditions and loops to your Terraform scripts with this method 😄

The whole code is available here. We will use this GitHub repository later in our Argo workflow.

KV Store

The next step is to populate your Consul KV store. If you deployed Consul on Kubernetes using the HashiCorp Helm chart, you just have to:

kubectl port-forward svc/consul-admin 8500:80

Next, open http://127.0.0.1:8500/ui/ and add some customers like this:
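If you prefer the command line to the web UI, the Consul CLI works just as well. The key layout below is purely illustrative (the real structure depends on what your ERB template expects): one entry per customer under the customers/ prefix.

export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
# Hypothetical customer entries; adapt the payloads to your template
consul kv put customers/customer1 '{"name": "customer1", "zenko": true}'
consul kv put customers/customer2 '{"name": "customer2", "zenko": false}'
# Check what the ERB template will see
consul kv get -recurse customers/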

All Together!

Now that we’ve introduced Vault and Consul, it’s time to put everything together: Vault, Consul and what we’ve already done in Part 1!

The resources below, already used in Part 1, are also needed for our final workflow:

Here are the Kubernetes resources to add in order to reach the final workflow:

  • Argo Resources EventSource:
  • K8S zenkoaccount CRD:
  • Argo Resource Gateway:
  • Argo zenkoaccount Sensor:
  • Argo zenko WorkflowTemplate:

Lines 69–76: We render a YAML file from the Azure Storage Account credentials provisioned by Terraform; it is then used by the zenkocli to set up locations.

Lines 96-104: We templatize a .aws/credentials file based on the Vault customers/zenko/accounts/* secrets.

  • Argo saltstack WorkflowTemplate:

Lines 20–28 and 46–54: We render the Vault argo/ssh secret to the /home/myuser/.ssh/id_rsa file so that the user myuser can SSH to the Salt server.

  • Argo terraform WorkflowTemplate:

Lines 62 and 117: We share the Vault token with all containers in the pod. That way, on lines 41 and 108, we can set the VAULT_TOKEN environment variable used by the Terraform script.
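For reference, one documented way to share such a token is the injector annotation vault.hashicorp.com/agent-inject-token: "true", which writes the Vault Agent’s token to /vault/secrets/token on the volume shared by all containers (our WorkflowTemplate may wire it slightly differently). A hypothetical Terraform step then only has to do something like:

# Read the token written by the Vault Agent and expose it to Terraform,
# whose Vault provider picks up VAULT_TOKEN from the environment.
export VAULT_TOKEN="$(cat /vault/secrets/token)"
terraform init
terraform apply -auto-approve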

There we are!

Provisioning two customers simultaneously looks like this 😅

Conclusion

This Terraform provisioning workflow is highly scalable, as long as you are not rate-limited by the cloud providers.

Argo not only lets you run data science pipelines; it can also be used in your CI/CD pipelines or to provision your infrastructure, as we have just seen.

Today at InsideBoard, Argo is a central topic. It allows us to bring different teams (Data/Ops/Dev) together around a single technology through which we can easily exchange and share.

This two-part article was intended to share our Argo use case: how we use it to provision a small part of our infrastructure. We hope you found it interesting and that it gives you some ideas to share with the community 😄

You can contribute to the workflow catalog here.

Note: At the time of writing this post, we were using Argo Events v0.15.
You can easily adapt the Kubernetes resources related to Argo Events by following the migration path for v0.17.0.
