Continuous Delivery of HashiCorp Vault on Google Kubernetes Engine: Sandbox Development

Brett Curtis
Google Cloud - Community
3 min read · Sep 27, 2018

This is Part 5 of a series: Index

Overview:

One of the foundations of Continuous Delivery is Continuous Integration, and one of the core practices of Continuous Integration is frequent check-ins. Frequent check-ins don't happen on teams that aren't confident committing their code.

Imagine application developers transitioning to operational development, or a team of system administrators just starting to learn about IaC. That is a rather large change, and most people are not comfortable learning out loud. This environment solves that problem by giving developers a fully functioning environment they can code and test on. When I say “test on” I mean test infrastructure changes as well as application changes, and even figure out how the application works.

This is all documented in the README.md on my GitHub and will be maintained and updated going forward. This setup is for IaC development on a Linux machine.

Install Google Cloud SDK:

curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
gcloud components install kubectl beta

Project Setup:

You will need your own sandbox project as described in the initial setup. It plays the same role as the ops-tools-prod project mentioned there, but for your own personal sandbox development of the IaC.

export project=ops-bcurtis-sb
gcloud config set project ${project}

Create a Google Cloud Managed DNS Zone:

gcloud dns managed-zones create obs-lzy-sh \
--description="My Sandbox Domain" --dns-name="obs.lzy.sh"

Register that zone with the managed DNS zone in ops-tools-prod:

gcloud dns record-sets list --zone obs-lzy-sh

NAME        TYPE TTL   DATA
obs.lzy.sh. NS   21600 ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com.
obs.lzy.sh. SOA  21600 ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300

This will show your NS record. Grab the DATA and create an NS record in the lzy-sh managed Cloud DNS zone in the ops-tools-prod project.

gcloud --project ops-tools-prod dns record-sets transaction \
start -z=lzy-sh
gcloud --project ops-tools-prod dns record-sets transaction \
add -z=lzy-sh --name="obs.lzy.sh." --type=NS --ttl=300 \
"ns-cloud-a1.googledomains.com." "ns-cloud-a2.googledomains.com." \
"ns-cloud-a3.googledomains.com." "ns-cloud-a4.googledomains.com."
gcloud --project ops-tools-prod dns record-sets transaction \
execute -z=lzy-sh
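The copy-and-paste of the DATA column above can also be scripted. A minimal sketch, assuming the `record-sets list` output format shown earlier (the commented `gcloud` fetch is illustrative and depends on your zone names):

```shell
# The comma-separated DATA field of the NS record, as shown in the
# `record-sets list` output above.
ns_data="ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com."

# In a live run you could fetch it instead of hard-coding it, e.g.:
# ns_data=$(gcloud dns record-sets list --zone obs-lzy-sh \
#   --name obs.lzy.sh. --type NS --format 'value(rrdatas)')

# Split into one hostname per line, ready to quote into `transaction add`.
echo "$ns_data" | tr ',' '\n'
```

This keeps the delegation step reproducible instead of relying on manual copying.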

Create a Google Cloud Storage Bucket for Terraform remote state:

gsutil mb -p ${project} -c multi_regional -l US \
gs://${project}_tf_state

Also enable object versioning:

gsutil versioning set on gs://${project}_tf_state

Install Terraform:

curl -O https://releases.hashicorp.com/terraform/0.11.8/terraform_0.11.8_linux_amd64.zip
sudo unzip terraform_0.11.8_linux_amd64.zip -d /usr/local/bin

Setup Google Application Default Credentials:

gcloud auth application-default login

Clone Project:

git clone git@github.com:lzysh/ops-gke-vault.git

Initialize Terraform:

cd ops-gke-vault/terraform
terraform init -backend-config="bucket=${project}_tf_state" \
-backend-config="project=${project}"

NOTE: At this point you are set up to use remote state in Terraform.
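For reference, a partial backend configuration in the repo's Terraform code looks something like the sketch below. This is an assumption about the file's contents, not a copy of it; the bucket and project values are deliberately left out and supplied by the -backend-config flags at init time:

```hcl
terraform {
  backend "gcs" {
    # bucket and project are intentionally omitted here and passed
    # via -backend-config when running `terraform init`.
  }
}
```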

Setup Variables:

Create a local.tfvars file and edit it to fit your needs:

cp local.tfvars.EXAMPLE local.tfvars

NOTE: The folder_id variable will be the ID of the Sandbox folder you have the proper IAM roles set on.

Terraform Plan & Apply:

random=$RANDOM
terraform plan -out="plan.out" -var-file="local.tfvars" \
-var="project=ops-vault-${random}-sb" \
-var="host=vault-${random}"
terraform apply "plan.out"
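The $RANDOM suffix is there because GCP project IDs are globally unique, so repeated sandbox builds need distinct names. A quick sketch of what the variables expand to:

```shell
# $RANDOM yields an integer in 0..32767; reusing the same suffix for
# both the project ID and the host keeps the pair tied together and
# avoids collisions with other (or earlier) sandbox builds.
random=$RANDOM
project="ops-vault-${random}-sb"
host="vault-${random}"

echo "$project" "$host"
```

Hold on to the suffix (or the shell session): the same values are needed again for terraform destroy at the end.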

It will take about 5–10 minutes after terraform apply completes for the Vault instance to be accessible: Ingress is doing its thing, DNS is being propagated, and SSL certificates are being issued.
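Rather than guessing when it's up, you can poll. Here's a small generic retry helper; the commented health-check URL is illustrative only (use the real URL from the Terraform output):

```shell
# Retry a command until it succeeds, giving up after N attempts.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 5   # pause between probes
  done
}

# Example (hypothetical hostname): poll Vault's health endpoint until
# Ingress, DNS, and the certificate are all in place.
# wait_for 120 curl -kfs https://vault-12345.obs.lzy.sh/v1/sys/health
```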

The URL and command to decrypt the root token are in the Terraform output.

Install Vault Locally

curl -O https://releases.hashicorp.com/vault/0.11.1/vault_0.11.1_linux_amd64.zip
sudo unzip vault_0.11.1_linux_amd64.zip -d /usr/local/bin

Vault Testing Examples

export VAULT_ADDR="$(terraform output url)"
export VAULT_SKIP_VERIFY=true # Use for testing only
export VAULT_TOKEN="$(decrypted token)"

Enable KV2:

vault kv enable-versioning secret

Put/Get Secret:

vault kv put secret/my_team/pre-prod/api_key \
key=QWsDEr876d6s4wLKcjfLPxxuyRTE

vault kv get secret/my_team/pre-prod/api_key
====== Metadata ======
Key Value
--- -----
created_time 2018-09-16T04:04:50.14260161Z
deletion_time n/a
destroyed false
version 1
=== Data ===
Key Value
--- -----
key QWsDEr876d6s4wLKcjfLPxxuyRTE

Put/Get Multi Value Secret:

vault kv put secret/my_team/pre-prod/db_info url=foo.example.com:35533 \
db_name=users username=admin password=passw0rd

vault kv get secret/my_team/pre-prod/db_info
====== Metadata ======
Key Value
--- -----
created_time 2018-09-16T04:09:55.452868097Z
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
db_name users
password passw0rd
url foo.example.com:35533
username admin
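In scripts you usually want a single value rather than the whole table; `vault kv get -field=...` prints just one field. A sketch against the db_info secret above, with the live Vault calls commented out and the values simulated from the example output:

```shell
# With a live Vault you would run (commented out here):
#   db_url=$(vault kv get -field=url secret/my_team/pre-prod/db_info)
#   db_user=$(vault kv get -field=username secret/my_team/pre-prod/db_info)

# Simulated with the values from the example output above:
db_url="foo.example.com:35533"
db_user="admin"

echo "connecting to ${db_url} as ${db_user}"
```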

Terraform Destroy

terraform destroy -var-file="local.tfvars" \
-var="project=ops-vault-${random}-sb" \
-var="host=vault-${random}"

As a team of developers we can now work locally on cleaning up things like some of this null_resource local-exec code. You don't need to know anything about Terraform, Vault, Kubernetes, etc. to get started: follow the README and go. This is your place to learn.
