Continuous Delivery of HashiCorp Vault on Google Kubernetes Engine: Initial Setup

Brett Curtis
Google Cloud - Community
4 min read · Sep 27, 2018

This is Part 4 of a series: Index

Overview:

This is the actual “back-end” work that supports Continuous Delivery of Vault. It has nothing to do with Vault itself; it is just a way to set up a hierarchy and resources that will get you going.

Set up the Resource Hierarchy:

Create a Google Cloud Platform Resource Hierarchy that looks something like this:

Don’t forget that the project number and project ID are unique across Google Cloud Platform. The project names you choose can be the same, but the IDs won’t line up, and the ID is used for everything.

For the rest of this guide I will be using the above hierarchy, so if you’re following along and decide to do something different, just be mindful of your changes and adjust accordingly.

Operations folder: Used for developers who will contribute to operations code and operational tools.

Shared Infrastructure folder: Used for resources shared across teams.

  • Security folder: Used for security related projects
  • Testing folder: Used for testing resources
  • Tools folder: Used for projects that contain operational tooling resources
  • test-tools-prod project: Used for Google Cloud DNS testing resources — I hope this is a temporary project. Google doesn’t currently support Cloud DNS roles at a resource type lower than project. More details below.

Set up the ops-tools-prod project and resources:

In a bit more detail than above, this project will run Google Cloud DNS where I will synchronize Kubernetes ingress resources using external-dns for production environments. It will run Google Cloud Storage for Terraform remote state. It will also run Google Container Registry for ‘local’ storage of images and for container analysis.

export project=ops-tools-prod
gcloud config set project ${project}

Create a Google Cloud Managed DNS Zone:

gcloud dns managed-zones create lzy-sh \
--description="My Default Domain" --dns-name="lzy.sh"

Register that zone with your domain name registrar:

gcloud dns record-sets list --zone lzy-sh

NAME    TYPE  TTL    DATA
lzy.sh. NS    21600  ns-cloud-c1.googledomains.com.,ns-cloud-c2.googledomains.com.,ns-cloud-c3.googledomains.com.,ns-cloud-c4.googledomains.com.
lzy.sh. SOA   21600  ns-cloud-c1.googledomains.com. cloud-dns-hostmaster.google.com.

That shows your NS record. Grab the DATA and create an NS record with your registrar. From this point on, Google Cloud DNS will manage lzy.sh for me.
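Once the registrar change propagates, the delegation can be checked from anywhere (a quick sketch, assuming `dig` is available):

```shell
# Query the public NS records for lzy.sh; after propagation, the four
# ns-cloud-c*.googledomains.com. servers from the DATA column should come back.
dig +short NS lzy.sh
```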

In other projects that need it (as you’ll see later on) I create a sub-domain zone and add that NS to my top level zone lzy-sh managed by Google Cloud DNS in this project.

Create two Google Cloud Storage Buckets for Terraform remote state:

gsutil mb -p ${project} -c multi_regional -l US \
gs://${project}_tf_state
gsutil mb -p ${project} -c multi_regional -l US \
gs://${project}-pre-prod_tf_state

One is for production Terraform state and the other is for all pre-production Terraform state. The pre-production service account used for automation does not have access to the production bucket.
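That boundary is enforced with bucket-level IAM. A minimal sketch, assuming `roles/storage.objectAdmin` is enough for reading and writing state objects (pick the role that fits your policy), using the service accounts created below:

```shell
project=ops-tools-prod

# Production Terraform service account: production state bucket only.
gsutil iam ch \
  serviceAccount:terraform@${project}.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://${project}_tf_state

# Pre-production Terraform service account: pre-production state bucket only.
gsutil iam ch \
  serviceAccount:pre-prod-terraform@${project}.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://${project}-pre-prod_tf_state
```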

Also enable object versioning:

gsutil versioning set on gs://${project}_tf_state
gsutil versioning set on gs://${project}-pre-prod_tf_state

Create two Google Cloud Service Accounts for automation:

gcloud iam service-accounts create pre-prod-terraform \
--display-name "Pre-production Terraform"
gcloud iam service-accounts create terraform \
--display-name "Production Terraform"

Create the two service account keys and save in a safe place. They will be used for automation later:

gcloud iam service-accounts keys create \
~/pre-prod-terraform-key.json --iam-account \
pre-prod-terraform@${project}.iam.gserviceaccount.com
gcloud iam service-accounts keys create \
~/prod-terraform-key.json --iam-account \
terraform@${project}.iam.gserviceaccount.com
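When the automation later needs these credentials, it can either activate the account in gcloud or point Terraform's Google provider at the key file (a sketch; the paths and account names match the keys created above):

```shell
export project=ops-tools-prod

# Authenticate gcloud as the production Terraform service account.
gcloud auth activate-service-account \
  terraform@${project}.iam.gserviceaccount.com \
  --key-file ~/prod-terraform-key.json

# Terraform's Google provider reads credentials from this variable.
export GOOGLE_APPLICATION_CREDENTIALS=~/prod-terraform-key.json
```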

Grant IAM Roles for service accounts:

gcloud projects add-iam-policy-binding ${project} --member \
serviceAccount:terraform@${project}.iam.gserviceaccount.com \
--role roles/resourcemanager.projectIamAdmin
gcloud projects add-iam-policy-binding test-tools-prod --member \
serviceAccount:pre-prod-terraform@${project}.iam.gserviceaccount.com \
--role roles/resourcemanager.projectIamAdmin

Set up the test-tools-prod project and resources:

Like the ops-tools-prod project above, this project will run Google Cloud DNS, where I will synchronize Kubernetes ingress resources using external-dns for pre-production environments. I did this because the underlying default compute service account for Kubernetes needs the DNS Admin role and, as noted above, the lowest resource type for that role is the project. I do not want to give pre-production service accounts the ability to edit production DNS records. You can probably see how that could be bad.

export project=test-tools-prod
gcloud config set project ${project}

Create a Google Cloud Managed DNS Zone:

gcloud dns managed-zones create test-lzy-sh \
--description="My Test Domain" --dns-name="test.lzy.sh"

Register that zone with the managed DNS zone in ops-tools-prod:

gcloud dns record-sets list --zone test-lzy-sh

NAME         TYPE  TTL    DATA
test.lzy.sh. NS    21600  ns-cloud-d1.googledomains.com.,ns-cloud-d2.googledomains.com.,ns-cloud-d3.googledomains.com.,ns-cloud-d4.googledomains.com.
test.lzy.sh. SOA   21600  ns-cloud-d1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300

This will show your NS record. Grab the DATA and create an NS record in the lzy-sh managed Cloud DNS zone in the ops-tools-prod project:

gcloud --project ops-tools-prod dns record-sets transaction \
start -z=lzy-sh
gcloud --project ops-tools-prod dns record-sets transaction \
add -z=lzy-sh --name="test.lzy.sh." --type=NS --ttl=300 \
"ns-cloud-d1.googledomains.com." "ns-cloud-d2.googledomains.com." \
"ns-cloud-d3.googledomains.com." "ns-cloud-d4.googledomains.com."
gcloud --project ops-tools-prod dns record-sets transaction \
execute -z=lzy-sh

Conclusion:

Now we have a resource hierarchy with projects and resources that support ‘safe’ development and automation from a sandbox environment all the way through production.

Part 5 ->
