Continuous Delivery of HashiCorp Vault on Google Kubernetes Engine: Introduction

Brett Curtis
Google Cloud - Community
Sep 27, 2018 · 3 min read

This is Part 1 of a series: Index

Overview:

You know the saying, "If it was easy, everyone would do it"? Well, yeah, they do. It's often easy to get a system running. Sometimes too easy, especially in the cloud. That being said, if careful thought isn't put into how you deliver things in the cloud, you will end up with a bunch of complex, unsupportable environments that 'Bob', who left the company a year ago, built. A few years out, your production issues will grow just as fast as your services can scale.

What I aim to do here is talk about a way to support a production system, using some of the things I've learned over the past year working on Google Cloud Platform and Continuous Delivery, and learning from people way smarter than me.

I'm lucky enough to be part of a product team that believes in Continuous Delivery, Continuous Improvement, and the idea that developers should be doing operations and testing. While I am operationally focused, I consider myself a developer on a team of developers. Currently I'm the only development resource focused primarily on the operational aspects of Google Cloud Platform. I attempt to incorporate architecture, Infrastructure as Code (IaC), automation, testing, security, backup, recovery, and observability into the operational delivery pipeline to the best of my ability.

You will see things in this series of posts that are not great; they may even be flat-out wrong. There may be a reason they were done that way, or there might not be. That's OK, and I'll leave you with this quote:

Continuous delivery is not magic. It’s about continuous, daily improvement — the constant discipline of pursuing higher performance by following the heuristic “if it hurts, do it more often, and bring the pain forward.”

It's not surprising I like this quote, seeing as I'm a crazy-ass CrossFitter and GRT.

HashiCorp Vault:

Custom-developed applications following Continuous Delivery practices need to produce an immutable artifact that can be deployed to various environments. That can't work if we bake configuration and secrets into the artifact, nor would it be a good idea to. Instead, these values need to be supplied to the artifact at run time. This is where Vault comes into play, and why I'm building a Vault infrastructure on Google Kubernetes Engine on Google Cloud Platform.
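To make the run-time idea concrete, here's a minimal sketch (the variable names are hypothetical, not from the repo) of an application reading its configuration when it starts rather than at build time. In the Vault-backed setup this series builds, a sidecar or init process would fetch these secrets from Vault and hand them to the app, but the principle is the same: the artifact itself stays environment-agnostic.

```python
import os


def load_runtime_config() -> dict:
    """Read configuration at run time so the same immutable artifact
    works in every environment.

    In the Vault-backed setup, these values would be fetched from Vault
    and exposed to the process instead of being baked into the image.
    """
    return {
        # Hypothetical variable names, for illustration only.
        "db_host": os.environ.get("APP_DB_HOST", "localhost"),
        # Secret: supplied at run time, never baked into the artifact.
        "db_password": os.environ["APP_DB_PASSWORD"],
    }
```

The same container image can then be promoted from dev to staging to production unchanged; only the injected values differ.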

Google Kubernetes Engine is a managed, production-ready environment for deploying containerized applications.

HashiCorp Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing.

I’ve created a fork of Seth Vargo’s GitHub repo vault-on-gke which is actually a fork of Kelsey Hightower’s GitHub repo vault-on-google-kubernetes-engine.

Some of the major differences in my fork are that I'll be using external-dns to synchronize Kubernetes ingress resources with Google Cloud DNS, and cert-manager to automate the management and issuance of TLS certificates from Let's Encrypt. There are obviously some trade-offs here, and you'll have to decide whether they make sense and are secure enough for your use case.
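As a rough sketch of how those two pieces fit together, an Ingress resource can carry annotations that external-dns and cert-manager each watch for. The hostnames and issuer name below are placeholders, and the exact annotation keys and API versions vary with the external-dns and cert-manager releases you deploy; check the versions pinned in the repo.

```yaml
# Hypothetical Ingress sketch; names and annotation keys are illustrative.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vault
  annotations:
    # external-dns watches this and creates the matching record
    # in Google Cloud DNS.
    external-dns.alpha.kubernetes.io/hostname: vault.example.com
    # cert-manager requests and renews a Let's Encrypt certificate
    # through the referenced issuer.
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - vault.example.com
      secretName: vault-tls
  rules:
    - host: vault.example.com
      http:
        paths:
          - backend:
              serviceName: vault
              servicePort: 8200
```

The trade-off mentioned above is that DNS records and certificates are now managed by in-cluster controllers, which is convenient but widens what a cluster compromise can touch.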

A bunch of smaller things as well like:

While this post is focused on Vault, the ideas and code here are reusable for this type of architecture.

Part 2 ->
