Managing API keys and other secrets across a build and deployment pipeline can be a pain. For example, in production, you want to use the production credit card gateway and your production credentials for New Relic. In staging, you want to use the test gateway and a different set of credentials for New Relic. Dev and demo servers have yet more configurations. Keeping track of it all is tedious and error-prone.
I have managed this complexity with Chef scripts and data bags, but even then I messed it up sometimes. Also, if an API key needed updating, we would re-run our Chef cookbooks against the server, which sometimes required taking the server down. This solution was effective, but it seemed inelegant. Since I started at Google, I have been trying to figure out if there is a better way to manage secrets, at least on GCP.
Each Google Compute Engine instance has access to metadata stored on a metadata server. Metadata is stored as key:value pairs and is located at a specific URL. There’s some default metadata that every instance gets (host name, startup and shutdown scripts, and instance ID). Users can also specify custom metadata that an instance may access. Startup scripts or the application can retrieve this information to set environment variables, connect to external services, and otherwise configure the instance.
Metadata is set at the instance or project level. Instances made from the same template (like an auto-scaling managed instance group) will have the same custom metadata. Metadata can be set and viewed with gcloud, via a language-specific API, or in the web console. You can also update the metadata, and the new value is propagated to all instances nearly instantly.
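As a quick sketch of the gcloud workflow (the key names here are illustrative, not from a real project):

```shell
# Set (or update) a project-wide metadata value:
gcloud compute project-info add-metadata \
    --metadata staging-newrelic-key=abc123

# Set (or update) metadata on a single instance:
gcloud compute instances add-metadata my-instance \
    --metadata environment=staging

# View current metadata:
gcloud compute project-info describe
gcloud compute instances describe my-instance
```

Running `add-metadata` again with the same key overwrites the value, which is how you push an updated secret out to running instances.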
Structuring Your Data
The metadata service is an alternative to other means of managing config and secrets. It supports both single values and directory structures. One possible way to organize your data is to have a directory for each environment (staging, production, demo) and then have the same set of keys (with environment specific values) in each directory. Instance specific metadata can be used to figure out which set of project-wide keys the instance should access. That is just one suggestion. There are many other ways to structure data that will work well for your project.
From a running instance, you access the metadata server by making a request to http://metadata.google.internal/computeMetadata/v1/. You need to include the “Metadata-Flavor: Google” header in the request. Here’s an example using curl:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/
You can make the same request from any language with an HTTP client; in Ruby, for example, the standard library’s Net::HTTP works fine.
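Here is a minimal Ruby sketch. The helper name `fetch_metadata` and the key path are my own illustrations, not an official API:

```ruby
require "net/http"
require "uri"

# Base URL for the v1 metadata API (same endpoint as the curl example above).
METADATA_ROOT = "http://metadata.google.internal/computeMetadata/v1".freeze

# Fetch a metadata value. `path` is relative to the v1 root,
# e.g. "project/attributes/my_key". Only works from inside a GCE instance.
def fetch_metadata(path)
  uri = URI("#{METADATA_ROOT}/#{path}")
  req = Net::HTTP::Get.new(uri)
  req["Metadata-Flavor"] = "Google" # required, or the server rejects the request
  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  res.body
end
```

Calling `fetch_metadata("project/attributes/my_key")` returns the raw value as a string, which you can then stash in an environment variable or pass to your app’s configuration.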
Setting and Viewing from the Web
You can view, set, and update project-wide metadata in the web console at https://console.cloud.google.com/compute/metadata. You can update instance metadata from each instance’s page under Compute Engine at https://console.cloud.google.com/compute/instances.
In my experience, changing an API key or another config option often involves restarting the process. Sometimes changes even require provisioning an entirely new VM. Requiring a restart is okay when your system can handle it without customer impact, but that is not always realistic. As much as we wish it were otherwise, sticky sessions and long-lived connections (like WebSockets) are common in web applications.
The metadata server has a wait-for-change feature so that your app can be notified when a value changes and is able to respond without a restart. More information on waiting for updates and using ETags is available in the documentation, but I will explain the basics here.
To use the wait-for-change feature, you make a normal request and append ?wait_for_change=true to the query string. When the value of the key changes, the request returns with the new value. Here’s an example:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/attributes/my_key?wait_for_change=true"
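In application code, the same hanging-GET pattern can run in a background thread so config is refreshed without a restart. This is a sketch under my own assumptions (the helper name `watch_metadata` and the key `my_key` are illustrative):

```ruby
require "net/http"
require "uri"

# Block until the metadata value at `path` changes, then yield the new
# value and wait again. Only works from inside a GCE instance.
def watch_metadata(path)
  uri = URI("http://metadata.google.internal/computeMetadata/v1/" \
            "#{path}?wait_for_change=true")
  loop do
    req = Net::HTTP::Get.new(uri)
    req["Metadata-Flavor"] = "Google"
    # The request hangs until the value changes, so allow a long read timeout.
    res = Net::HTTP.start(uri.host, uri.port, read_timeout: 600) do |http|
      http.request(req)
    end
    yield res.body if res.is_a?(Net::HTTPSuccess)
  end
end

# Example usage: refresh a key in place, no process restart needed.
# Thread.new do
#   watch_metadata("project/attributes/my_key") { |v| MyApp.config.api_key = v }
# end
```

In production you would also want to handle timeouts and pass the ETag from the previous response (as the docs describe) so you never miss an update between requests.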
There are some limitations to the metadata service. When it comes to storing secrets, the biggest issue is that metadata is visible in plain text to any person or machine authenticated in your project. In most scenarios this is fine, but it might be an issue for some applications.
Also, custom metadata is limited to 32,768 bytes of key:value data. This means it isn’t a good solution for storing startup scripts, large JSON blobs, or serialized objects. If you need more space than the metadata server allows you can use Google Cloud Storage to store the information. Then you can store the path to your bucket in the metadata service.
The metadata service has several other useful features that I wasn’t able to get into here, including maintenance notifications. To find out more about the metadata service and different ways to interact with it check out the docs.
Originally published at www.thagomizer.com on October 20, 2016.