Thanks for the feedback, Cobus!
I understand the concern: replacing every instance in the fleet to make this change, rather than just updating the config via Chef, does seem excessive at first.
I used to feel that redeploying all servers was too painful of a process for a minor change. However, one of the most important lessons of modern DevOps, to quote Continuous Delivery, is “if it hurts, do it more frequently.”
If you force yourself to do deployments very frequently (e.g. 10 times per day), you’ll find the only way to keep up is with significant automation. Setting up this automation for immutable infrastructure takes a fair amount of work, but once you have it in place, your infrastructure is easier to understand and maintain.
We’ve put in that work to get zero-downtime rolling deployments for Auto Scaling Groups and ECS Services in our Infrastructure Packages, so updating all the instances in your fleet is fairly effortless: you just commit changes to your application code (optionally with a tag like “release-stage”) and a CI job does the rest. Under the hood, the CI job runs your tests, builds an immutable artifact of your app (e.g. a Docker image or AMI) with a new version number, and deploys that artifact by automatically updating your Terraform code and running “terraform apply”.
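To give a rough idea of what such a CI job does, here’s a minimal sketch in shell. The specific names (my-org/my-app, the image_version variable, the live/app folder, and the make test target) are illustrative assumptions, not the actual layout of our packages:

```bash
#!/usr/bin/env bash
# Hypothetical CI job: test, package an immutable versioned artifact, deploy via Terraform.
set -e

# Derive a version number for this deployment (here, from the Git commit).
VERSION="release-$(git rev-parse --short HEAD)"

# 1. Run the test suite.
make test

# 2. Build and publish an immutable, versioned artifact (a Docker image in this sketch).
docker build -t my-org/my-app:"$VERSION" .
docker push my-org/my-app:"$VERSION"

# 3. Deploy that exact version by applying the Terraform code that manages the fleet.
cd live/app
terraform init -input=false
terraform apply -input=false -auto-approve -var "image_version=$VERSION"
```

The key design point is that the only thing that changes per deployment is a version number passed to Terraform; everything else is code that’s already in version control.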
We use this same process for changes small and large. It ensures 100% of the information about your infrastructure is stored in version control (i.e. either in the code itself or the commit history). At a glance, you can figure out what’s deployed now and what was deployed in the past. And since all your app code changes are packaged in versioned artifacts, and all your infrastructure code changes are packaged in versioned modules, you can use the exact same process to roll back to any previous version of your app or your infrastructure with ease.
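Because a deployment is just “apply version X,” a rollback is the same operation with an older, known-good version number. A minimal sketch, again with illustrative names:

```bash
# Roll back by re-applying a previous artifact version.
# The variable name (image_version) and the version tag are illustrative.
cd live/app
terraform apply -input=false -auto-approve -var "image_version=release-4f2a9c1"
```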