Continuous Delivery Best Practices For Infrastructure As Code
In this video on my YouTube channel, I explain that to release software smoothly and avoid time wasted troubleshooting infrastructure issues, you might consider automating your infrastructure as code.
Achieving continuous delivery requires thinking about how technologies like Chef, Puppet, Ansible, Docker, and the like might serve this need for your team and organization.
The first step is to determine an approach for using images. On premises, you might create one that can be used when initializing a virtual machine. In the cloud, this image might be stored somewhere like Docker Hub, or in Windows Azure or Amazon Web Services (as an AMI, the image an EC2 instance launches from).
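As a minimal sketch, publishing a versioned image could look like the script below. This assumes Docker; the repository name `myteam/base-image` and the date-based tag scheme are my own placeholders, not anything from the video:

```shell
#!/bin/sh
# Hypothetical image build-and-push step.
# The image name and tag scheme here are assumptions for illustration.
IMAGE_NAME="myteam/base-image"
VERSION="$(date +%Y.%m.%d)"

build_and_push() {
  # Build the image from the Dockerfile in the current directory.
  docker build -t "${IMAGE_NAME}:${VERSION}" .
  # Move the "latest" tag too, so new nodes pick up this image by default.
  docker tag "${IMAGE_NAME}:${VERSION}" "${IMAGE_NAME}:latest"
  docker push "${IMAGE_NAME}:${VERSION}"
  docker push "${IMAGE_NAME}:latest"
}
```

Tagging every image with a version (not just "latest") is what makes it possible to know exactly which image a given node was stood up from.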
Jez Humble refers to computing resources and environments whose infrastructure has not been controlled as “works of art”. I love this term; it accurately describes the confusion around how a node of infrastructure got into its current state.
To avoid this, we should use infrastructure provisioning tools against the computing nodes we “stand up” from an image. These tools will run a series of steps against a node to put it in a given state.
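Whatever tool performs it, a provisioning run boils down to a series of steps like these, sketched here in shell. The package, user, and config names are hypothetical:

```shell
#!/bin/sh
# Sketch of the steps a provisioning tool runs against a freshly
# stood-up node. Package, user, and service names are hypothetical.
provision_node() {
  apt-get update                       # refresh package metadata
  apt-get install -y nginx             # install the web server
  useradd --system appuser             # create the service account
  cp ./config/nginx.conf /etc/nginx/   # lay down configuration
  systemctl enable --now nginx         # start the service on boot
}
```

The value of a tool like Chef or Ansible is that these steps live in version control and run the same way on every node, instead of being typed by hand.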
An important thing to consider when embarking on the journey towards infrastructure automation is whether the company has the discipline to avoid manual changes. If this is a new concept, a tool like Puppet, Ansible, Chef, or the like can help by checking whether a node is in a given state and only applying the necessary changes.
These checks cost additional processing power (and hence time), however, so in a company that’s more mature in how it uses infrastructure as code and automated deployment, it may make more sense to use something that doesn’t perform these checks — instead assuming nodes are already in a known state.
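The check-before-apply pattern those tools use can be sketched in a few lines of shell. This example only appends a configuration line if it is not already present, so repeated runs leave the file unchanged (the file path and line passed in are placeholders):

```shell
#!/bin/sh
# Idempotent "ensure" step: check the current state first,
# and only apply the change if the node isn't already there.
ensure_line() {
  file="$1"; line="$2"
  if ! grep -qxF "$line" "$file" 2>/dev/null; then
    echo "$line" >> "$file"
  fi
}
```

Puppet and Chef apply this same idea at a much larger scale (packages, services, users), but the cost is identical: every run pays for the check, which is the overhead the paragraph above describes.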
A common practice is A/B releasing (also known as blue-green deployment), which can be used to switch between an active and a passive set of nodes in production, or any other environment. This makes deployment faster and also makes rolling back a problematic deployment easier.
A/B releasing is different from A/B testing — the former helps with deployment, the latter with validating that changes had the business impact we theorized. I’ll talk about A/B testing more in a future video.
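One minimal way to implement the active/passive switch is a symlink that the web server or load balancer resolves: flipping it activates the other set of nodes, and rolling back is just flipping it again. The directory layout below is a hypothetical example:

```shell
#!/bin/sh
# Flip traffic between two release slots by repointing a "current"
# symlink. Rolling back is simply running the flip a second time.
flip_release() {
  root="$1"                                  # e.g. /var/www/myapp
  if [ "$(readlink "$root/current")" = "$root/slot-a" ]; then
    target="$root/slot-b"
  else
    target="$root/slot-a"
  fi
  ln -sfn "$target" "$root/current"          # atomic-enough switch
  echo "now serving $target"
}
```

Real A/B releasing usually happens at the load balancer rather than the filesystem, but the principle — two complete sets of nodes and a cheap switch between them — is the same.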
In this video I recommend using PowerShell, Bash, or a similar technology as the “trigger” of any automated deployment or infrastructure provisioning process. This avoids vendor lock-in and provides the most future-proof flexibility in combining the many tools vendors provide to make changes as the product evolves. Though there are fantastic tools such as Terraform with a wide reach — able to make changes in both cloud and on-premises environments, in Amazon Web Services (AWS) as well as Windows Azure — I’ve yet to find one tool that can do everything.
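A trigger script in this spirit might look like the following. The specific tool invocations are placeholders for whatever your pipeline actually uses; the point is that the orchestration lives in plain shell, so swapping a vendor tool out later changes one line, not the whole pipeline:

```shell
#!/bin/sh
# Deployment "trigger": plain shell orchestrating vendor tools.
# The inventory layout and variable names below are assumptions.
deploy() {
  environment="$1"
  # Provision or update cloud resources for this environment.
  terraform apply -auto-approve -var "env=${environment}"
  # Configure the resulting nodes.
  ansible-playbook -i "inventories/${environment}" site.yml
}
```

A usage example would be `deploy staging` from a CI job — the CI system only needs to know how to run a shell script, not how to drive each individual tool.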
Regardless of the technologies you choose, select a language or syntax that will be most comfortable for your team’s existing skillset. For example, if those who will automate infrastructure are object-oriented programmers — a language like C#, Ruby, or Python may be a good fit. On a team with a more “traditional operations” set of skills — Bash or PowerShell is probably going to be more approachable.
Watch my video on Isolating Changes From Your Customers here.
Watch my video on Configuration Management here.
Watch my video on Continuous Delivery here.
Originally published at Jayme Edwards.