In a previous blog post, “There is no cloud, It’s just someone else’s computer”, we talked about how using the Cloud and DevOps creates efficiencies in the seven stages of the software delivery process, ultimately reducing the time from Asks-to-Gets.

In this second post we're going to focus on the most important of these stages, one that is at the core of what we do in a DevOps team: provisioning.

What do we mean by Provisioning?

The provisioning tool creates the network and servers that we need to run our applications. Servers without an application aren't much use, so we can use the provisioning tool to initiate the app's deployment onto the server. Once the application is deployed, we'll probably want to ensure that it is running as expected, so we use the provisioning tool to install the monitoring and logging services required to maintain our application.

Provisioning our infrastructure using Infrastructure-as-Code best practices gives us complete transparency of the current state of our system. This in itself provides a high level of confidence in how secure the provisioned infrastructure is.

The fact that we write our infrastructure as code also means that we’ll be building, testing and then turning that code into a releasable package. Our infrastructure goes through the same release process as the applications that run on it! As you can see, provisioning really is at the heart of DevOps.

What are we Provisioning?

A greenfield project typically starts with nothing. First we provision the network in which our servers will run. This includes virtual subnets, routing, firewalls, VPNs and access lists.

Next we provision the data stores (file storage, databases) that our applications will use. These can either be native cloud services which offer a managed solution, or custom, bespoke database deployments.

After this we provision the cloud servers themselves, including any required load balancing and rules which determine how the servers can scale with load.

The final part of the server provisioning includes a hook into the method that will configure the servers in the way that we want and ultimately deploy the required version of the application that we want to run.

How do we Provision?

There are a few ways that we can provision Cloud servers…

1. Use the Cloud provider’s interface.

Pros:
- Low barrier to entry
- Easy to get going

Cons:
- Slow to action
- Hard to repeat consistently
- Can't automate

Examples: AWS’s web console.

2. Use the Cloud provider’s command line tools.

Pros:
- Quicker than using the web interface
- More consistent than using the web interface

Cons:
- Creating multiple resources still requires manual effort

Examples: AWS CLI, gcloud compute.

3. Automate the use of the command line tools in a script.

Pros:
- Infrastructure is defined in code
- Quicker than using the command line tools on their own
- More consistent than using the command line tools on their own

Cons:
- Effort required to write and maintain the script
- Careful consideration is needed over resource dependencies and creation order
- Resource IDs and other system state need to be maintained

Examples: Bash, PowerShell, Python.
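To make option 3 concrete, here is a minimal sketch of such a script in Python. The resource names, IDs and CIDR blocks are placeholders, and by default it only prints the AWS CLI commands it would run; the point is the bookkeeping the table above warns about, namely that commands must be issued in dependency order and that IDs returned by earlier steps have to be threaded into later ones by hand.

```python
import subprocess

def run(args, dry_run=True):
    """Execute one CLI command, or skip it entirely in a dry run."""
    if dry_run:
        return ""
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

def provision(dry_run=True):
    """Issue the provisioning commands in dependency order; returns the commands issued."""
    issued = []
    def step(args):
        issued.append(args)
        run(args, dry_run)
    # 1. Network first: servers cannot exist without a subnet to live in.
    step(["aws", "ec2", "create-vpc", "--cidr-block", "10.0.0.0/16"])
    # A real script would parse the new VPC's ID out of the JSON response
    # and thread it into the next call; here we use a placeholder.
    step(["aws", "ec2", "create-subnet", "--vpc-id", "vpc-PLACEHOLDER",
          "--cidr-block", "10.0.1.0/24"])
    # 2. Only then the servers themselves.
    step(["aws", "ec2", "run-instances", "--image-id", "ami-PLACEHOLDER",
          "--instance-type", "t2.micro", "--count", "1"])
    return issued

for cmd in provision():
    print(" ".join(cmd))
```

Even in this toy version you can see the cons emerging: the ordering is implicit in the script, and none of the IDs are remembered between runs.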

4. Use the Cloud provider’s native resource templating method.

Pros:
- Fully automated creation, updates and deletion of resources
- Can be parameterised to create multiple environments from a single template definition

Cons:
- Templating language is specific to that Cloud provider
- Lots of code duplication as each resource needs to be explicitly declared
- No facility to create modules of reusable code

Examples: AWS CloudFormation, Google Cloud Deployment Manager.

5. Write a custom application that automatically generates the Cloud provider’s native resource templates.

Pros:
- Users can be presented with a simple interface into which they can request the creation of groups of resources
- Resource groups can be customised as required
- Simple to create multiple environments
- Assumptions about how groups of resources are created are written into the code, reducing duplication when defining resources

Cons:
- Assumptions about how groups of resources are created are written into the code, which limits flexibility
- Writing and maintaining the application is a lot of work
- Each resource type for each Cloud provider that you want to use needs to be written and tested
- Code needs to be maintained to work with future updates to Cloud APIs and client libraries

Examples: Troposphere (Python), SparkleFormation (Ruby)
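A minimal, self-contained sketch in the spirit of tools like Troposphere shows both sides of option 5. The `web_tier` helper and its parameters are hypothetical, and the resource shapes are simplified CloudFormation: a one-line request expands into a verbose template, but only because an assumption about what a "web tier" looks like is baked into the code.

```python
import json

def web_tier(name, instance_type="t2.micro", count=2):
    """Expand a one-line request into per-resource template entries.
    The assumption 'a web tier is N identical instances sharing one
    security group' is baked in here -- convenient, but inflexible."""
    resources = {
        f"{name}SecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {"GroupDescription": f"Security group for {name}"},
        }
    }
    for i in range(count):
        resources[f"{name}Instance{i}"] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": instance_type,
                "SecurityGroups": [{"Ref": f"{name}SecurityGroup"}],
            },
        }
    return resources

template = {"AWSTemplateFormatVersion": "2010-09-09",
            "Resources": web_tier("Web", count=3)}
print(json.dumps(template, indent=2))
```

Multiplied across every resource type and every Cloud provider you support, this generator quickly becomes a sizeable application in its own right, which is exactly the maintenance burden listed in the cons.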

6. Use a Third Party tool.

Pros:
- No overhead in writing custom code that needs to be tested and maintained
- Built-in support for multiple Cloud providers
- Modules of reusable code in the third party templating language can be defined
- Modules can be used to create multiple applications and environments
- Low duplication of resource definitions
- System state is handled natively
- Automatic handling of dependencies and timing between creation
- Management (updates and deletion) of resources is handled automatically

Cons:
- Can be less succinct compared to a custom solution (Option 5) that allows assumptions to be made in the code

Examples: Terraform

So which is the best choice?

The ordering of the options above is no accident: it reflects the natural progression in how people usually provision Cloud servers. They start with the web GUI, progress to the command line tools, then put those commands into a script before moving to the cloud provider's native resource templating method (e.g. CloudFormation) and thinking about the best way to write and organise their templates.

This naturally leads to thinking about how to reduce all the repetition and potential for error, and so people look to automate the creation of these templates by writing a custom application. If you have a fairly tight set of use cases that isn't going to change much, if you just want to churn out lots and lots of those use cases, and if you have the developer resource to maintain that code, then perhaps this is fine for your company. Generally, though, it is the wrong choice.

As much as you may enjoy writing custom provisioning code, in our opinion your time is better spent using a tool that already has integrations with all the major Cloud providers: one that is well supported both in the community and (optionally) commercially, has a very active open source codebase, and is well documented.

That tool is Terraform.

Terraform is a relatively new tool (first released July 2014) written by HashiCorp to solve many of the cons described in options 1–5 above. It is open source and has over 1000 contributors.

Terraform defines a number of 'providers', e.g. AWS. From each provider you can define the resources that you want to create, e.g. from the 'AWS' provider you can define an 'EC2' resource. Additionally, it's declarative: you specify what you want your infrastructure to look like, and Terraform gets you there. Unlike a script, running Terraform twice won't change things that are already how you want them.
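Terraform itself is written in Go and its engine is far more sophisticated, but the declarative idea can be sketched in a few lines of Python: diff the desired description of the infrastructure against the actual state, and only act on the differences. The resource names and settings here are illustrative, not Terraform's real data model.

```python
def plan(desired, actual):
    """Return the actions needed to make 'actual' match 'desired'.
    Each argument maps a resource name to its settings."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"web": {"type": "t2.micro"}, "db": {"type": "m5.large"}}
# Starting from nothing, everything needs creating...
assert plan(desired, {}) == [("create", "web"), ("create", "db")]
# ...but once the infrastructure matches the description, a second run is a no-op.
assert plan(desired, desired) == []
```

That empty second plan is the idempotence described above: the tool converges on the desired state rather than blindly re-running a sequence of commands.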

Using Terraform to do the heavy lifting of provisioning allows you to spend more time writing code that adds value to the core application of your business.

HashiCorp themselves write and support the 'big' providers: Amazon Web Services, Google Compute Engine, Microsoft Azure and Oracle Public Cloud. Additionally there are over 60 providers that are written and maintained by the community. Testing standards are the same for both sets of providers, which ensures the stability of the project.

At Naimuri we're big fans of the HashiCorp approach to the design and purpose of their products. They're very transparent about what each product does and does not do compared to rival products. Their tools are typically written with a fairly limited scope, so they specialise in what they are designed to do. This lets users easily pick and choose what they want to use, without forcing them to use or configure features they have no need for.

Originally published at Naimuri.