How To Use Terraform and Remote State with S3

Hootsuite Engineering
May 10, 2016

Terraform is a tool developed by HashiCorp that allows you to build your infrastructure using code. At Hootsuite, we are using Terraform to build new infrastructure in an auditable and maintainable way.

Terraform makes spinning up infrastructure less painful and making changes less scary. By describing infrastructure as code, spinning up a new server turns into submitting a pull request, and rolling back to a previous state of your infrastructure becomes as easy as reverting a commit. Terraform is not limited to AWS: it can provision a whole suite of AWS products, and it integrates with a growing list of providers including DigitalOcean, OpenStack and more.

Using Terraform as a systems developer is a good start for remodelling your infrastructure as code. To really scale, however, you need multiple people to be able to work on your Terraform stacks. This problem is solved by Terraform’s remote state. Remote state stores the state file for a stack with a third-party storage provider so it can be shared across developers. Without remote state, each developer keeps their own statefile and, as you can imagine, things get messy quickly as people clobber each other’s changes.

Terraform gives you several options for remote state backends, including S3, Consul and HTTP. S3 is a particularly interesting backend because you can version the contents of buckets, so conceivably you could also version control the state of your infrastructure.

Using Remote State on S3

To set up remote state using S3, you first need an S3 bucket to store your statefiles and the AWS CLI tools configured. This includes making sure your access key, secret access key and default region are set. Then, configuring remote state is as easy as running:

terraform remote config -backend=s3 -backend-config="bucket=<bucket>" -backend-config="key=<path to file>"
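For example, with a (hypothetical) bucket named my-terraform-states and one statefile per stack, the command might look like the following; the bucket name, key and region flag are placeholders for illustration only:

terraform remote config -backend=s3 \
  -backend-config="bucket=my-terraform-states" \
  -backend-config="key=my-service/production/terraform.tfstate" \
  -backend-config="region=us-east-1"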

This will set up S3 as your remote storage provider and store remote states in the bucket you specify. Locally, Terraform will put a terraform.tfstate file in the .terraform directory containing the details of the remote state:

"remote": {
"type": "s3",
"config": {
bucket = <bucket>
key = <path to statefile>
}
}

If you already have a local statefile, you will probably want to push it up to S3. So, run:

terraform remote push

Now, whenever you run a `terraform plan` or `terraform apply`, the remote state will be pulled down to your local machine and you (probably) will not clobber another developer’s changes. Finally, when you apply a change, the resulting state will be uploaded back to the remote backend.

To pull changes from the remote state you can simply run:

terraform remote pull
Remote state diagram
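Putting it all together, a typical working session looks something like the following; the plan file name is arbitrary and just for illustration:

# make sure you have the latest remote state before planning
terraform remote pull

# review the proposed changes and save the plan
terraform plan -out=changes.tfplan

# apply the saved plan; the updated state is pushed back to S3 automatically
terraform apply changes.tfplan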

Learnings

Keep your remote states small

Don’t keep the state of your entire infrastructure in one giant statefile. This slows down development, since only one person should be changing a statefile at a time, and it creates a tight coupling between all the parts of your infrastructure. At Hootsuite we separate our state files by environment, service and project, so each project within a service has a statefile per environment. Whenever changes are made to a project’s infrastructure, we can guarantee that only that project in the given environment will be modified. For projects that depend on other stacks, use outputs and reference the statefile of the dependencies, as sketched below. There is some more interesting discussion and opinions linked below.
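As a rough sketch of that pattern, with hypothetical names, and using the terraform_remote_state resource as it existed in Terraform at the time (later versions turned it into a data source): a VPC stack might export an output like this:

# in the VPC stack: expose the VPC id so other stacks can consume it
output "vpc_id" {
    value = "${aws_vpc.main.id}"
}

A dependent stack can then read that output from the VPC stack’s statefile in S3:

# in a dependent stack: pull in the VPC stack's remote state
resource "terraform_remote_state" "vpc" {
    backend = "s3"
    config {
        bucket = "my-terraform-states"
        key    = "vpc/production/terraform.tfstate"
    }
}

# use the exported value when building this stack's resources
resource "aws_subnet" "app" {
    vpc_id     = "${terraform_remote_state.vpc.output.vpc_id}"
    cidr_block = "10.0.1.0/24"
}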

Use remote states as opposed to local states

Local states stored in git are OK if you have a small team (fewer than three people), but this doesn’t scale as the number of contributors grows. Using remote state is a best practice and ensures that there is one source of truth for the state of your infrastructure.

Lock remote states

Don’t have multiple people working on the same stack at the same time; this is a recipe for disaster. When one person is working on a stack, it is guaranteed that the state will not change between planning and applying a change. When a second person is also working on the stack, there is no such guarantee, and the result is a mangled statefile and two sets of changes that don’t work. This can be avoided with a process as simple as letting your team know you are making a change so they don’t touch the stack. HashiCorp takes a more rigorous approach to this problem with Atlas, which allows you to version control and lock states.
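If you want something slightly more mechanical than a chat message but lighter than Atlas, one approach (not described in the article, just a sketch) is an advisory lock file kept next to the statefile in S3 and checked with the AWS CLI before anyone plans or applies. The bucket, key and script below are hypothetical:

#!/bin/bash
# lock.sh: advisory lock for a Terraform stack, stored alongside the state in S3
# note: this is advisory only, and there is a small race window between check and write
BUCKET="my-terraform-states"
LOCK_KEY="my-service/production/terraform.tfstate.lock"

if aws s3api head-object --bucket "$BUCKET" --key "$LOCK_KEY" > /dev/null 2>&1; then
    echo "Stack is locked by another developer; try again later." >&2
    exit 1
fi

# claim the lock by writing a small marker object with our username in it
echo "$USER" | aws s3 cp - "s3://$BUCKET/$LOCK_KEY"

# ... run terraform plan / apply here ...

# release the lock when done
aws s3 rm "s3://$BUCKET/$LOCK_KEY"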

Have a process and use tools

Just like you probably have an organization-wide standard process for building applications or modifying infrastructure, you should have a standardized process for Terraform. When you start with a small part of your infrastructure and a small team, it is probably OK for everyone to do their own config and run Terraform locally. As the team and the infrastructure managed by Terraform grow, however, managing stacks becomes more difficult. Tools that set up remote state, plan, and apply that plan with a single command, without requiring knowledge of every other Terraform stack, make Terraform more accessible to other teams and help relieve the worry that something will break.
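For example, a thin wrapper script along these lines can hide the remote-state boilerplate from other teams; the directory layout, bucket naming convention and arguments are hypothetical and only show the shape such a tool might take:

#!/bin/bash
# tf.sh: run terraform for a given service/project/environment with remote state configured
set -e

SERVICE=$1      # e.g. my-service
PROJECT=$2      # e.g. api
ENVIRONMENT=$3  # e.g. production
ACTION=$4       # plan or apply

cd "stacks/$SERVICE/$PROJECT"

# point this stack at its statefile in S3 (one statefile per project per environment)
terraform remote config -backend=s3 \
    -backend-config="bucket=my-terraform-states" \
    -backend-config="key=$SERVICE/$PROJECT/$ENVIRONMENT/terraform.tfstate"

case "$ACTION" in
    plan)  terraform plan -var-file="$ENVIRONMENT.tfvars" -out=changes.tfplan ;;
    apply) terraform apply changes.tfplan ;;
    *)     echo "usage: tf.sh <service> <project> <environment> plan|apply" >&2; exit 1 ;;
esac

Invoked as, say, ./tf.sh my-service api production plan, it configures the right remote state and produces a plan without the operator needing to remember the bucket layout.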

References

About the Author

Sophia Castellarin

Sophia is a co-op on the Operations Team. When not working on things, you can find her Tim Tam slamin’ or being completely and genuinely lost. If you are ever in the neighbourhood, feel free to visit her on GitHub at https://github.com/soapy1/.
