How to manage your GitHub Organization with Terraform Part II — Automate Terraform

Sören Martius · Published in Terramate Blog · Mar 29, 2020

This is Part II of a series of articles on “How to manage your GitHub Organization with Terraform”. In Part I, we covered the basics of Terraform and GitHub and how to manage GitHub organizations and their resources with Terraform.

In this article, you will learn how to automate the terraform plan and apply commands inside your CI and how to deploy changes following the GitHub flow. You will also learn about remote state and state locking and how to leverage Amazon S3 and DynamoDB to implement both mechanisms.

We will use Semaphore as our CI/CD server, but you can easily adapt this guide to other providers because we run everything inside Docker containers without coupling any of the execution logic to the CI server.

Here is a brief overview of what we will cover:

  1. Design a Pipeline for running Terraform in Automation
  2. Set up Terraform Remote State and State Locking with Amazon S3
  3. Abstract logic with Docker and GNU Make
  4. Create a GitHub Repository for versioning the Code
  5. Implement the Pipeline in Semaphore
  6. Conclusion

If you’d like to skip this guide and review the final result straight away, we uploaded a working example that manages a real organization with real resources we’ve created for this guide.

Design a Pipeline for running Terraform in Automation

The GitHub flow is a lightweight, branch-based workflow that supports teams and projects where deployments are made regularly.

GitHub flow with Terraform

Whenever you are working on a project, you’re going to have a bunch of different features in progress at any given time, some of which are ready to go and others which are not. Branching exists to help you manage this workflow collaboratively.

The same can be applied when managing GitHub organizations through code. To make use of the GitHub flow, we agree on the following process:

  1. Whenever you’d like to apply a change to your GitHub Organization and its resources, you create a new branch from master (e.g. git checkout -b add-new-repository).
  2. Whenever you add a commit to the newly created branch, a CI server runs terraform plan on the code changes.
  3. Once the pull request is under review, the reviewers can easily review the output of terraform plan and suggest changes.
  4. When everything looks good, the author of the pull request should trigger the promotion which will run terraform apply and merge the branch back to master.

Anything in the master branch should always be deployable and reflect the actual state of your GitHub Organization. Because of this, your new branch must be created off of master when working on a feature or a fix.
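In terms of plain Git commands, the day-to-day flow looks roughly like this (the branch name and commit message are just examples):

git checkout -b add-new-repository        # 1. branch off master
# ... edit your *.tf files ...
git add .
git commit -m "add new repository"
git push origin add-new-repository        # 2. the CI runs terraform plan on the pushed commits
# 3. open a pull request and let the reviewers inspect the plan output
# 4. trigger the promotion that runs terraform apply, then merge back to master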

Next, we will start implementing the flow.

Set up Terraform Remote State and State Locking with Amazon S3

In Part I of this series, we only used Terraform in our local environment. Since our goal is to automate Terraform and ideally run all operations inside our CI pipeline, we need to move the state file from our local environment to a remote location.

We have had good experience with storing the state in Amazon S3, but Terraform has integrations for a broad set of remote backends. Please note that Amazon S3 offers a free tier: at the time of writing, new accounts can store 5 GB in S3 free of charge for 12 months.

If you work with a remote state, you risk multiple processes attempting to make changes to the same file simultaneously. We need to provide a mechanism that will “lock” the state if it’s currently in use by another user. We can accomplish this by creating a DynamoDB table for Terraform to use.

If you don’t have an account for Amazon Web Services (AWS) yet, now is a good time to open a new one.

Once you have your AWS account in place, we can start to spin up the necessary resources we need to store the state and the state locks. Let’s create a file aws.tf with the following content.

The aws.tf file contains the Terraform resources for creating the S3 bucket, DynamoDB table, IAM user and policies.
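A condensed sketch of aws.tf looks like this (the bucket, table and policy names are illustrative; the working example linked above is more complete and also lets the CI user manage its own IAM resources and policies):

# Sketch of aws.tf; names are illustrative and the full example contains more.

resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-org-terraform-state" # must be globally unique
  acl    = "private"

  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_iam_user" "terraform_ci" {
  name = "terraform-ci"
}

# Grant the CI user full access to the state bucket and the lock table.
resource "aws_iam_user_policy" "terraform_ci" {
  name = "terraform-ci"
  user = aws_iam_user.terraform_ci.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:*"]
        Resource = [
          aws_s3_bucket.terraform_state.arn,
          "${aws_s3_bucket.terraform_state.arn}/*",
        ]
      },
      {
        Effect   = "Allow"
        Action   = ["dynamodb:*"]
        Resource = [aws_dynamodb_table.terraform_locks.arn]
      },
    ]
  })
}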

Also, we need to configure the provider and Terraform requirements. To accomplish that, please create the file provider.tf with the following content.

The provider.tf file contains the requirements for Terraform and the AWS provider.
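A minimal sketch of provider.tf could look like this (the version constraints and the region are illustrative and should be adjusted to your setup):

terraform {
  required_version = ">= 0.12"
}

provider "aws" {
  version = "~> 2.50"
  region  = "eu-west-1" # pick the region you want to store the state in
}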

Let’s take a look at both files.
In aws.tf we create a private S3 bucket for storing Terraform's state and a DynamoDB table for writing the state locks. We also create a new IAM user terraform-ci that will be used inside our CI pipeline and that follows AWS's standard security advice of granting least privilege. Since we are managing both the S3 bucket and the DynamoDB table through Terraform, the IAM user needs full privileges on these resources. To achieve that, we create IAM policies that allow the user to manage both resources, the policies, and the user itself. If you decide to set up the resources without Terraform, the minimal permissions the user needs to manage both resources can be found in Terraform's documentation.

In order to use this user inside our CI pipeline, we need to create access credentials. Terraform’s AWS provider offers the aws_iam_access_key resource to provision credentials, but for the sake of security we will create the credentials through the GUI and avoid storing them in Terraform’s state file.

If you’ve just created a new AWS account, please create a new IAM admin user with some access credentials. Don’t use your root account since it’s considered to be unsafe and therefore a bad practice.

To get the Terraform AWS Provider working, you need to provide the credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

export AWS_ACCESS_KEY_ID=XXXXX
export AWS_SECRET_ACCESS_KEY=XXXXX

Tip: If you are working with multiple AWS accounts or you would like to store the credentials safely in a keychain, we would suggest you take a look at aws-vault.
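For example (the profile name is illustrative):

aws-vault add admin                       # store the credentials in your OS keychain
aws-vault exec admin -- terraform apply   # run Terraform with temporary credentials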

Next, please run terraform init to initialize your Terraform environment and terraform apply to deploy the resources.

Run terraform init and apply to create the environment and its resources.
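In short:

terraform init    # downloads the AWS provider and prepares the working directory
terraform apply   # review the plan and confirm with "yes" to create the bucket, table and IAM user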

That’s it! Terraform created the required resources for you. Now it’s time to update the provider.tf file and enable the S3 remote backend.

The updated provider.tf file with the backend definition to use S3 as the remote state location and DynamoDB state locks.
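The added backend block looks roughly like this (bucket, key, region and table must match the resources created above):

terraform {
  required_version = ">= 0.12"

  backend "s3" {
    bucket         = "my-org-terraform-state"
    key            = "terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}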

To migrate state from your local environment to the remote backend you should run terraform init again.

Run terraform init to move the local state to S3 and to enable the state locks.
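Terraform will detect the new backend configuration and prompt you to copy the existing local state:

terraform init
# Answer "yes" when Terraform asks whether to copy the existing state to the new S3 backend.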

Abstract common logic with Docker and GNU Make

We believe that a CI server shouldn’t own logic. Ideally, it triggers a prepared set of instructions based on constraints. In our infrastructure as code projects, we typically work with a task runner to unify the logic for commonly used tasks. There are tons of great task runners out there, but we always tend to use GNU Make, which is a great, battle-tested choice that doesn’t add any overhead to our projects.

Let’s create the Makefile with the following content.

We use GNU Make for abstracting common tasks as make targets.
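A condensed sketch of the Makefile could look like this (the build-tools image name and the exact set of targets are illustrative; the full example in the linked repository contains a few more):

BUILD_TOOLS_IMAGE ?= mineiros/build-tools:latest

# Run any command inside the build-tools container and forward the credentials
# that the AWS and GitHub providers expect as environment variables.
DOCKER_RUN = docker run --rm \
	-v $(PWD):/app -w /app \
	-e AWS_ACCESS_KEY_ID \
	-e AWS_SECRET_ACCESS_KEY \
	-e GITHUB_TOKEN \
	$(BUILD_TOOLS_IMAGE)

.PHONY: terraform-init terraform-plan terraform-apply

terraform-init:
	$(DOCKER_RUN) terraform init

terraform-plan:
	$(DOCKER_RUN) terraform plan

terraform-apply:
	$(DOCKER_RUN) terraform apply -auto-approve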

The Makefile offers targets for each task we need in order to automate Terraform in our pipeline. You might have noticed that instead of calling the terraform binary directly, we work with a Docker container. This saves us time and effort and means that the CI server only needs to have Docker installed. Each task simply spins up a container from our build-tools image. The image is just a lean Alpine Linux based image that has tools such as Terraform, Packer and Go pre-installed.

Using Docker and GNU Make makes it very easy to decouple the necessary logic from our pipeline. All the CI now needs to do is to call the Makefile targets and to provide the necessary arguments.

Create a GitHub Repository for versioning the Code

Let’s create a new repository inside our GitHub organization to version our code. Please create the file repository.tf with the following content.

Create a repository for our GitHub organization as code.
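A sketch of repository.tf could look like the following (the module version and input names may have changed since this article was written, so please check the module's README; the organization name matches the example organization used throughout this guide):

provider "github" {
  organization = "github-terraform-example"
}

module "iac_github_repository" {
  source  = "mineiros-io/repository/github"
  version = "~> 0.1"

  name        = "iac-github"
  description = "Our GitHub organization managed as code with Terraform"

  # See the module's README for visibility, teams, branch protection and more.
}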

In this example, we are using the terraform-github-repository open-source Terraform module which helps you to quickly create repositories following best practices. Please see the README.md for details.

Please note that for the sake of demonstration we set the repository's visibility to public. If you create the repository for your own organization, you will most likely want to create a private repository.

Also, it is a good practice to create a new GitHub machine user and use it instead of your personal one for automating the deployments in your CI pipeline. To be able to communicate with the GitHub API, we need to issue a personal access token.

export GITHUB_TOKEN=XXXXX

Let’s run terraform init again to download the required module and terraform apply to create the repository.

Hurray! Terraform created the new repository for us.

Let’s initialize Git in our working directory, add the created repository as a remote and push our code.

git init
git remote add origin git@github.com:github-terraform-example/iac-github.git
git add aws.tf provider.tf repository.tf Makefile
git commit -m "configure remote backend, add github iac repository and Makefile"
git push origin master

Implement the Pipeline in Semaphore

Now that we have taken care of all the requirements, we can finally implement our CI pipeline. We’ve chosen Semaphore as our CI/CD server, but it is easy to replicate the next steps with any major CI/CD provider.

If you don’t have an account with Semaphore yet, please register a new one and create your desired organization.

Create your organization in SemaphoreCI

Also, make sure that you authorized Semaphore as an OAuth App in your GitHub account so it has the necessary permissions to communicate with your GitHub organization and its resources.

Next, we create some access credentials for our terraform-ci IAM user. Please log in to the AWS console and issue the credentials.

Let’s add the credentials as a secret to our Semaphore organization so we can use them in our pipeline.

Add your AWS access credentials as a secret in SemaphoreCI.

We also need to create another secret for the personal access token of our GitHub machine user.

Add your GitHub access token as a secret in SemaphoreCI.

To have Semaphore build our repository, we need to add it as a project in Semaphore. Semaphore will add a webhook to our repository so it gets notified of every change and triggers a new build whenever we push commits.

Add the iac-github repository as a new project in SemaphoreCI.

The last step is to add the pipeline configuration to our codebase. Please create the directory .semaphore and add the two files semaphore.yml and deploy.yml with the following content inside the directory.

Add a file semaphore.yml that contains our main pipeline.
Add a file deploy.yml that contains the deployment pipeline for running terraform apply.
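A stripped-down sketch of the two pipeline files could look like this (the machine type, OS image and secret names are illustrative and must match what you configured in Semaphore):

# .semaphore/semaphore.yml
version: v1.0
name: terraform plan
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: plan
    task:
      secrets:
        - name: aws-credentials
        - name: github-token
      jobs:
        - name: terraform plan
          commands:
            - checkout
            - make terraform-init
            - make terraform-plan
promotions:
  - name: deploy
    pipeline_file: deploy.yml
    auto_promote_on:
      - result: passed
        branch:
          - master

# .semaphore/deploy.yml
version: v1.0
name: terraform apply
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: apply
    task:
      secrets:
        - name: aws-credentials
        - name: github-token
      jobs:
        - name: terraform apply
          commands:
            - checkout
            - make terraform-init
            - make terraform-apply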

That’s it! Semaphore will now trigger new builds on every commit. It will automatically deploy commits to master but also allows your team to trigger promotions manually.

SemaphoreCI Pipeline Overview

You can now start to add new repositories, teams, and members to your codebase. For example, if you would like to add a branch protection rule to the master branch, just open a new branch and alter the repository.tf file.
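For illustration, one way to express such a rule is with the GitHub provider's github_branch_protection resource (the module may also expose an input for this; the settings shown are just an example):

resource "github_branch_protection" "master" {
  repository     = "iac-github"
  branch         = "master"
  enforce_admins = true

  required_pull_request_reviews {
    required_approving_review_count = 1
  }
}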

Conclusion

In this article, you learned how to automate the deployment of your GitHub infrastructure as code with Semaphore. You also learned how to migrate the local state to a remote location and how to apply state locks.

You are now ready to start managing your own GitHub organization through code. We hope that this article helps you get started quickly.

If you need help or more information, don’t hesitate to send us an email at hello@mineiros.io
