How to continuously deploy a static website in style using GitHub and AWS

Kyle Galbraith
Apr 22, 2018 · 11 min read

In this post we are going to learn how to use AWS CodePipeline and CodeBuild to automatically retrieve the source code for a static website from GitHub and deploy that website onto S3.

We will configure the deployment to happen on any new commit to our master branch.

We begin by creating a code pipeline with a source stage linked to our GitHub repository. When a new commit is pushed to our master branch, the pipeline automatically checks out the latest code. We can then trigger a build step in our pipeline. This step can install dependencies, run tests, and package our site for deployment. Our final step then deploys our static website to our S3 static website bucket.

Now that we know what we are going to build, let’s jump in and build it out.

AWS CodePipeline Prerequisites

To stand up an AWS CodePipeline in your account that communicates with your GitHub repository, there are a few prerequisites to take care of.

  1. You should have an AWS account already set up.
  2. You should have CLI access configured for your account.
  3. Your static website should already be hosted out of AWS S3. If not, get that set up first.

Configuring GitHub and AWS Communication

In order for AWS to poll for changes to our master branch in GitHub, we need to generate an access token for our GitHub repository. We can create a personal access token by completing the following steps from within GitHub.

  1. While logged into GitHub, click your profile photo in the top right, then click Settings.
  2. On the left, click Developer settings.
  3. On the left, click Personal access tokens.
  4. Click Generate new token and enter AWSCodePipeline for the name.
  5. For permissions, select repo.
  6. Click Generate token.
  7. Copy the token somewhere so we can use it later.

Creating Our CodePipeline By Hand

The first thing we need to provision is CodePipeline. Our pipeline is going to consist of two stages: a Source stage connected to GitHub and a Build stage that deploys our static website.

Let’s go ahead and create our CodePipeline via the AWS Console:

  1. Navigate to CodePipeline in the AWS Console.
  2. Click Create Pipeline.
  3. Enter a name for your Pipeline.
  4. Select GitHub as the source provider.
  5. Click Connect to GitHub. This will open a separate window where you sign into your GitHub account. Once signed in, you must grant repo access to AWS CodePipeline. This is the communication link between your GitHub repo and CodePipeline.
  6. Select the repository you want to use in this Pipeline.
  7. Enter master, or your default branch, in the Branch input.
  8. Click Next.
  9. For the Build provider we are going to choose AWS CodeBuild.
  10. Select Create a new build project.
  11. Enter a name for your Build project.
  12. For the Environment image we will use an image provided by AWS CodeBuild.
  13. Select Ubuntu as the operating system.
  14. Select Node.js as the Runtime and nodejs:6.3.1 as the Version.
  15. Leave Build specification as the buildspec.yml option.
  16. In the CodeBuild service role section, we want to create a new service role.
  17. Enter a name for the service role CodeBuild will use.
  18. Leave the rest of the values at their default settings.
  19. Click Save build project and then click Next.
  20. For Deployment provider we want No deployment.
  21. Click Next.

Who likes clicking buttons? Not me.

That was a lot of button clicking, right? Could you do it again without looking at all 21 steps? I know I couldn’t.

Good news! There is a far better way of creating and managing your code pipelines, or any AWS infrastructure for that matter. You may have heard the term Infrastructure-as-Code, and it is pretty much exactly what it sounds like: represent your infrastructure as code so that you can create, maintain, and destroy it without ever opening a GUI.

There is nothing wrong with starting with the GUI if you’re new to AWS, or any new cloud provider. But we want to aim for automation as we scale.

There are many tools out there that make this very easy to do. AWS provides CloudFormation, which allows you to define your resources inside of JSON or YAML templates.

CloudFormation is great, but there are other tools out there as well. One I have been using a lot recently is Terraform. It is cloud-provider agnostic and supports a variety of providers via community-developed modules.

For this blog post, I put together a quick Terraform template that provisions our AWS infrastructure.

Let’s take a quick journey through what this template is doing.

At the top, we are defining the variables to be passed into the template. To provision the resources, we need to pass in the following (sketched just after this list):

  • name of our pipeline
  • our GitHub username
  • our GitHub token from earlier
  • the GitHub repository we want to link to our pipeline
  • the AWS region to provision into
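
Here is a minimal sketch of what that variables block might look like. The variable names (pipeline_name, github_username, github_token, github_repo, region) are placeholders of my choosing; match them to whatever your template actually uses.

```
# Inputs for the pipeline template. All but region must be
# supplied when running terraform plan/apply.
variable "pipeline_name" {
  description = "Name for the CodePipeline and related resources"
}

variable "github_username" {}
variable "github_token" {}
variable "github_repo" {}

variable "region" {
  description = "AWS region to provision the pipeline in"
  default     = "us-east-1"
}
```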

Then we specify that we want to use AWS as our provider, with the region passed in as a variable. As we will see in a minute, this loads a provider from Terraform that supports most AWS resources. The Terraform documentation lists the AWS resources it supports.
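
The provider block itself is only a few lines, assuming the region variable from the sketch above:

```
# Tell Terraform to use the AWS provider in our chosen region.
provider "aws" {
  region = "${var.region}"
}
```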

The next set of resources we are creating is for our AWS CodePipeline; a condensed sketch follows this list.

  • We create an S3 bucket that will hold the artifacts/outputs from each stage in our pipeline.
  • We create an IAM role, codepipeline_role, with an assume-role policy that allows the CodePipeline service to assume it. That role has a policy attached to it, attach_codepipeline_policy, which grants access to the AWS services we need to call during an invocation of our pipeline.
  • We configure the resources needed in order for CodeBuild to work as expected. We define an assume role policy that allows CodeBuild to assume a role and access services via the codebuild_policy.
  • We create our actual CodeBuild project, build_project, that runs the build stage of our CodePipeline. Notice here we specify the source to be codepipeline and our buildspec to be buildspec.yml.
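
A condensed sketch of those resources might look like the following. The IAM policies here are deliberately broad for brevity; a real template should scope them down to only the actions the pipeline and build project need.

```
# Bucket that holds the artifacts passed between pipeline stages.
resource "aws_s3_bucket" "artifact_bucket" {
  bucket = "${var.pipeline_name}-artifacts"
}

# Role the CodePipeline service assumes when our pipeline runs.
resource "aws_iam_role" "codepipeline_role" {
  name = "${var.pipeline_name}-codepipeline-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codepipeline.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

# Policy granting the pipeline access to the services it calls.
resource "aws_iam_role_policy" "attach_codepipeline_policy" {
  name = "codepipeline-policy"
  role = "${aws_iam_role.codepipeline_role.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:*", "codebuild:*"],
    "Resource": "*"
  }]
}
EOF
}

# Role for CodeBuild; the codebuild_policy attached to it is
# analogous to the pipeline policy above and omitted for brevity.
resource "aws_iam_role" "codebuild_role" {
  name = "${var.pipeline_name}-codebuild-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codebuild.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

# The CodeBuild project that runs the Build stage of our pipeline.
resource "aws_codebuild_project" "build_project" {
  name         = "${var.pipeline_name}-build"
  service_role = "${aws_iam_role.codebuild_role.arn}"

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/nodejs:6.3.1"
    type         = "LINUX_CONTAINER"
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "buildspec.yml"
  }
}
```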

To provision our CodePipeline, we assign the artifact store to be the S3 bucket we provisioned earlier. The role is the codepipeline role that we defined earlier as well. The Source stage uses the GitHub provider and the token we generated over on GitHub. We want to send the output of this stage to a label called code via the output_artifacts property.

The last stage in our CodePipeline resource is the Build stage. Here the provider is CodeBuild and we have defined our input_artifacts to be the code output_artifacts from our Source stage. Then we specify the ProjectName for the CodeBuild project that will be responsible for executing the Build stage.
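
Putting those stages together, a sketch of the pipeline resource itself might look like this. Note the Build action is named DeployToS3 here; that is the name we will see in the console later.

```
resource "aws_codepipeline" "pipeline" {
  name     = "${var.pipeline_name}"
  role_arn = "${aws_iam_role.codepipeline_role.arn}"

  # Stage outputs/inputs land in the artifact bucket from earlier.
  artifact_store {
    location = "${aws_s3_bucket.artifact_bucket.bucket}"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["code"]

      configuration {
        Owner      = "${var.github_username}"
        Repo       = "${var.github_repo}"
        Branch     = "master"
        OAuthToken = "${var.github_token}"
      }
    }
  }

  stage {
    name = "Build"

    action {
      name            = "DeployToS3"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["code"]

      configuration {
        ProjectName = "${aws_codebuild_project.build_project.name}"
      }
    }
  }
}
```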

Everything we need to provision for our continuous deployment pipeline is in this template. If you are just looking to get up and running with AWS, then configuring this by hand might be faster than writing your first Terraform template. But in the long run, defining your infrastructure as code has massive benefits:

  • Your infrastructure definition lives in source control. It can be iterated on as code would be.
  • Your infrastructure is now repeatable. If you need to move it to another AWS region, you can run the template in that region.
  • You can quickly make changes by changing the template and applying updates.

Now that we know what this template is doing and we have Terraform installed, we can run this template from the command line.

First we run terraform init from the directory where our template lives. This pulls in the dependencies Terraform needs to run the template.

Once our Terraform template has been initialized, we can use the plan command to see exactly what is going to be created.
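
Assuming the variable names from the sketch earlier, the two commands look like this:

```
# Pull in the AWS provider and initialize the working directory.
terraform init

# Preview what will be created, updated, or destroyed.
terraform plan \
  -var "pipeline_name=my-site-pipeline" \
  -var "github_username=<your-github-username>" \
  -var "github_token=<token-from-earlier>" \
  -var "github_repo=<your-repo-name>"
```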

In the plan output, everything in green is a resource that will be created. Resources in yellow would be updated, and resources in red would be destroyed.

You can then run the apply command to actually create everything in the template. There is a confirmation prompt; type yes to proceed.
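
Apply takes the same variables and asks for that confirmation before it touches anything:

```
terraform apply \
  -var "pipeline_name=my-site-pipeline" \
  -var "github_username=<your-github-username>" \
  -var "github_token=<token-from-earlier>" \
  -var "github_repo=<your-repo-name>"
```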

Once the template has completed, we should see that all of our AWS resources have been created to support our continuous deployment pipeline.

Wait, what did we just do?

That is a lot of steps and quite a bit of infrastructure we stood up in AWS. So let’s walk through at a high level what we just created.

At the top of our pipeline we have the Source, in our case this is our GitHub repository. We configured our pipeline to periodically poll the master branch of our repository. If there are new commits in the master branch, then the pipeline activates to kick off a new build process. This is what is often referred to as a trigger for our build pipeline.

When a new build starts, our pipeline checks out the latest commits from the master branch. Once the changes are checked out, they are passed to the next step in our pipeline, the Build step. For this step, we are using another AWS service, CodeBuild. We have configured our CodeBuild project to use a Node.js image provided by Amazon. This image comes with Node.js installed already so the build machine that builds our repository has access to it.

But how does AWS CodeBuild know how to build our repository? That is where the buildspec.yml comes in. This is a special file that we put at the root of our repository. In it we configure the different phases of the build process: install, pre_build, build, and post_build. For our use case, we are just going to use the build phase. It will consist of copying the contents of our Source to our S3 website bucket, effectively deploying our static website.

Let’s jump over to our static website repository and configure our buildspec file.

Setting up our buildspec file

We are going to begin by adding a buildspec.yml file to the root of our static website.

This file is going to be the template that AWS CodeBuild uses to build, and in our case deploy, our static website. It consists of install, pre_build, build, and post_build phases. For our use case, we are going to leverage the build phase.
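
Here is a minimal buildspec along those lines. The bucket name is a placeholder; swap in the S3 bucket that hosts your site.

```
version: 0.2

phases:
  install:
    commands:
      - echo "install step"
  pre_build:
    commands:
      - echo "pre_build step"
  build:
    commands:
      # Copy the contents of the repository to the website bucket.
      - aws s3 sync . s3://<your-website-bucket>
  post_build:
    commands:
      - echo "post_build step"
```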

What we are doing in the above build specification is pretty straightforward; the real work is just one line, in fact. We take the contents of our static website and copy them to the S3 bucket that hosts our site via the AWS command line interface. For the other phases, we echo out which phase ran in our build process.

Of course we could do even more here if we had a need to do so. For instance, if we needed to run a build process in our package.json, then the build and post_build steps would look like this:
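
A sketch of that, assuming a build script in package.json that writes the site to a build directory:

```
version: 0.2

phases:
  build:
    commands:
      - npm install
      - npm run build
  post_build:
    commands:
      # Deploy only the built output, not the whole repository.
      - aws s3 sync ./build s3://<your-website-bucket>
```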

Now we are running npm run build inside our build phase and saving the s3 sync command for our post_build phase. Our buildspec gives us the ability to script not only deployments of our site, but how it is built and tested as well. We could also leverage the other phases, like install, to add any dependencies our build process needs.

For now let’s stick with our original buildspec file that is copying our static site to S3. Make sure that it is at the root of your repository, as this is where CodeBuild will look for it. Check it into your repository so we can trigger our CodePipeline.

Triggering our CodePipeline

Earlier we linked the Source stage of our CodePipeline to the GitHub repository of our static website. It is configured to watch for changes in our master branch. So, any new changes pushed to that branch trigger a new CodePipeline run. As we just checked in our buildspec.yml file, we should now see an invocation of our CodePipeline running.

Our Source stage is invoked due to the new changes in the master branch of our repository. Once it completes, it sends its artifacts to the Build stage. The Build stage takes those artifacts and runs the buildspec.yml file to deploy them to our S3 bucket.

If we click the details link on our DeployToS3 action, we can see the logs that our build process outputs.

Once our DeployToS3 action has succeeded, we should be able to reload our static website in the browser and see our changes.

Bam! We have continuous deployment

Channeling my inner Emeril here, we now have continuous deployment for our static website. With any new commit to our master branch, a new CodePipeline run is triggered. This checks out the latest code from GitHub and passes it to CodeBuild. Our build project then executes what is in our buildspec file.

Currently, our buildspec file just copies the contents of our static website to our S3 bucket. But we could extend it to do more. We could run npm tasks to build our site or run tests. If we are also using CloudFront in front of our static website, we can issue an invalidation request when we deploy the new site.
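
As a sketch, the invalidation is one more command in the post_build phase; the distribution id is a placeholder for your own CloudFront distribution:

```
version: 0.2

phases:
  build:
    commands:
      - aws s3 sync . s3://<your-website-bucket>
  post_build:
    commands:
      # Tell CloudFront to drop its cached copies of the site.
      - aws cloudfront create-invalidation --distribution-id <your-distribution-id> --paths "/*"
```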

There are so many things you can learn by diving in and actually using AWS. A static website might seem like a simple use case, but it is awesome for learning a wide variety of things.

Hungry to learn more Amazon Web Services?

I have been using AWS for over six years now, and I am always learning new services and new ways to use existing ones. It is a massive platform with a lot of documentation. But there are times when that documentation can feel like a massive sea of information, to the point where you get lost in it.

Inspired by this problem, I recently released an ebook and video course that cuts through the sea of information. It focuses on hosting, securing, and deploying static websites on AWS. The goal is to learn services related to this problem as you are using them. If you have been wanting to learn AWS, but you’re not sure where to start, then check out my course.

👏 If you enjoyed this, don’t forget to offer claps to show your support!
