Deploying a Static Website With Terraform Using Amazon S3, CloudFront, and Route 53

Nick Sanders
6 min read · Jan 26, 2023


Link to repository

In this article, I will show you how to use Terraform to deploy a static website on Amazon Web Services. As described by the official website, “Terraform is an open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure”.

Prerequisites for this tutorial:

  1. AWS account with a credit card on file (Most of this tutorial stays within the AWS Free Tier, but registering a domain and hosting a Route 53 zone do incur small charges.)
  2. Terraform (Click for instructions on how to install Terraform on your OS.)
  3. Code editor (I recommend Visual Studio Code)

Registering a domain

There are many domain registrars to choose from, such as GoDaddy.com, Namecheap.com, and the AWS service Route 53. For this tutorial, I will register the domain “nicksands.link” with Route 53. To register your domain:

  1. Log in to your AWS account and navigate to the Route 53 console.
  2. On the left-hand side, click “Registered Domains” under “Domains”.
  3. Click “Register Domain”, search for your desired domain name, and complete the checkout process.

If you encounter problems, visit this link to the AWS documentation, which explains the process in more detail.

Getting Started

Open your terminal and run the following commands:

mkdir terraform-site
cd terraform-site
touch providers.tf
touch variables.tf
touch terraform.tfvars
touch s3.tf
touch cloudfront.tf
touch acm.tf
touch route53.tf
touch set_env_vars.sh
touch .gitignore
touch policy.json
code .

The “code .” command should open VS Code and populate it with the files in your current directory. If it doesn’t, open VS Code from your applications page and open the folder manually. Once in VS Code, open the “set_env_vars.sh” file and add your AWS credentials like this:

export AWS_ACCESS_KEY_ID='your_access_key'
export AWS_SECRET_ACCESS_KEY='your_secret_access_key'
export AWS_DEFAULT_REGION='us-east-1'

Make sure the credentials are wrapped in quotation marks. Before running Terraform later on, load them into your shell with “source set_env_vars.sh”. Next, open the “.gitignore” file and paste the following:

set_env_vars.sh
**/.terraform
*.backup

The “.gitignore” file tells Git which files to leave out of the repository, keeping your credentials and local Terraform files out of version control.

Specifying providers

The provider is what Terraform uses to interface with the AWS API. Since we are using AWS, we will specify AWS as our provider. In “providers.tf”, the required_providers block declares which provider plugin Terraform should download, and the provider block below it configures the connection to AWS.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
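
Optionally, you can also pin the Terraform CLI version itself in the same terraform block so everyone running the configuration uses a compatible release. A minimal sketch, assuming any Terraform 1.x release is acceptable:

terraform {
  # Require a 1.x release of the Terraform CLI (adjust to whatever you have installed)
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}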

Creating S3 buckets

First, we need to declare the variables we will use in the file named “variables.tf”. The two variables will be “domain_name” and “bucket_name”.

variable "domain_name" {
type = string
description = "Domain name of the website"
}

variable "bucket_name" {
type = string
description = "Name of the bucket"
}
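
Both variables are required, so Terraform will prompt for any value missing from “terraform.tfvars”. If you want an extra guard, you could extend the “domain_name” variable with a validation block; the regular expression below is only an illustrative assumption, not part of the original setup:

variable "domain_name" {
  type        = string
  description = "Domain name of the website"

  # Reject obviously invalid domain names at plan time
  validation {
    condition     = can(regex("^[a-z0-9.-]+$", var.domain_name))
    error_message = "The domain_name value may only contain lowercase letters, digits, dots, and hyphens."
  }
}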

Navigate to “terraform.tfvars” to give the variables their values. For this tutorial, both values will be equal to “nicksands.link”.

domain_name = "nicksands.link"
bucket_name = "nicksands.link"

We need to create two S3 buckets: the “root” bucket and the “www” bucket. The “root” bucket name will be equal to the variable we set earlier, “bucket_name”. To use a variable, add “var.” in front of the variable name (var.bucket_name). For the “www” bucket, we’ll add the “www.” prefix in front of the bucket_name variable. For the “root” bucket, we will set the “acl” value to “public-read” and attach a JSON policy allowing anyone to read the objects inside.

resource "aws_s3_bucket" "root_bucket" {
bucket = var.bucket_name
acl = "public-read"
policy = file("policy.json")
}

resource "aws_s3_bucket" "www_bucket" {
bucket = "www.${var.bucket_name}"
}

In the file named “policy.json”, paste the following policy. Make sure the bucket name in the Resource ARN matches your own bucket_name (mine is “nicksands.link”):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::nicksands.link/*"
      ]
    }
  ]
}
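
As an aside, hardcoding the bucket name inside “policy.json” means you must remember to update it whenever the bucket name changes. An alternative sketch (not the approach used in the rest of this tutorial) builds the policy inline with jsonencode so it always tracks var.bucket_name:

# Alternative to policy.json: the policy is generated from var.bucket_name.
# This would replace the policy = file("policy.json") line in the root bucket.
resource "aws_s3_bucket" "root_bucket" {
  bucket = var.bucket_name
  acl    = "public-read"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "PublicReadGetObject"
        Effect    = "Allow"
        Principal = "*"
        Action    = ["s3:GetObject"]
        Resource  = ["arn:aws:s3:::${var.bucket_name}/*"]
      }
    ]
  })
}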

Next, we must configure static website hosting for the “root” bucket and specify the index and error documents. Here they are named “index.html” and “404.html”, respectively, and those are the file names you will need to upload later. On the “www” bucket, we enable a redirect of all requests to our “root” bucket.

The final result should look like this:

resource "aws_s3_bucket" "root_bucket" {
bucket = var.bucket_name
acl = "public-read"
policy = file("policy.json")

website {
index_document = "index.html"
error_document = "404.html"
}
}

resource "aws_s3_bucket" "www_bucket" {
bucket = "www.${var.bucket_name}"

website {
redirect_all_requests_to = var.domain_name
}
}
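
One caveat: in version 4.x of the AWS provider, the inline acl, policy, and website arguments on aws_s3_bucket are deprecated in favor of standalone resources (they still work, but terraform plan will print warnings). If you would rather avoid the warnings, a rough equivalent using the standalone resources might look like the sketch below; you would then remove the inline arguments from the two bucket resources above.

# Standalone equivalents of the inline acl, policy, and website arguments (AWS provider 4.x)
resource "aws_s3_bucket_acl" "root" {
  bucket = aws_s3_bucket.root_bucket.id
  acl    = "public-read"
}

resource "aws_s3_bucket_policy" "root" {
  bucket = aws_s3_bucket.root_bucket.id
  policy = file("policy.json")
}

resource "aws_s3_bucket_website_configuration" "root" {
  bucket = aws_s3_bucket.root_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.html"
  }
}

resource "aws_s3_bucket_website_configuration" "www" {
  bucket = aws_s3_bucket.www_bucket.id

  redirect_all_requests_to {
    host_name = var.domain_name
  }
}

If you go this route, the CloudFront origins can reference aws_s3_bucket_website_configuration.root.website_endpoint instead of the deprecated bucket attribute.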

Creating CloudFront Distributions

In this section, we will create two CloudFront distributions that will route to our “root” and “www” buckets. CloudFront serves static content to end users quickly from points of presence (PoPs) close to them.

In each distribution, we’ll set the origin’s domain_name to the bucket’s website endpoint and set the origin_id to “S3-.${var.bucket_name}” for the “root” bucket and “S3-www.${var.bucket_name}” for the “www” bucket. Next, we’ll create a custom origin configuration that sets the HTTP port to 80 and the HTTPS port to 443, with the origin_protocol_policy set to “http-only” (S3 website endpoints only speak HTTP) and origin_ssl_protocols set to “TLSv1.2”. Next, set the “enabled” field to “true” so the distribution accepts end users’ requests for content. Our aliases should be equal to our domain names: “var.domain_name” for the “root” distribution and “www.${var.domain_name}” for the “www” distribution, each wrapped in square brackets because the argument takes a list.

For our default_cache_behavior, the “GET” and “HEAD” methods are allowed, with the target_origin_id equal to the origin_id we set on the origin. The forwarded_values block controls how CloudFront handles query strings and cookies (the “root” distribution forwards query strings, the “www” distribution does not). The viewer_protocol_policy is set to “allow-all” on the “root” distribution, which lets viewers connect over either HTTP or HTTPS, while the “www” distribution uses “redirect-to-https”. The viewer_certificate block sets the SSL configuration for the distribution: the ssl_support_method should be “sni-only”, and minimum_protocol_version sets the oldest TLS version viewers may use (“TLSv1.2_2021” on the “root” distribution and “TLSv1.1_2016” on the “www” distribution below).

The final result should look something like this:

resource "aws_cloudfront_distribution" "root_distribution" {
origin {
domain_name = aws_s3_bucket.root_bucket.website_endpoint
origin_id = "S3-.${var.bucket_name}"
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}

enabled = true
is_ipv6_enabled = true

aliases = [var.domain_name]

default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "S3-.${var.bucket_name}"

forwarded_values {
query_string = true

cookies {
forward = "none"
}

headers = ["Origin"]
}

viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 0
max_ttl = 0
}

restrictions {
geo_restriction {
restriction_type = "none"
}
}

viewer_certificate {
acm_certificate_arn = aws_acm_certificate_validation.cert_validation.certificate_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.2_2021"
}
}

resource "aws_cloudfront_distribution" "www_distribution" {
origin {
domain_name = aws_s3_bucket.www_bucket.website_endpoint
origin_id = "S3-www.${var.bucket_name}"

custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}

enabled = true
is_ipv6_enabled = true
default_root_object = "index.html"

aliases = ["www.${var.domain_name}"]

custom_error_response {
error_caching_min_ttl = 0
error_code = 404
response_code = 200
response_page_path = "/404.html"
}

default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "S3-www.${var.bucket_name}"

forwarded_values {
query_string = false

cookies {
forward = "none"
}
}

viewer_protocol_policy = "redirect-to-https"
min_ttl = 0
default_ttl = 0
max_ttl = 0
compress = true
}

restrictions {
geo_restriction {
restriction_type = "none"
}
}

viewer_certificate {
acm_certificate_arn = aws_acm_certificate_validation.cert_validation.certificate_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.1_2016"
}
}
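
Once the distributions are defined, it can be handy to expose their domain names as Terraform outputs so you can check them after an apply. These outputs are optional and not part of the original setup; they could live in cloudfront.tf or a new outputs.tf file:

# Optional: print the CloudFront domain names after terraform apply
output "root_distribution_domain" {
  description = "CloudFront domain name for the root distribution"
  value       = aws_cloudfront_distribution.root_distribution.domain_name
}

output "www_distribution_domain" {
  description = "CloudFront domain name for the www distribution"
  value       = aws_cloudfront_distribution.www_distribution.domain_name
}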

Creating an SSL certificate

The ACM certificate resource lets you request and manage certificates from AWS Certificate Manager (ACM). This is a two-step process: first we create the certificate, then we create the certificate validation resource. You can change the validation method from “EMAIL” to “DNS” if you so choose (see the sketch after the code below); I prefer email validation as it is usually quicker.

resource "aws_acm_certificate" "ssl_certificate" {
provider = aws.acm_provider
domain_name = var.domain_name
subject_alternative_names = ["www.${var.domain_name}"]
validation_method = "EMAIL"

lifecycle {
create_before_destroy = true
}
}

resource "aws_acm_certificate_validation" "cert_validation" {
provider = aws.acm_provider
certificate_arn = aws_acm_certificate.ssl_certificate.arn
}
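
If you would rather use DNS validation, a sketch of how the same two resources might look is below; it swaps “EMAIL” for “DNS” and adds the validation records to the hosted zone created in the Route 53 section further down. This is a common pattern from the provider documentation rather than what this tutorial uses:

# DNS validation alternative: replaces the two resources above
resource "aws_acm_certificate" "ssl_certificate" {
  provider                  = aws.acm_provider
  domain_name               = var.domain_name
  subject_alternative_names = ["www.${var.domain_name}"]
  validation_method         = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

# One CNAME validation record per domain on the certificate
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.ssl_certificate.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.main.zone_id
}

resource "aws_acm_certificate_validation" "cert_validation" {
  provider                = aws.acm_provider
  certificate_arn         = aws_acm_certificate.ssl_certificate.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}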

Go back to the providers.tf file and add the aliased provider that the ACM resources reference. CloudFront only accepts certificates issued in us-east-1, which is why this provider pins that region.

provider "aws" {
alias = "acm_provider"
region = "us-east-1"
}

Creating Route 53 Hosted Zones and Records

We will create one hosted zone with its name equal to our domain name. Next, we will create two records: a “root” record and a “www” record. Notice a pattern? The zone_id for each record equals the ID of the zone we just created. The name of the “root” record is equal to the domain_name variable; for the “www” record, the name is the domain_name variable with the “www.” prefix added to the front. Under “alias”, the name and zone_id are equal to those of their respective CloudFront distributions.

Your code should look similar to this:

resource "aws_route53_zone" "main" {
name = var.domain_name
}

resource "aws_route53_record" "root" {
zone_id = aws_route53_zone.main.zone_id
name = var.domain_name
type = "A"

alias {
name = aws_cloudfront_distribution.root_distribution.domain_name
zone_id = aws_cloudfront_distribution.root_distribution.hosted_zone_id
evaluate_target_health = false
}
}

resource "aws_route53_record" "www" {
zone_id = aws_route53_zone.main.zone_id
name = "www.${var.domain_name}"
type = "A"

alias {
name = aws_cloudfront_distribution.www_distribution.domain_name
zone_id = aws_cloudfront_distribution.www_distribution.hosted_zone_id
evaluate_target_health = false
}
}
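
One caveat: registering the domain through Route 53 already created a hosted zone, and the zone Terraform creates here gets its own, different set of name servers. If the site does not resolve after applying, update the registered domain’s name servers to those of the Terraform-managed zone. An optional output (my own addition, not part of the original setup) makes them easy to find:

# Optional: list the name servers the registered domain should point at
output "hosted_zone_name_servers" {
  description = "Name servers of the Terraform-managed hosted zone"
  value       = aws_route53_zone.main.name_servers
}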

Congratulations! You have defined a complete static-website stack with Terraform and AWS services. To make sure your code is neatly formatted, run “terraform fmt”. Then run “terraform init” to download the AWS provider, followed by “terraform plan” to see a breakdown of all the resources that will be created. Finally, run “terraform apply -auto-approve” to build your infrastructure. Once it finishes, upload your “index.html” and “404.html” files to the “root” bucket so the site has content to serve.

Coming soon: Automating Terraform deployment with GitHub Actions.
