Hosting a Static Website on AWS S3 using Terraform

Frankpromiseedah · 6 min read · Dec 17, 2023


In this guide, we’ll explore deploying static websites efficiently on AWS using Terraform.

I’ll walk you through the process of launching a static website by uploading its content to an S3 bucket and configuring the bucket to host your site. Before diving in, let’s clarify a few key terms and concepts.

What is Terraform?

Terraform is an open-source Infrastructure as Code (IaC) tool for building, managing, and deploying production-ready environments. Using declarative configuration files, Terraform codifies cloud APIs, and it can manage third-party services alongside custom, in-house solutions.

What is Amazon S3?

Amazon S3 (Simple Storage Service) is Amazon Web Services' web-based object storage offering. Acting as a repository, it lets you upload files and directories and store and retrieve diverse data types such as documents, photos, and videos.

Uploading and retrieving data is straightforward through the AWS SDKs, which are available for most major programming languages, so S3 integrates easily into an existing tech stack.

Buckets

Within S3, files live in buckets, which are loosely analogous to folders on your computer.

Each bucket name must be globally unique across all of AWS. This uniqueness matters both for resource identification and for linking a domain name when hosting a static website.

There is no limit on the number of files a bucket can hold, and buckets offer additional functionality such as versioning and access policies.

A single application can also use multiple buckets for different purposes. For instance, a medical records app might use two: one for private client data and another, publicly readable, for whitepapers.
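
As a rough Terraform sketch of that two-bucket layout (the bucket names here are hypothetical):

resource "aws_s3_bucket" "client_records" {
  bucket = "medical-app-client-records" # stays private; buckets are private by default
}

resource "aws_s3_bucket" "whitepapers" {
  bucket = "medical-app-whitepapers" # opened to the public later via a bucket policy
}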

S3 operates as an object-based storage service, treating each file as an object. These objects hold metadata such as names, sizes, dates, and other associated details.

S3 Storage Types

S3 offers several storage classes for different use cases; this guide covers the three you will encounter most often.

S3 Standard

S3 Standard is the default storage class: objects land there unless you specify otherwise. It offers high performance, durability, and availability.

Opt for S3 Standard when handling data that requires frequent access.

S3 Infrequent Access

S3 Infrequent Access presents a cost-effective alternative to the standard plan for data storage. It’s suitable for less frequently accessed data.

Utilize S3-IA for scenarios like backups and disaster recovery, where data access occurs infrequently.

Glacier

Glacier stands as the most budget-friendly storage within S3, specifically tailored for archival purposes. Retrieving data from Glacier might not be as swift as Standard or S3-IA, yet it’s an excellent choice for long-term data archival.

Beyond selecting from these three storage classes, you can establish lifecycle policies within S3. This allows automatic transitioning of files to S3-IA or Glacier based on specified time periods.
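
As a minimal sketch of such a policy, assuming a bucket like the one we create later in this guide (the rule name and day thresholds are illustrative):

resource "aws_s3_bucket_lifecycle_configuration" "archive" {
  bucket = aws_s3_bucket.bucket-1.id

  rule {
    id     = "archive-old-objects"
    status = "Enabled"

    filter {} # empty filter applies the rule to every object in the bucket

    # Move objects to Infrequent Access after 30 days...
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    # ...then to Glacier after 90 days.
    transition {
      days          = 90
      storage_class = "GLACIER"
    }
  }
}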

Why Use S3?

It is affordable

S3 is exceptionally cost-effective compared to alternative storage solutions, offering pay-as-you-go pricing with no upfront expenses or setup hassles — just plug-and-play.

Furthermore, S3 provides a Free tier that includes 5GB storage, 20,000 GET Requests, 2,000 PUT, COPY, POST, or LIST Requests, and 15GB Data Transfer per month for the first year.

By utilizing S3, you can steer clear of paying for unnecessary space or bandwidth.

It is scalable

S3 seamlessly adapts to your application’s growth. As you pay solely for what you utilize, there’s no cap on the data stored in S3.

This scalability proves invaluable in various scenarios, particularly unexpected spikes in user growth. With S3, there’s no need to purchase additional space — it accommodates your needs effortlessly.

Versioning advantage

Versioning keeps multiple copies of a file and tracks its changes over time, which is especially useful when managing sensitive data.

Enabling versioning in S3 also lets you recover inadvertently deleted files. However, be aware that every overwrite stores another copy of the document, which increases storage costs and affects read/write requests.

When incorporating versioning into your application, consider these implications.

By default, S3 has versioning disabled, but you can activate it via the AWS Console (or, as we do later in this guide, via Terraform).
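
If versioning costs become a concern, a lifecycle rule can expire old versions automatically. A sketch, with the 60-day window being illustrative (a bucket takes only one lifecycle configuration, so in practice this rule would sit alongside any transition rules):

resource "aws_s3_bucket_lifecycle_configuration" "versions" {
  bucket = aws_s3_bucket.bucket-1.id

  rule {
    id     = "expire-old-versions"
    status = "Enabled"

    filter {} # apply to every object in the bucket

    # Permanently delete noncurrent versions 60 days after they are superseded.
    noncurrent_version_expiration {
      noncurrent_days = 60
    }
  }
}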

Durability

Data durability is often undervalued, yet it is critical: data loss incidents are common across the industry, and durability is a pivotal consideration when building enterprise software.

S3 has a robust storage infrastructure ensuring high durability. By storing data redundantly across multiple facilities, S3 safeguards it against system failures, and it conducts routine integrity checks to detect data corruption.

With 99.999999999% durability (the famous "eleven 9s") and 99.99% object availability over a given year, S3 is a reliable choice for data protection and accessibility.

S3 Use Cases

Static website hosting

A static website serves fixed content, in contrast to dynamic websites, which process user input on the server; S3 is a natural host for the former.

With the rise of single-page applications (SPAs), S3 becomes an ideal host, often at very low cost. Frameworks like React and Angular handle user input in the browser, so SPAs that interact with third-party APIs are perfectly suited to hosting on S3.

Furthermore, S3's website endpoints support redirect and routing rules, and you can front a bucket with a custom domain for a personalized touch.

Analytics

Performing queries directly on S3 data eliminates the need to move your data to a separate analytics platform, making S3 a solid foundation for analytics applications.

S3 provides various options such as S3 Select, Amazon Athena, and Amazon Redshift Spectrum. These services can also be combined with AWS Lambda, enabling dynamic data processing seamlessly.
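
As a small, hypothetical sketch of the Terraform side, an Athena database can be pointed at a bucket for query results (table definitions and the queries themselves live outside Terraform):

resource "aws_athena_database" "analytics" {
  name   = "site_analytics"          # hypothetical database name
  bucket = aws_s3_bucket.bucket-1.id # bucket where Athena stores query results
}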

File sharing

Amazon S3 also works as an economical file-sharing solution. By setting custom permissions with flexible security policies, buckets can be configured for diverse customer needs. Additionally, S3 offers Transfer Acceleration to speed up large file transfers over long distances.
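
Enabling transfer acceleration is a one-liner in Terraform. A sketch with a hypothetical bucket name (acceleration requires a bucket name without periods, so it would not work on the www.example.com-style bucket we create below):

resource "aws_s3_bucket_accelerate_configuration" "fast_uploads" {
  bucket = "my-file-sharing-bucket" # hypothetical; must contain no dots
  status = "Enabled"
}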

Prerequisites

  • AWS account
  • A purchased domain
  • Terraform installed

Setting up our Terraform components

providers.tf

Terraform relies on plugins known as providers to engage with remote systems. While our focus is currently on AWS, it’s important to note that Terraform can seamlessly interact with various other cloud services like Azure and Google Cloud.

terraform {
  required_version = ">= 1.0" # pin Terraform itself as well as the provider

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.2.0"
    }
  }
}

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

Here, we specify the Terraform version along with the AWS provider version to ensure operational continuity despite potential future breaking changes in Terraform or the AWS provider. This configuration precedes setting up the primary aws provider utilized across most of our components.

variable.tf

variable "domain_name" {
type = string
description = "Name of the domain"
}
variable "bucket_name" {
type = string
description = "Name of the bucket."
}
variable "region" {
type = string
default = "us-west-2"
}
variable "access_key" {
type = string
}
variable "secret_key" {
type = string
}

terraform-dev.tfvars

This is another variables file that you normally would not commit to GitHub or any other source control. Replace the values with your own domain name and bucket name.

domain_name = "example.com"
bucket_name = "example.com"

s3-bucket.tf

resource "aws_s3_bucket" "bucket-1" {
bucket = "www.${var.bucket_name}"
}
data "aws_s3_bucket" "selected-bucket" {
bucket = aws_s3_bucket.bucket-1.bucket
}

s3-acl.tf

resource "aws_s3_bucket_acl" "bucket-acl" {
bucket = data.aws_s3_bucket.selected-bucket.id
acl = "public-read"
depends_on = [aws_s3_bucket_ownership_controls.s3_bucket_acl_ownership]
}

s3-versioning.tf

resource "aws_s3_bucket_versioning" "versioning_example" {
bucket = data.aws_s3_bucket.selected-bucket.id
versioning_configuration {
status = "Enabled"
}
}

s3-bucket-policy.tf

We set up a policy that gives public read access to the bucket. New buckets block public access by default, so we first relax the public access block and set object ownership before the ACL and policy can take effect (hence the depends_on chain).

resource "aws_s3_bucket_ownership_controls" "s3_bucket_acl_ownership" {
  bucket = data.aws_s3_bucket.selected-bucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
  depends_on = [aws_s3_bucket_public_access_block.example]
}

resource "aws_s3_bucket_public_access_block" "example" {
  bucket = data.aws_s3_bucket.selected-bucket.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "bucket-policy" {
  bucket = data.aws_s3_bucket.selected-bucket.id
  policy = data.aws_iam_policy_document.iam-policy-1.json
}

data "aws_iam_policy_document" "iam-policy-1" {
  statement {
    sid    = "AllowPublicRead"
    effect = "Allow"
    resources = [
      "arn:aws:s3:::www.${var.bucket_name}",
      "arn:aws:s3:::www.${var.bucket_name}/*",
    ]
    actions = ["s3:GetObject"]
    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }

  depends_on = [aws_s3_bucket_public_access_block.example]
}

s3-website.tf

Now we set the website configuration for the bucket: index.html as the index page, a fallback error document, and an optional routing rule.

resource "aws_s3_bucket_website_configuration" "website-config" {
bucket = data.aws_s3_bucket.selected-bucket.bucket
index_document {
suffix = "index.html"
}
error_document {
key = "404.jpeg"
}
# IF you want to use the routing rule
routing_rule {
condition {
key_prefix_equals = "/abc"
}
redirect {
replace_key_prefix_with = "comming-soon.jpeg"
}
}
}
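
To see where the site ends up being served from, you can expose the website endpoint as an output (a small addition on top of the files above):

output "website_endpoint" {
  description = "S3 static website endpoint for the www bucket"
  value       = aws_s3_bucket_website_configuration.website-config.website_endpoint
}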

s3-object-upload.tf

We now upload the index page and some images from a local uploads/ directory to our bucket.

resource "aws_s3_object" "object-upload-html" {
for_each = fileset("uploads/", "*.html")
bucket = data.aws_s3_bucket.selected-bucket.bucket
key = each.value
source = "uploads/${each.value}"
content_type = "text/html"
etag = filemd5("uploads/${each.value}")
acl = "public-read"
}
resource "aws_s3_object" "object-upload-jpg" {
for_each = fileset("uploads/", "*.jpeg")
bucket = data.aws_s3_bucket.selected-bucket.bucket
key = each.value
source = "uploads/${each.value}"
content_type = "image/jpeg"
etag = filemd5("uploads/${each.value}")
acl = "public-read"
}

To run Terraform:

terraform init

terraform plan -var-file terraform-dev.tfvars

terraform apply -var-file terraform-dev.tfvars
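
The prerequisites list a purchased domain; if it is hosted in Route 53, an alias record can point it at the bucket's website endpoint. A hedged sketch, assuming the hosted zone already exists and matches var.domain_name:

data "aws_route53_zone" "primary" {
  name = "${var.domain_name}."
}

resource "aws_route53_record" "www" {
  zone_id = data.aws_route53_zone.primary.zone_id
  name    = "www.${var.domain_name}"
  type    = "A"

  alias {
    name                   = aws_s3_bucket.bucket-1.website_domain
    zone_id                = aws_s3_bucket.bucket-1.hosted_zone_id
    evaluate_target_health = false
  }
}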

Reference

https://github.com/Frankpromise/aws-solutions-architect
