Hosting an HTTPS static website on Amazon S3 with CloudFront and Route 53

If your deadline is tight, you have no experience with AWS, and your DevOps engineer was hit by a bus, then this article is for you!

Matthew Manuel
9 min read · Oct 25, 2017

I need to quickly configure a production-ready static website with minimal hassle and without tinkering with server management. I’ve decided to use a combination of Amazon S3 and CloudFront due to the ease of configuration, or so I’ve been told.

After following a few guides, I was able to get it up and running quickly. However, there were a few gotchas I faced along the way, so I decided to write this guide assuming you have no experience with AWS, and to hopefully clear up any confusion I ran into with what is a simple process with a lot of moving parts.

There are no requirements besides:

  1. An Amazon AWS account. It’s free.
  2. A domain name you’ve purchased from Amazon Route 53. If you own the domain from somewhere else (e.g. Crazy Domains), you can transfer it to Route 53 for a small fee.
  3. The AWS CLI installed on your machine.

We’ll be covering Amazon S3, Amazon Route 53, and Amazon CloudFront with SSL configuration. How it works is that when a user visits a domain and makes a request, Route 53 will route that request to our CloudFront distribution, which is a cached copy of our website that is originally stored in an Amazon S3 bucket.

If none of that makes sense, I’ll explain how each service works in more detail below.

Note: We’ll be using example.com as our custom domain name for demonstration’s sake.

Amazon S3

Amazon S3 is a web storage service that provides low-latency data storage infrastructure at very low cost, making it a cost- and time-effective solution for cloud storage.

It also integrates nicely with Amazon CloudFront, a service that’ll provide low latency access to our site around the world and scalability for free (to an extent).

Let’s start by creating a new bucket. In the AWS Dashboard, go to S3 (under Storage services) and click on Create Bucket.

We will be prompted to enter our bucket name and region; you can just pick the region closest to you. We’ll leave the properties and permission settings at their defaults for now.
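
If you prefer the command line, the same step can be done with the AWS CLI. This is just a sketch; example.com and ap-southeast-2 are placeholders for your own bucket name and region.

# Create the bucket (bucket name and region are placeholders)
aws s3 mb s3://example.com --region ap-southeast-2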

Keep in mind that bucket names share a global namespace, and you may have heard from somewhere that your bucket name must be the same as your domain name in order for static hosting to work. The official AWS documentation says:

The bucket names must match the names of the website that you are hosting. For example, to host your example.com website on Amazon S3, you would create a bucket named example.com. To host a website under www.example.com, you would name the bucket www.example.com. In this example, your website will support requests from both example.com and www.example.com.

This means you will be unable to link the custom domain with your Amazon S3 bucket if the name is already taken.

This is, however, a non-issue, as we will integrate it with Amazon CloudFront, which can be configured to use an S3 bucket of any name. With CloudFront, users that visit our domain will fetch data directly from the CloudFront distribution, which in turn caches content from our S3 bucket.

After the bucket is created, let’s sync our website through the AWS CLI, where build/ is the folder containing your root index.html file and example.com is the name of your S3 bucket:

aws s3 sync build/ s3://example.com
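
If you redeploy later, the same command uploads only the files that changed. Adding the --delete flag (optional, and assuming you want the bucket to mirror build/ exactly) also removes objects that no longer exist locally:

# Sync and remove objects that are no longer present in build/
aws s3 sync build/ s3://example.com --delete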

We will then need to configure our bucket so it is publicly available for CloudFront to access. We can do this by pasting the following policy statement into the Bucket Policy editor.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}

What this policy does is allow (“Effect”: “Allow”) everyone (“Principal”: “*”) to get objects (“Action”: “s3:GetObject”) from our S3 bucket named example.com (“Resource”: “arn:aws:s3:::example.com/*”).
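
If you’d rather apply the policy from the terminal, a rough equivalent is to save the JSON above to a file (policy.json here is just a placeholder name) and push it with the AWS CLI:

# Attach the bucket policy saved in policy.json to the bucket
aws s3api put-bucket-policy --bucket example.com --policy file://policy.json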

We’ll also need to enable static website hosting under Properties.

Leave the index document and error document at the default of index.html. Ensure there is an index.html file in your root folder unless you have a reason to do otherwise.
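
The same setting can be applied with the AWS CLI if you prefer; a minimal sketch, assuming index.html serves as both the index and error document:

# Enable static website hosting on the bucket
aws s3 website s3://example.com/ --index-document index.html --error-document index.html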

Take note of the endpoint, which we’ll need to configure in CloudFront. If your S3 bucket is configured properly, users can now visit your endpoint and view your website over the web.

We’re done with S3 configuration!

Amazon CloudFront

Amazon CloudFront acts as the content delivery network for our Amazon S3 bucket. It caches content at edge locations in data centers around the globe and acts as a global load balancer, ensuring users access our website from the data center closest to them with the lowest latency. It essentially makes our website scalable and fast in most places worldwide at virtually no cost (to an extent).

In the AWS Dashboard, go to CloudFront, which is located under Networking & Content Delivery. We will create a new CloudFront distribution and select Get Started under Web distribution.

We will be prompted to input our CloudFront distribution settings. For now we only want to input our Origin Domain Name and our Alternate Domain Name.

Origin Domain Name: Here we will link our CloudFront Distribution with our Amazon S3 bucket.

Double-clicking the text field will pre-populate it with Amazon S3 suggestions, but these may be incorrect. You want to use the actual endpoint shown under Static Website Hosting for your S3 bucket, without the http:// prefix, e.g. xxxxx.s3-website-eu-west-1.amazonaws.com.

Alternate Domain Names (CNAMEs): Input the custom domain name that you own, e.g. example.com.

Click on Create Distribution to finish. It’ll take a few minutes before it’s ready to be used.

Once completed, visit your CloudFront distribution’s domain name to check that data is being pulled through CloudFront rather than directly from Amazon S3. You can verify by checking Usage under Reports & Analytics to see if any data shows up once you’ve visited the website.
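
A quick way to confirm from the terminal that responses are coming through CloudFront is to inspect the response headers; the distribution domain below is a placeholder for your own xxxxxxxx.cloudfront.net value.

# Look for "Via: ... cloudfront" and "X-Cache: Hit from cloudfront" (or "Miss from cloudfront") in the output
curl -I https://xxxxxxxx.cloudfront.net/index.html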

We are done with most of the CloudFront configuration, as the default settings suit the vast majority of use cases. We’ll return to this for the final configuration with SSL.

Amazon Route 53 configuration

Amazon Route 53 is used to register domain names and route internet traffic for our domain name to our resources, which would be either our CloudFront distribution or our S3 bucket. If you bought your domain from another site, you can transfer domain ownership to Route 53 for a small fee.

As Amazon CloudFront is already configured to cache contents from our S3 Bucket, we only need to configure our domain on Route 53 to point to our CloudFront web distribution.

If you haven’t already, create a new Hosted Zone in Route 53 with your domain name. Leave the type at the default, Public Hosted Zone.

Once created, select your domain in Hosted Zones and click on Create Record Set. We will be creating an Address record (A — IPv4 address) type record set with Alias set to Yes.

For the name, leave it blank, as we are creating the record set for the main domain. Input your CloudFront distribution domain name (it should look like xxxxxxxx.cloudfront.net) into the Alias Target field and leave Routing Policy and Evaluate Target Health at their default values of Simple and No. The Alias Hosted Zone ID should autofill if configured correctly.

After creating the record set, we are done with Amazon Route 53 configuration.
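
For reference, the same alias record can be created through the AWS CLI. This is only a sketch: the hosted zone ID and CloudFront domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets.

# record.json — an alias A record pointing example.com at the CloudFront distribution
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "xxxxxxxx.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}

# Apply it (replace ZXXXXXXXXXXXXX with your hosted zone's ID)
aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file://record.json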

SSL Certificate Configuration

We’ll need to serve our website over HTTPS with an SSL certificate, which ensures data between the browser and the server is encrypted so intruders cannot read or modify the contents of the packets sent between your computer and the server. It’s also a requirement for modern browser features such as web push notifications, and search engines will penalise your rankings if your site isn’t served over HTTPS.

This requires a certificate, which can be issued for free using AWS Certificate Manager (ACM). This initially didn’t work for me out of the box, so I ended up configuring it with a paid certificate, which was quite cumbersome to prepare and import into AWS. The free certificate did work after revisiting it; I believe you have to wait a while before the SSL certificate becomes usable.

If you must configure it with a paid certificate that you’ve bought, I saved my notes on a gist. But I can’t think of a single reason why you would (let me know if you disagree), and I suggest that you don’t.

To request a certificate, just click on ‘Request or Import a Certificate with ACM’ within the Edit Distribution screen in CloudFront and it will take you directly to certificate creation.

From here the process of requesting a certificate is straightforward. Just add all the domains you want to attach to the certificate, including any subdomains such as www.example.com if you wish.

Then click on Review and Request, and it’s just a matter of verifying you own the domain by clicking the link in the email that’s sent out to the domain name owner.
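
If you’d rather request it from the terminal, a rough equivalent with the AWS CLI is below. Note that certificates used with CloudFront must be requested in the us-east-1 (N. Virginia) region; the domain names are placeholders.

# Request a certificate for the apex domain and www subdomain, validated by email
aws acm request-certificate --domain-name example.com --subject-alternative-names www.example.com --validation-method EMAIL --region us-east-1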

After that’s done, return to your CloudFront distribution settings, select Custom SSL Certificate, and choose the newly verified certificate.

Once set, you should be able to visit your domain using the https prefix, e.g. https://example.com, and your site should appear with a secure icon in Google Chrome.

Other options

You have done the bulk of the work in making your site secure. Perhaps we want to tweak the behaviour further, such as redirecting all HTTP requests to HTTPS.

We can do that by changing the Viewer Protocol Policy to Redirect HTTP to HTTPS in the distribution’s Cache Behavior settings.

Now every time we visit the domain over plain HTTP, we’ll be redirected to the HTTPS version of the website.

There are other neat things we can do, such as redirecting subdomain URLs to our main domain. In our example we want to redirect all www requests to our main SSL-certified domain name.

The quickest solution is to create an S3 bucket with the subdomain as the name and set its Static website hosting property to redirect all requests to your main domain.
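
The redirect itself can be configured either in the console (Static website hosting → Redirect requests) or with the AWS CLI; a minimal sketch, assuming www.example.com is the redirect bucket and example.com is your main domain:

# Redirect every request on the www bucket to the main HTTPS domain
aws s3api put-bucket-website --bucket www.example.com --website-configuration '{"RedirectAllRequestsTo":{"HostName":"example.com","Protocol":"https"}}'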

I haven’t figured out the alternative if the bucket name is already taken along with some other gaps.

In Summary

There are still quite a few gaps I need to figure out. The need for this blog initially stemmed from getting lost in the process, particularly configuring an unbundled paid certificate instead of taking the much easier path of using Amazon’s ACM.

But if you got this far, you’ve probably done enough to satisfy your project and buy yourself time to figure out the kinks.

If you were stuck, or have any questions, comment below!
