Adding Amazon’s CloudFront CDN to your WordPress site for free

Bill Chen
Oct 2, 2022


In the last article, we stood up a WordPress site backed by a cheap EC2 instance. This article describes how to stand up CloudFront, Amazon’s content delivery network (CDN), to cache your static content on edge nodes closer to your website viewers, for free.

Jan 29, 2023 update: added a CloudFront behavior to handle requests to /wp-json/* so that posts can be saved

July 2, 2023 update: added S3 logging configuration and log retention policy

Why use CloudFront? CloudFront caches your website’s static content on servers that are closer to your viewers. This makes your website load faster, since images, CSS, and JavaScript files don’t need to travel as far. For example, suppose your website is hosted in an East Coast data center. A viewer in London could load your website from data cached in a European data center; that’s a lot closer than going across the pond!

In addition, CloudFront offers nice features like increased reliability and availability (objects are cached in multiple places), HTTP-to-HTTPS redirection, content compression, and an effective proxy layer between viewers and your servers.

Here’s a high level diagram of what we’ll build:

CloudFront Architecture (tinyurl.com/ysr55wz4)

In a nutshell, the browser calls Route 53 to get the IP address for our website, example.com. Instead of the Elastic IP of our spot instance (per our previous article), Route 53 returns the closest CloudFront IP. When the browser makes an https request to that IP, the edge server either returns the cached website or calls our spot instance to get the contents and caches it.

Getting baseline performance

As a baseline, let’s run a performance test against my EC2 instance. I hosted my instance in AWS’ us-east-1 region, on the East Coast. Using https://tools.pingdom.com/ and setting the test location to Sydney, Australia, I get about a 3.79s load time:

It takes 3.79s to load a page from Sydney, Australia
From Virginia to Sydney is a long way to travel! (image generated from www.openstreetmap.org)

In the next sections, we’ll add CloudFront edge servers to reduce that load time.

Before that though, we need to understand 2 important things: Wordpress redirects and TLS certificates.

Prep work 1: Wordpress redirects

Before configuring our CloudFront distribution, it’s important to note WordPress’ redirect behavior. When a viewer makes an http(s) request to a WordPress homepage, the request’s Host header contains the domain name being requested. WordPress checks whether Host matches WordPress’ Site Address. If they match, groovy. If not, WordPress returns a redirect response (a 301 status code), like below. This behavior exists to keep WordPress permalinks working even if the site changes domain names.

HTTP/2 301
x-powered-by: PHP/7.3.22
content-type: text/html; charset=UTF-8
x-ua-compatible: IE=edge
x-redirect-by: WordPress
location: https://example.com/
content-length: 0
date: Wed, 10 Aug 2022 14:55:39 GMT
server: lighttpd/1.4.64
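You can observe this redirect from the command line. Here’s a sketch (the mismatched host name below is made up; WordPress’ canonical-redirect logic should answer with a 301 pointing at the Site Address, like the response above):

```shell
# Send a request whose Host header doesn't match the Site Address.
# -s silences progress output, -I fetches response headers only.
curl -sI https://example.com/ -H "Host: not-my-site.example"
```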

But with CloudFront, this can be a problem. A redirect causes the browser to make a new request to the new location. By default, CloudFront forwards requests to our WordPress server using the host name of the edge server rather than the viewer’s, so WordPress redirects, the browser follows the redirect back through CloudFront, and the cycle repeats endlessly:

Wordpress redirect resulting in a loop (tinyurl.com/mryp9ue8)

We’ll avoid the redirect problem by configuring our CloudFront distribution to forward the Host header.

Prep work 2: updating our TLS certificate for CloudFront

In the previous article, we used Let’s Encrypt as a Certificate Authority to provide us a signed cert, which we use to encrypt https connections to our spot instance.

When we add CloudFront between our viewers and our origin (aka spot instance), we’ll encrypt between the Browser (aka viewer) <-> CloudFront edge server, and between the CloudFront edge server <-> our spot instance:

Encrypted connections between Browser, CloudFront, and EC2

If we were to configure CloudFront to use the same Let’s Encrypt-signed cert as our spot instance, CloudFront would return a 502, per the CloudFront TLS requirement:

One of the domain names in the certificate must match the domain name that you specify for Origin Domain Name. If no domain name matches, CloudFront returns HTTP status code 502 (Bad Gateway) to the viewer.

So, we need to add an alternative name to the certificate. One obvious candidate is the Elastic IP domain name we set up in the last article. However, Let’s Encrypt won’t issue certs for EC2 domain names, since they’re ephemeral.

Instead, we’ll create an A record in Route 53 that points a subdomain, for example internal.example.com, at the Elastic IP of our EC2 instance.
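If you prefer the CLI to the console, the record can be created with a sketch like this (the hosted zone ID and Elastic IP are placeholders; find yours with `aws route53 list-hosted-zones` and the EC2 console):

```shell
# Point internal.example.com at the instance's Elastic IP (placeholder values).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "internal.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```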

Now, we can add internal.example.com as an alternative name in the certificate. From the last article, we use certbot, like so:

$ certbot certonly --webroot -d "example.com,internal.example.com"

Don’t forget to update the merged.pem that’s used by lighttpd:

cat /etc/letsencrypt/live/example.com/privkey.pem /etc/letsencrypt/live/example.com/cert.pem > /etc/letsencrypt/live/example.com/merged.pem

Restart lighttpd (doas rc-service lighttpd restart), and you should be able to verify that your cert covers the subdomain with:

curl -i https://internal.example.com
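You can also inspect the Subject Alternative Names in the cert file directly with openssl. Here’s a self-contained sketch using a throwaway self-signed cert as a stand-in for /etc/letsencrypt/live/example.com/cert.pem (requires OpenSSL 1.1.1+):

```shell
# Generate a throwaway cert with both names as SANs (stand-in for the real cert).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:internal.example.com"

# Print the SAN extension; both DNS names should be listed.
openssl x509 -in cert.pem -noout -ext subjectAltName
```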

Now, we import this new cert into ACM (AWS Certificate Manager) so that CloudFront can use it when encrypting calls between viewers and the edge servers. This AWS article describes how to import certs. For the Let’s Encrypt cert, paste the contents of cert.pem, privkey.pem, and fullchain.pem into “Certificate body”, “Certificate private key”, and “Certificate chain”, respectively. Note that you’re sharing your private key with AWS so that AWS resources (e.g. CloudFront) can encrypt data. This is necessary, but something to keep in mind.

Importing the Let’s Encrypt cert into ACM
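The same import can be scripted. A sketch with the AWS CLI (the file arguments mirror the console fields; note that CloudFront only uses certs from the us-east-1 region):

```shell
# Import the Let's Encrypt files into ACM. CloudFront requires the cert
# to live in us-east-1. Some setups pass chain.pem (intermediates only)
# as the chain instead of fullchain.pem.
aws acm import-certificate \
  --region us-east-1 \
  --certificate fileb:///etc/letsencrypt/live/example.com/cert.pem \
  --private-key fileb:///etc/letsencrypt/live/example.com/privkey.pem \
  --certificate-chain fileb:///etc/letsencrypt/live/example.com/fullchain.pem
```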

OK! We’re now ready to create our CloudFront distribution!

Alternatively, if we had used a Load Balancer (à la this reference architecture), we could have had AWS Certificate Manager (ACM) generate a cert instead of importing the Let’s Encrypt one. A generated cert (and its private key) can be shared directly with the Load Balancer. However, adding a Load Balancer increases our costs (in exchange for potentially more availability). It’s all about the tradeoffs between cost, availability, and expected load that you’re comfortable with.

Create a CloudFront distribution

Now that ACM has our certificate, we can create a CloudFront distribution. In the CloudFront service on the AWS console, click “Create Distribution”. Set the origin domain to internal.example.com, the one you just created. Set the origin protocol policy to “HTTPS only” to enforce https-only connections between CloudFront and our origin (aka the EC2 instance).

For the viewer behavior, set “Redirect http to https”. Allow all HTTP methods (GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE). For the Cache settings, leave the defaults. We’ll update these later.

Now tap the “Create distribution” button to create your CloudFront distribution.

Create Cache and Origin request policies

If you click on the Behaviors tab for your Distribution, you’ll see a default behavior covering all paths. We’re going to update these Behaviors so that static content (e.g. images, css, etc.) is cached longer, while dynamic content (e.g. admin pages, the homepage’s news feed, etc.) is cached briefly, or not at all:

The behaviors we want to define for our Distribution

Notice the Cache and Origin request policies. Cache policies define how long requests are cached in CloudFront; as noted above, static content should be cached longer than dynamic content.

Origin request policies specify the values in viewer requests that CloudFront includes in origin requests (e.g. HTTP headers, URL query strings, and cookies). This is where we’ll ensure that the viewer’s Host in the request header is passed to our origin.

Create the following Cache policy:
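As an illustrative CLI sketch of what such a policy looks like (the name and TTLs below are made up; adjust them to your needs), here is a cache policy for dynamic WordPress pages with short TTLs and cookies and query strings in the cache key:

```shell
# Illustrative cache policy for dynamic WordPress pages: short TTLs,
# with all cookies and query strings included in the cache key.
aws cloudfront create-cache-policy --cache-policy-config '{
  "Name": "wp-dynamic",
  "Comment": "Short TTLs for dynamic WordPress pages",
  "MinTTL": 0,
  "DefaultTTL": 300,
  "MaxTTL": 600,
  "ParametersInCacheKeyAndForwardedToOrigin": {
    "EnableAcceptEncodingGzip": true,
    "EnableAcceptEncodingBrotli": true,
    "HeadersConfig": {"HeaderBehavior": "none"},
    "CookiesConfig": {"CookieBehavior": "all"},
    "QueryStringsConfig": {"QueryStringBehavior": "all"}
  }
}'
```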

Now create the following Origin policies:
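The key part is forwarding the viewer’s Host header, which is what breaks the redirect loop described earlier. A CLI sketch (the policy name is made up):

```shell
# Origin request policy that forwards the viewer's Host header (plus all
# cookies and query strings) to the WordPress origin.
aws cloudfront create-origin-request-policy --origin-request-policy-config '{
  "Name": "wp-forward-host",
  "Comment": "Forward Host so WordPress does not redirect",
  "HeadersConfig": {
    "HeaderBehavior": "whitelist",
    "Headers": {"Quantity": 1, "Items": ["Host"]}
  },
  "CookiesConfig": {"CookieBehavior": "all"},
  "QueryStringsConfig": {"QueryStringBehavior": "all"}
}'
```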

Then, create Behaviors referencing each policy. The end result is the list of Behaviors shown earlier, repeated here:

The behaviors we want to define for our Distribution

Note that Managed-CachingOptimized is an AWS managed caching policy, so there’s no need to create a custom one.

For reference, AWS defines the CloudFront policies that WordPress needs in this article. This IN-based support link also seems to have a lot of info on configuring CloudFront for WordPress.

Update Route 53

Last but not least, we need to update our DNS A alias record for example.com to route to our CloudFront Distribution’s domain name. Note that Route 53 lets you create alias records at the domain apex (example.com itself) and doesn’t charge for queries to CloudFront! And that’s it!
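In CLI form, the alias record looks roughly like this (the hosted zone ID and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS assigns to all CloudFront alias targets):

```shell
# Alias the apex record to the CloudFront distribution's domain name.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```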

Results

Finally, let’s run those performance tests again and compare the results:

Before

It takes 3.79s to load a page from Sydney, Australia

After

We can see that load time decreased by about 90% (from 3.79s to 0.368s). Hopefully this helps you achieve similar improvements!

Bonus: Logging

This shouldn’t be an afterthought, but a necessity: log your CloudFront requests! Logs help you understand how users, benign or malicious, use your CDN. For example, is someone trying to brute-force your admin password? Enabling CloudFront logging gives you insight into where those requests come from, their IPs, and more:

Enable Standard logging to an S3 bucket in the distribution’s configuration

If you’re worried about S3 costs, also set a retention policy to delete logs after a period of time. Three months is typically long enough to investigate what happened without retaining logs forever:

Create a lifecycle rule to autodelete objects (logs) after 3 months
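The lifecycle rule can also be applied from the CLI. A sketch, assuming the logs land in a bucket named my-cloudfront-logs:

```shell
# Expire CloudFront log objects after ~3 months (90 days).
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-cloudfront-logs \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-cloudfront-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 90}
    }]
  }'
```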


Bill Chen

I'm a parent and engineering manager. I've coached senior engineers at tech companies, like Apple and Tesla, as well as aspiring coders, 8 years and older.