Speed-up DevOps with CloudFront Functions

Štefan Pe
Slido developers blog
6 min read · Aug 11, 2022

A sneak peek into the future of infrastructure as dynamic code

Disclaimer: All examples are shown as Terraform code

In the last few months AWS introduced a lot of new features for their edge network services, such as TLSv1.3, Server Timing Headers, True IP connection port, ECDSA certificate for viewer connections, CloudFront Functions and many more. In this post, I would like to take you through an example of a pretty handy use-case of the last item in the list - CloudFront Functions.

We will be delivering multiple static app versions with CloudFront Functions.

CloudFront Functions

If you happen to be here, I suspect you already know what CloudFront is, but for those who don't: AWS CloudFront is "a content delivery network (CDN) service built for high performance, security, and developer convenience", as AWS puts it. In other words, it is a network of servers sprinkled around the world that makes the webpages you (or your customers) open in a browser load much faster, sometimes cutting thousands of kilometers (and milliseconds) of unnecessary latency.

What CloudFront does, explained in the simplest possible form of a very complex mechanism, is this: the first time it receives a request, it loads the content from the origin and saves it to its cache for a configurable amount of time. When another user fetches the same page, it loads much faster because it is served straight from a nearby CloudFront edge server's cache.

CloudFront Functions are ideal for lightweight processing of web requests.

AWS CloudFront Functions add some computing power to the edge. This means that instead of simply serving a request, CloudFront can add some magic to the processing, too.

Here’s what CloudFront Functions can be used for:

  • Cache-key manipulations and normalization
  • HTTP header manipulation
  • Access authorization
  • URL rewrites and redirects

Let’s have a look into the latter, URL rewrites and redirects with minor manipulation of HTTP headers and let’s get the party started.

This means that by running:

curl -i https://example.com/?feature=feature-name

you can get a different response and HTTP headers than you would get by running:

curl -i https://example.com/

CloudFront Functions will take the URL and, thanks to a tiny piece of code, adjust it accordingly and add or modify HTTP headers. Once this is done, the modified request is passed on to CloudFront, which serves it from cache or from a particular origin.
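To make that concrete, here is a minimal sketch of a viewer-request function that does both: a URL rewrite and a header manipulation. It is a hypothetical example (the default document rewrite and the x-edge-processed header name are made up for illustration), not the function we will deploy later:

```javascript
// Hypothetical viewer-request function: rewrites the URL and adds a
// header before CloudFront consults its cache or the origin.
function handler(event) {
    var request = event.request;

    // URL rewrite: serve the root page from a default document.
    if (request.uri === '/') {
        request.uri = '/index.html';
    }

    // Header manipulation: tag the request for the origin.
    request.headers['x-edge-processed'] = { value: 'true' };

    return request;
}
```

The function receives the viewer request in `event.request` and whatever it returns is what CloudFront continues processing with.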

Real use-case scenario

Tame staging environments with CloudFront Functions

Simplified architecture, serving static code, for the purpose of this example.

Developers develop their features in a local environment, which is often tightly connected to the company's dev environment. Once they are done, the next step is to hook the feature up to a functional API and pass it on to testers - at this point without merging the code into the main development branch.

Let’s begin with staging environment number one. In case you have more developers working on the same project, developing several features at once, your staging environment can look like this:

Developer and tester queue to use the same staging environment to prove/test their feature.

If you have a number of developers working on and trying out their new features or bug fixes on staging environments, you would probably like to have as many free environments as possible. This is a common scenario, particularly in the front-end world. You can, of course, provide as many environments as they request, but this will create additional overhead for your teams and will also take some precious resources from you (time = money).

Many environments to serve developers and testers needs.

But what if you could let all your developers share a single environment?

…and differentiate between branches with one query parameter? Yes, one tiny query parameter can offload your team and remove one unpleasant bottleneck. This is where AWS CloudFront Functions come in; as mentioned above, I deeply believe that this pretty small and simple feature from AWS has many more use-cases than one might think at first [1].

One environment to rule them all — thanks to CloudFront Functions.

So let's begin. First, create your CloudFront distribution [2] and focus on the function_association block and the aws_cloudfront_response_headers_policy.this.id reference, since the rest is a common CloudFront distribution configuration:

resource "aws_cloudfront_distribution" "this" {
  enabled             = true
  is_ipv6_enabled     = true
  aliases             = var.aliases
  price_class         = var.price_class
  wait_for_deployment = false
  tags                = var.tags

  origin {
    domain_name = "foo.s3.amazonaws.com"
    origin_id   = "s3"
    origin_path = "/application"

    s3_origin_config {
      origin_access_identity = "origin-access-identity/cloudfront/${var.cf_oai_id}"
    }
  }

  default_cache_behavior {
    target_origin_id           = "s3"
    allowed_methods            = ["GET", "HEAD"]
    cached_methods             = ["GET", "HEAD"]
    viewer_protocol_policy     = "redirect-to-https"
    compress                   = true
    cache_policy_id            = var.cache_policy.managed_caching_optimized
    realtime_log_config_arn    = var.realtime_log_config_arn
    response_headers_policy_id = aws_cloudfront_response_headers_policy.this.id

    function_association {
      event_type   = "viewer-request"
      function_arn = aws_cloudfront_function.this.arn
    }
  }

  viewer_certificate {
    ssl_support_method       = "sni-only"
    minimum_protocol_version = var.min_protocol_version
    acm_certificate_arn      = var.certificate_arn
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  logging_config {
    bucket = var.log_bucket
    prefix = var.log_prefix
  }
}

Then, create an aws_cloudfront_function [3] resource within Terraform:

resource "aws_cloudfront_function" "this" {
  name    = var.aliases[0]
  runtime = "cloudfront-js-1.0"
  # file() is used here since request.js contains no template placeholders;
  # templatefile() would require a second argument with template variables.
  code    = file("${path.module}/request.js")
}

and create the actual function (request.js), which routes requests without a ?feature= query parameter into the /host/ folder and requests with the ?feature= query parameter into a folder within the S3 bucket called /feature/.

This means all requests will go through this logic and end up serving an /index.html file. For special cases, like .css, .png or similar, we apply the endsWith(suffix) function and serve the actual URI of the original request:

function handler(event) {
    var request = event.request;
    var s3_folder = '/host/' + request.headers.host.value;

    if ("feature" in request.querystring) {
        var feature_name = decodeURIComponent(request.querystring.feature.value);
        s3_folder = '/feature/' + feature_name;
    }

    if (['.css', '.js', '.map', '.png', '.svg', '.txt', 'config.html'].some(suffix => request.uri.endsWith(suffix))) {
        request.uri = s3_folder + request.uri;
    } else {
        request.uri = s3_folder + '/index.html';
    }
    return request;
}
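Because the handler is plain JavaScript, you can sanity-check the routing logic locally in Node before deploying. Here is a rough sketch with a standalone copy of the routing logic and a hypothetical mockEvent helper that builds a minimal viewer-request event:

```javascript
// Standalone copy of the routing logic, for local testing outside CloudFront.
function handler(event) {
    var request = event.request;
    var s3_folder = '/host/' + request.headers.host.value;

    if ('feature' in request.querystring) {
        s3_folder = '/feature/' + decodeURIComponent(request.querystring.feature.value);
    }

    if (['.css', '.js', '.map', '.png', '.svg', '.txt', 'config.html'].some(function (suffix) {
        return request.uri.endsWith(suffix);
    })) {
        request.uri = s3_folder + request.uri;
    } else {
        request.uri = s3_folder + '/index.html';
    }
    return request;
}

// Helper to build a minimal viewer-request event (hypothetical shape,
// covering only the fields the handler reads).
function mockEvent(host, uri, feature) {
    var qs = feature ? { feature: { value: feature } } : {};
    return { request: { uri: uri, querystring: qs, headers: { host: { value: host } } } };
}

console.log(handler(mockEvent('example.com', '/')).uri);
// -> /host/example.com/index.html
console.log(handler(mockEvent('example.com', '/', 'green-feature')).uri);
// -> /feature/green-feature/index.html
console.log(handler(mockEvent('example.com', '/static/app.js', 'green-feature')).uri);
// -> /feature/green-feature/static/app.js
```

This only exercises the routing logic, of course; the event shape on real CloudFront has more fields, but the handler never touches them.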

Then, create an aws_cloudfront_response_headers_policy [4] resource:

resource "aws_cloudfront_response_headers_policy" "this" {
  name = var.aliases[0]

  security_headers_config {
    strict_transport_security {
      access_control_max_age_sec = 31536000
      include_subdomains         = true
      override                   = false
      preload                    = true
    }
  }
}

There are some prerequisites: you or your CI/CD pipeline should upload all the necessary static files (index.html, and so on) to S3. The file tree of the actual S3 bucket should (in this case) look like this:

static-files-bucket (s3 bucket)
├── application/
│   ├── host/
│   │   ├── example.com/
│   │   └── ...
│   ├── feature/
│   │   ├── green-feature/
│   │   ├── blue-feature/
│   │   ├── yellow-feature/
│   │   └── ...
│   └── static/
│       ├── <static-000>.js
│       ├── <static-001>.js
│       ├── <static-002>.js
│       └── ...
├── application-2/
...

The query parameter ?feature is arbitrary and can be replaced with anything that suits your needs: ?team, ?branch, ?date, etc.
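One way to make that switch a one-line change is to keep the parameter name in a single constant. A hypothetical, simplified variation (the asset-suffix handling is dropped for brevity):

```javascript
// Hypothetical variation: the routing query-parameter name lives in one
// constant, so switching from ?feature to ?team or ?branch touches one line.
var ROUTING_PARAM = 'feature'; // change to 'team', 'branch', 'date', ...

function handler(event) {
    var request = event.request;
    var s3_folder = '/host/' + request.headers.host.value;

    if (ROUTING_PARAM in request.querystring) {
        s3_folder = '/feature/' + decodeURIComponent(request.querystring[ROUTING_PARAM].value);
    }

    request.uri = s3_folder + '/index.html';
    return request;
}
```
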

Thanks to this, you are able to scale your free environments without killing your infrastructure, DevOps or SRE teams, and all your developers can play without ever running out of staging environments.

In this way your infrastructure can become an important and dynamic component of your toolset.

We all know there are many ways to deliver this and accommodate the need for staging environments (some create one CloudFront distribution for each new build and destroy it once it is no longer needed, some do it in a completely different way); this is just one of them.

Fresh, simple and handy.

Do you have more ideas what to use CloudFront Functions for?

Do you approach staging environments in a different manner? Share it with us!
