Using Lambda@Edge for Server-Side Rendering

Mohamed Elfiky
Apr 21

At MEDWING we recently launched two new initiatives - wirwollenhelfen and wirbrauchenhilfe - as our response to the corona crisis.

Both of these initiatives had a web presence - a few static pages and a form. These pages were built with React and hosted on S3 and CloudFront. However, we needed to add server-side rendering (SSR) to improve SEO performance.

We first thought of using Next.js, as it is a popular framework. However, it is a big framework with a lot of dependencies, which would be significant overhead for such a simple project. Also, React already supports SSR out of the box.

So we decided not to go with Next.js. Additionally, we wanted to try Lambda@Edge on this project: it avoids server provisioning, scales automatically, simplifies maintenance, and reduces cost.

We had used Terraform to automate infrastructure creation from the beginning of the project, because it lets us track the AWS infrastructure each project uses and keep infrastructure changes in version control. So we continued using Terraform to implement the serverless architecture.

Thanks to AWS Lambda@Edge, we were able to create a Lambda function, associated with the CloudFront distribution that hosts the static content, to handle the SSR at the edge locations.

But how?

Lambda@Edge lets you run Node.js and Python Lambda functions to customize content that CloudFront delivers, executing the functions in AWS locations closer to the user. The functions run in response to CloudFront events. You can use Lambda functions to change CloudFront requests and responses at the following points:

  • After CloudFront receives a request from a viewer (viewer request)
  • Before CloudFront forwards the request to the origin (origin request)
  • After CloudFront receives the response from the origin (origin response)
  • Before CloudFront forwards the response to the viewer (viewer response)
Lambda@Edge diagram

Each event type has its own use cases and limitations; the AWS docs explain them in detail.

For our use case, the origin response event fits best: the website content is mostly static, so it is fine to cache the rendered pages for a while, which saves Lambda invocation time/cost and also improves loading performance.
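For orientation, an origin-response function receives both the request that was forwarded to the origin and the response the origin returned, and whatever it returns is what CloudFront caches and sends back to the viewer. A bare-bones skeleton (not our actual handler) looks like this:

```javascript
// Bare-bones origin-response trigger: inspect the response CloudFront got from
// the origin and either pass it through or replace it with a generated one.
exports.handler = async (event) => {
  const { request, response } = event.Records[0].cf;

  // Decide whether to intervene, e.g. when the origin could not serve a pre-rendered page.
  if (response.status === '404' || response.status === '403') {
    // ...generate and return a replacement response here (see the full handler below)...
  }

  return response; // pass the origin response through unchanged
};
```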

So we started using Lambda@Edge for server-side rendering. But we needed to handle a few things first:

  • Rendering static assets like images, SVG, JS, and CSS directly from S3 without calling the Lambda.
  • Mapping the Lambda@Edge CloudFront request structure to be compatible with the Koa HTTP library.
  • Optimizing the WebPack server build to reuse the existing client build instead of building the client code again with the server build.

Rendering static assets

Although we configured the Koa router to serve static assets, we need to make sure that all asset requests go directly to S3, because:

  • Server-side rendering is useless for static asset requests.
  • S3 should control the caching headers for assets, which our CI sets during the deployment step.

Fortunately, CloudFront gives us the ability to configure the cache behaviors based on path patterns.

So we used this feature to differentiate between page requests that require SSR and asset requests that should go directly to S3.

For example:

CloudFront Distribution Terraform Config
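Roughly, the relevant part of the distribution looks like the sketch below (the bucket reference, origin ID, Lambda resource name, and the static/* path pattern are illustrative, not the exact values):

```hcl
resource "aws_cloudfront_distribution" "website" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket.website.bucket_regional_domain_name
    origin_id   = "s3-website-origin"

    s3_origin_config {
      origin_access_identity = "" # no OAI for the public bucket (illustrative)
    }
  }

  # Asset requests (JS, CSS, images) bypass the Lambda and go straight to S3.
  ordered_cache_behavior {
    path_pattern           = "static/*" # illustrative; matches the client-build assets folder
    target_origin_id       = "s3-website-origin"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"
    compress               = true # see the side note below

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  # Everything else triggers the SSR Lambda on the origin-response event.
  default_cache_behavior {
    target_origin_id       = "s3-website-origin"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"
    compress               = true

    forwarded_values {
      query_string = true
      cookies {
        forward = "none"
      }
    }

    lambda_function_association {
      event_type   = "origin-response"
      lambda_arn   = aws_lambda_function.ssr.qualified_arn
      include_body = false
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```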

In the previous snippet, we configured all requests matching the assets path pattern to go to S3 directly, while all other requests go to the Lambda for server-side rendering.

Side Note:

It’s recommended to enable CloudFront compression for better performance: it decreases asset sizes significantly, which leads to better page speed.


Lambda@Edge and Koa server request mapping

There are some differences between the Lambda@Edge request format and the Koa request format. This is where serverless-http comes in handy: it maps the HTTP request from the Lambda event format to the format of many Node.js frameworks, including Koa.

Unfortunately, serverless-http does not support the Lambda@Edge format so far, but that is easy to work around.

We need to re-format the input to the serverless-http handler to include CloudFront request params, as follows:

Lambda Handler
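A simplified version of such a handler looks roughly like this (the ./server import path, the header handling, and the hard-coded content type are illustrative, not the verbatim code):

```javascript
// Wrap the Koa app with serverless-http and re-shape the CloudFront
// origin-response event into the API Gateway-style event it understands.
const serverless = require('serverless-http');
const querystring = require('querystring');
const zlib = require('zlib');
const app = require('./server'); // the WebPack-built Koa app (illustrative path)

const koaHandler = serverless(app);

exports.handler = async (event, context) => {
  const cfRequest = event.Records[0].cf.request;

  // Map the CloudFront request fields to the event shape serverless-http expects.
  const mappedEvent = {
    path: cfRequest.uri,
    httpMethod: cfRequest.method,
    queryStringParameters: querystring.parse(cfRequest.querystring || ''),
    headers: Object.entries(cfRequest.headers).reduce((acc, [name, values]) => {
      acc[name] = values[0].value;
      return acc;
    }, {}),
    body: cfRequest.body && cfRequest.body.data,
    requestContext: {}, // minimal context to keep serverless-http happy
  };

  const result = await koaHandler(mappedEvent, context);

  // Gzip the rendered HTML so the response stays well under the 1 MB
  // Lambda@Edge limit for generated responses (see the side note below).
  const body = zlib.gzipSync(result.body).toString('base64');

  // Return a CloudFront-formatted response that replaces the origin response.
  return {
    status: String(result.statusCode),
    headers: {
      'content-type': [{ key: 'Content-Type', value: 'text/html' }],
      'content-encoding': [{ key: 'Content-Encoding', value: 'gzip' }],
    },
    body,
    bodyEncoding: 'base64',
  };
};
```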

Side Note:

In the Lambda code, we used GZIP compression on the response body for better performance and also to avoid hitting the Lambda@Edge response size limit, which is 1 MB.
By the way, our current response (body and headers) is ~4 KB after compression, which is well within the limit.


Optimizing WebPack server-build

We are using WebPack to build the server code before deploying it to the Lambda function, because:

  • We are using ES6 in the server code, and we need to bundle it into plain JavaScript that runs on Lambda as a single file, without having to upload Babel and all the node modules to Lambda just to run the code.
  • We need to keep deployments fast: on each deployment, CloudFront replicates the function to all edge locations across the supported regions, so a smaller bundle means faster propagation between the edges.

The catch is that we are using react-scripts to build the React application, and this client build is synced to S3 for static asset caching.
When react-scripts builds the project, it adds a hash to each file name so that the browser cache is invalidated on each new build; by default the hash changes with every build, but you can configure WebPack to derive it from the file content instead.
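For reference, the content-based hash option in a plain WebPack config looks like this:

```javascript
// Content-based hashing: the hash changes only when a file's content changes,
// so unchanged assets keep their cached names across builds.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
  },
};
```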

On the server, we must use the same generated assets from the client build, because the asset URLs must be identical in the server-generated HTML and in the client-rendered HTML.
We cannot use react-scripts to build the server code, because it is designed to generate content to be executed by the browser, not the server.
The server needs its own entry point, and it should be a plain JS file.
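For illustration, a stripped-down server entry point could look something like this (the catch-all route, the App import path, and the HTML template are placeholders, not our exact code):

```javascript
// server/index.js - illustrative SSR entry point for the Koa app.
import Koa from 'koa';
import Router from 'koa-router';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from '../src/App'; // the same root component the client renders

const app = new Koa();
const router = new Router();

router.get('(.*)', (ctx) => {
  // Render the React tree to an HTML string on the server.
  const markup = renderToString(<App />);

  // The asset URLs below must point at the hashed files from the client build
  // (resolved via the asset manifest, see the WebPack config below).
  ctx.type = 'html';
  ctx.body = `<!DOCTYPE html>
<html>
  <head><link rel="stylesheet" href="/static/css/main.css" /></head>
  <body>
    <div id="root">${markup}</div>
    <script src="/static/js/main.js"></script>
  </body>
</html>`;
});

app.use(router.routes());

export default app;
```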

TL;DR:
We implemented the server WebPack config to reuse the client build generated by react-scripts.

SSR server WebPack config
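Roughly, the server build config looks like the sketch below (the file paths and the DefinePlugin approach to exposing the manifest are a simplified illustration):

```javascript
// webpack.server.js - sketch of the SSR server build (paths are illustrative).
const path = require('path');
const webpack = require('webpack');

// asset-manifest.json is produced by react-scripts and maps original asset
// names to their hashed build names, e.g. "main.js" -> "/static/js/main.abc123.js".
const assetManifest = require('./build/asset-manifest.json');

module.exports = {
  target: 'node',              // bundle for the Lambda runtime, not the browser
  mode: 'production',
  entry: './server/index.js',  // the separate server entry point
  output: {
    path: path.resolve(__dirname, 'server-build'),
    filename: 'server.js',
    libraryTarget: 'commonjs2', // export the app so the Lambda handler can require it
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        use: 'babel-loader',    // transpile ES6+/JSX down to plain JS
      },
    ],
  },
  plugins: [
    // Expose the client-build manifest so the server renders the same hashed
    // asset URLs that the client build references.
    new webpack.DefinePlugin({
      ASSET_MANIFEST: JSON.stringify(assetManifest),
    }),
  ],
};
```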

In the previous snippet, we configure WebPack to load the static assets from the client build by using the asset manifest that react-scripts generates alongside the build, which maps the original asset file names to the generated hashed (and minified) file names.

With this approach, we deploy a single file (~1.3 MB) to the Lambda that contains the server code along with the client build, with everything compressed and minified.


In the end, we were pleased with Lambda@Edge's performance results and its high availability.

wirwollenhelfen website availability report
