Rendering Angular Apps on the Server: A Realist’s Guide

Chris Gilardi
Vidaloop
Apr 8, 2021 · 24 min read

The Introduction

There comes a time in many apps’ lives where the status quo simply isn’t enough to cater to the ever-changing demands of web features. Rich link previews, immersive interactive features and blazing-fast sites have become expected by users. On top of this, good Search Engine Optimization (SEO) is vital to having a website that is discoverable and appealing, as well as looking substantially more professional. We all want what’s best for our sites, but sometimes change can be scary. What if I mess it up? What if my choices are wrong? What if my changes accidentally summon a man in an inflatable bald-eagle suit who just won’t leave me alone, no matter how many times I tell him this isn’t his nest?! Luckily for you, I’ve made all of those mistakes so you won’t have to.

My name is Chris; I’m an early-career software developer at Vidaloop, a civic technology startup in San Diego, California. My area of expertise is in Web/Full-Stack Development and I have been given many opportunities to help move our products forward in meaningful ways. Today, we’re going to be focusing on just one of the company’s products — Voterly. Voterly is a large-scale database of American politicians focused heavily on creating a friendly and easy-to-use environment for regular people to educate themselves about their representatives. With over 150,000 politicians and an ever-increasing amount of data about them, Voterly is well on its way to achieving its goals of a better-informed voting populace.

But enough of the sappy stuff. As you may have guessed based on this article’s title, we’re going to be focusing on moving from a (relatively) vanilla Angular, client-side rendered application to an Angular Universal-powered server-side rendered one. I will go over some of the issues that I ran into when performing this migration, give a high-level overview of relevant pieces of Voterly’s architecture, and review my own methodology for taking on projects like this.

The Starting Point

So that’s Voterly from 10,000 feet, but to truly understand the changes made in this article, it’s also helpful to get some context regarding our infrastructure, as it will come into play later. As mentioned above, Voterly currently runs on a mostly-vanilla Angular 9-based build and deploy pipeline. Hosted in AWS, the site’s files are uploaded to an S3 bucket. This bucket sits behind a CloudFront distribution, which handles cache policies and content distribution for the site and keeps initial JS bundle delivery fast by caching content at edge locations close to users. All of this sits behind a Route 53 simple routing policy that sends all traffic to the CloudFront distribution.

This architecture has worked well for Voterly for the last few years, and has allowed our development team to quickly and (relatively) painlessly add new features to Voterly that leverage many different features of Angular. On top of this, we have some custom pieces of code surrounding configuration, authentication and some development-related workflows. As our data size and application complexity grows, however, we have shifted our focus a bit to improving the site’s Search Engine Optimization (SEO) and performance. We decided the best path forward for Voterly was to pursue a migration to server-side rendering (SSR).

The Reasoning

To begin, I should probably clarify the difference between client-side (CSR) and server-side rendering. As far as web technologies are concerned, SSR has, for all intents and purposes, been the norm since the beginning. Only with the dawn of more powerful personal computers and devices has the prevalence of client-side JavaScript increased, finally culminating in sites that can be rendered 100% on-client simply by delivering and running a JavaScript bundle. Route changes are mostly symbolic: they change the URL in the bar, but behind the scenes, the browser is using JavaScript to render the next page. This paradigm, in conjunction with reactive-style programming, has led to a plethora of client-side frameworks: most notably Angular, ReactJS and VueJS. Conversely, with a fully server-side rendered application, every route change or request to the server is fully “rendered” to HTML before being delivered.

As is always the case with choosing a technology for any project or application in Software Development, there are pros and cons to each approach. Let’s start with the cons:

Cons

  1. Slightly slower initial server response time
  2. More complicated backend infrastructure
  3. Requires an active server to handle requests
  4. In some circumstances, code may need to be shared or duplicated to support dynamic page elements

While these cons are certainly important to consider, for our purposes (a business with multiple developers) we don’t mind the slightly higher complexity or the always-on server as much as someone developing a personal project might. For us, the con to watch out for is the increased server latency when a cache miss occurs; we will come back to this later. In our view, here are the benefits we are looking to reap:

Pros

  1. Improved First Contentful Paint (FCP) and Largest Contentful Paint (LCP) (page load performance metrics)
  2. Dynamic link previews & improved SEO
  3. JavaScript is not required to view page content

But wait! I thought you said initial server response time is slower? I did! And it’s true: on a cache miss, the backend server needs to render HTML, which takes longer than simply returning a file to the client. However, once the response is rendered and sent, it is (basically) immediately ready for the browser to display, with little to no JavaScript parsing or execution required. Compared to the CSR version, this process is much faster across a wider range of devices, leading to a noticeable overall improvement in page load times.

With the cons out of the way, that leaves our other two pros, which go hand-in-hand to create a much more search- and web-friendly site. To see where we’re coming from, imagine a friend sent you a link from Reddit or Twitter, and on every link, the preview just had the site logo and its name, and maybe a short blurb about the site as a whole. Here’s our example:

Screenshot of a Slack Message that contains a basic link preview for Voterly. No dynamic content is found.
Boring link previews are boring.

Boring, right? And it gives you no idea about what you’re about to see when you click the link. For a data-dense site like Voterly, this is somewhat unacceptable and is the requirement that prompted us to start the project. But that’s not SSR’s only advantage for Voterly! We will also finally get proper web indexing from search engines besides Google. Luckily for us, Google runs page JavaScript and renders the page itself for CSR applications, so we’ve always had rich search results on Google. The other search engine providers do not do this, so results in Bing, for example, look like this:

This isn’t the most helpful link preview in the world…

It works, but it’s not ideal and definitely makes us look less professional. Furthermore, dynamic link previews can help drive conversions, and generally give the user a better experience, especially when sharing pages.

The Research

So we know why we want SSR, great! Now we get to the hard part: how do we actually make this happen? If we were building the site from scratch, this question would be much harder to answer. Since we already have a large application written, we are greatly limited in our options — which actually makes our choice much simpler. The only reasonable option for our current architecture is Angular Universal. It’s a great choice for us, as it allows for minimal development changes while still maintaining feature parity (or as close as possible) between the two client architectures. This means the largest and most complicated changes will be to the client’s infrastructure rather than its logic or implementation — namely, modifying our current S3-backed architecture.

As mentioned earlier, the largest backend change stems from the fact that SSR solutions/Angular Universal rely on an active server backend. In Angular Universal’s case, we need a Node.js environment. Luckily for us, there are dozens of services offering Node.js runtimes for our code. For some background, most of Voterly’s backend services run on the AWS Lambda serverless compute environment. This is great for supporting the wildly different feature sets of each of our services and allows us to deal with traffic spikes with ease. However, there is one problem with this approach as it relates to Angular Universal: It’s stateless. For a backend API whose main purpose is interfacing with third party services, our backend database and other backend processes, this is great and allows for extremely fast compute and elasticity. For an Express/Angular Universal server, however, not so much. To get better performance out of Angular Universal, it should be run on a server that is always active. Spinning up a single instance of the client will take longer and use more resources than a (by comparison) simpler REST API. Because of this, we decided not to use Lambda for this version of the client. Instead, we opted to go for AWS’s Elastic Beanstalk service. This service offers elasticity in a manner configurable by the user, but is better suited for long-running or (semi-)stateful applications (perhaps with an in-memory cache). In theory this should give us increased performance, though at somewhat increased cost.

The Plan

So now we’ve picked our software stack and infrastructure stack. Now we have to figure out how to use these technologies. My preferred method for doing this kind of research is a little-known search engine by the name of Google. If you’ve never heard of it, it has thousands of pages about anything you could ever think of, which is great when we’re researching software topics! And by itself, Google can get you very far. For this project, Google led me to the following resources:

Resources

Using the above resources for my initial research, as well as leaning on them throughout development, sped up the process immensely. They have also helped me become the de facto in-house expert on Angular-related SSR.

Soon comes the moment you’ve all been waiting for: the implementation. But first, let’s quickly go over the brief plan that I wrote before starting the transition:

  1. Run ng add @nguniversal/express-engine (the install command).
  2. Make small configuration changes.
  3. Make larger, sweeping codebase changes that allow the client to be rendered on both the server and browser.
  4. Update or replace pieces of infrastructure that need to be changed to facilitate SSR.

This plan served as my general guide in this project, but is really only useful at a conceptual level. Now we’ll dive into the nitty-gritty of implementation.

The Implementation

WARNING: If you were expecting a cookie-cutter tutorial about transitioning from CSR to SSR, this isn’t it. While this will give you the idea of how it’s done, I’ll also dig into issues I hit along the way. This is intended to give you a more realistic view of what you’ll run into during this process.

General guidance for my implementation was taken directly from the Angular Universal installation guide. I will be reiterating the most important parts here as well.

Installing Angular Universal

Since we are using Node.js with Express as our backend server, I opted to use the Express Engine, built to serve as the interface between Angular and Express (there are more engines built by the Angular Universal team as well). According to the Angular Universal installation guide, this should be as easy as simply running this command:

ng add @nguniversal/express-engine

Great! Let’s run it! And…

Terminal output with a failing result after running the install command

Well, that’s not great, but it does give us one hint to the problem. See how /src/app is shown multiple times in the file path? That file doesn’t exist at that path in our project, so this looks an awful lot like a configuration issue. For reference, here is the path to our app.module.ts file:

src/
  index.html
  main.ts
  ...
  app/
    app.component.ts
    app.module.ts
    ...

Well that looks an awful lot like the Angular example project’s structure, so it must not be that. After doing some research on this issue, I decided to check out our project’s angular.json file. While examining the file, I noticed one value that looked suspiciously similar to our error above:

{
  "projects": {
    ...
    "user-client": {
      ...
      "root": "src/app"
    }
  }
}

Let’s try changing root to "" and re-run the installation command:

Terminal output with a successful result after running the install command
LETS GOOOOOOOOO

That looks more like it! Now that it’s installed, we’re going to set that value back to src/app, but we’ll keep it in mind in case we see any errors that look suspiciously like this one. Let’s take a look at what changed after running that command:

Files Added

  • server.ts — Acts as the active server for the express site. It handles all requests and routes them as necessary.
  • src/main.server.ts — Sets up Angular environment and imports required Angular Universal packages.
  • src/tsconfig.server.json — Specifies the TS compiler options that are only applied to server-side compilation.
  • src/app/app.server.module.ts — Holds server-specific imports and injections. Extends app.module.ts. Useful for injecting data or code that only the server should be able to access.

Files Modified

  • src/main.ts — The call to platformBrowserDynamic().bootstrapModule(AppModule) is wrapped in an event listener that holds its execution until the browser sends a DOMContentLoaded event, forcing application bootstrapping to wait until after the content has completed its first render.
  • angular.json — Adds SSR-specific run/build configurations. It also updates the build output directory to dist/<app-name>/browser, so keep that in mind if scripts point to a different directory (or change it back).
  • package.json
    Adds the following dependencies:
    - @angular/platform-server
    - @nguniversal/express-engine
    - express
    - (dev) @nguniversal/builders
    - (dev) @types/express
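
The package.json changes also include new NPM scripts. Don’t take the exact commands below as gospel — they vary slightly by Angular version — but in an Angular 9-era project they look roughly like this, with <app-name> standing in for your project name:

```json
{
  "scripts": {
    "dev:ssr": "ng run <app-name>:serve-ssr",
    "serve:ssr": "node dist/<app-name>/server/main.js",
    "build:ssr": "ng build --prod && ng run <app-name>:server:production",
    "prerender": "ng run <app-name>:prerender"
  }
}
```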

Running the Development Server

In addition to the new dependencies in package.json, the install command also adds some new scripts, suffixed with :ssr. To run the Angular development server in SSR mode, run the dev:ssr NPM script. Let’s do that now:

Angular compiler output, terminated with error: this.debug is not a function
Can’t anything ever just work?!?!

Hey, nobody ever said this was going to be easy. So let’s take a step back and look at that error message: “this.debug is not a function.” Immediately, this error looks suspicious. It isn’t very descriptive and there is no stack trace attached to it. On the brink of madness, I scrolled up my terminal window a little bit, and…

Angular and SASS compiler output, indicating that the sass partial file “variables” could not be found during compilation.
Everyone’s favorite programming language — CSS (Sass)

Sure enough, there it is: SassError: Can't find stylesheet to import. And look at that, it even tells us where the error can be found — how helpful! This is a good lesson for less-experienced programmers (like myself): sometimes, the error message you see first is not the one that caused the issue, so try your best to pinpoint the root cause before starting a fix. In our case, the error was thrown because we have some style dependencies in a directory located outside our User Client’s root; in your project, errors like this could be thrown for any number of reasons further up or down the chain. I was able to fix this error by adding the following JSON object under the options property for our project’s architect.server and architect.prerender objects:

{
  "stylePreprocessorOptions": {
    "includePaths": [
      "../path/to/outside/directory/src/sass",
      "src/sass"
    ]
  }
}

This fixes the Sass error we saw. So let’s try running the dev server again:

Angular compiler output showing a successful build and bundle sizes.
Boo-yah!

Finally, we see what we expect to see! We now have a SSR Angular client up and running! Pat yourself on the back, you’ve made progress! Now all that’s left to do is navigate to the dev server URL and admire your work!

Screenshot of a Google Chrome window that shows an error: “ReferenceError: window is not defined” followed by a stack trace.
Oh no.

Oh no.

But are we really surprised? We knew there were going to be some issues with this transition, because Voterly uses raw DOM objects in various parts of the site. Indeed, if you view the stack trace, you see it is thrown from MetadataService.setStaticTags. We wrote this service, and the code in question looks like this:

private setStaticTags(): void {
  this.setCDN('...');
  this.setHost(window.location.origin);
}

As predicted, we see the window object access. Because Node.js does not implement the window object by default, it is not accessible on the server. This can cause seemingly-innocuous code to break our whole application. As you transition your Angular-based site from CSR to SSR, you will notice that many of the errors you encounter are caused (directly or indirectly) by accidental DOM access or manipulation while code is running on the server. Luckily for us, Angular gives us a few ways to remedy this issue.

Rectifying DOM Access Issues

As you now know, missing DOM object implementation is one of the main reasons for application-breaking issues related to Angular Universal. Here are the two methods given to us by the Angular team to fix such issues:

Platform ID

Angular exposes two important SSR-related functions to us: isPlatformBrowser(platformId: string) and isPlatformServer(platformId: string). Each of these takes one string as an argument. As the names suggest, these functions will return true or false depending on the current execution environment. But this raises the question: what value should you pass to these functions? As usual, Angular’s got us covered. Angular also exposes an injection token called PLATFORM_ID that will be either "server" or "browser" depending on the execution environment. To use it from within Angular, simply inject it into any Angular type that can accept injected objects (components, directives, services, guards, etc.), like so:

class MyComponent {
  constructor(
    @Inject(PLATFORM_ID) private platformId: string,
    ...
  ) { }
}

You will now be able to use platformId as input to isPlatformBrowser, for example. Now, when you’d like to use a DOM object like window or document, simply check platformId first:

public myFunction(): void {
  if (isPlatformBrowser(this.platformId)) {
    console.log(window.location.href);
    document.getElementById(...);
  } else {
    // This will only run on the server side
    console.log(process.env.HOSTNAME);
    fs.existsSync(...);
  }
}

I should note at this point that, where possible, you should prefer Angular’s abstractions over DOM objects rather than using them directly. These platform checks should be reserved for cases where direct DOM access or manipulation is the only viable way to do something.

Injected Window

A second option for ensuring that DOM accesses only happen at the right time is by injecting the objects themselves rather than a platform flag. Angular has an injection token built in for document so we’re lucky there. To use window, however, you’ll have to write your own InjectionToken (unless, of course, someone left you one to copy right below this paragraph).

import { DOCUMENT } from '@angular/common';
import { inject, InjectionToken } from '@angular/core';

export const WINDOW = new InjectionToken<Window | null>(
  'An abstraction over the global window object',
  {
    // On the server there is no browser window, so this resolves to null
    factory: () => inject(DOCUMENT).defaultView,
  }
);

Using the above implementation, you’ll be able to inject the window object as an optional dependency anywhere it’s needed inside your Angular application. On the server, it will simply be null and on the client it will be the window object. This means you can inject it like so:

@Optional() @Inject(WINDOW) private window: Window | null

It is important to inject it as @Optional so that Angular does not throw an error when the value is null. Further, adding null as a union type with Window ensures type safety during development (though neither Angular nor TypeScript will yell at you if you do not type it as a null union).

window and document are not the only objects that can cause these errors. Storage access (localStorage and sessionStorage) can as well. Indeed, any browser-specific API is at risk of causing these issues, so be careful to use those APIs only when appropriate, via the methods above. By being more deliberate about your application’s DOM access, you will see far fewer errors, and the ones you do hit will be much easier to diagnose and repair.
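For storage in particular, one lightweight pattern is to wrap the Storage API behind an object that degrades gracefully when it is absent. This is a hypothetical sketch (not code from the Voterly codebase) built on a minimal StorageLike interface:

```typescript
// Hypothetical helper: a minimal Storage-like interface plus a wrapper
// that falls back to an in-memory Map when the real Storage API is
// unavailable, e.g. while rendering on the server.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

class SafeStorage implements StorageLike {
  private fallback = new Map<string, string>();

  // Pass window.localStorage in the browser, null on the server.
  constructor(private storage: StorageLike | null) {}

  getItem(key: string): string | null {
    if (this.storage) {
      return this.storage.getItem(key);
    }
    return this.fallback.has(key) ? (this.fallback.get(key) as string) : null;
  }

  setItem(key: string, value: string): void {
    if (this.storage) {
      this.storage.setItem(key, value);
    } else {
      this.fallback.set(key, value);
    }
  }
}
```

In the browser you would construct it with window.localStorage (obtained via the injected window); on the server you would pass null, and reads and writes simply stay in memory for the lifetime of the request.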

Checking Our Work

Let’s implement the above fixes in a few crucial areas and see if our site looks any better.

A screenshot of a Google Chrome window displaying Voterly’s home page, correctly rendered.

That’s what we like to see! After fixing some instances of non-SSR-compliant DOM access, the site loads correctly! Now we can check to make sure the site is actually being server-side rendered. We need this check because, for some types of errors, the Angular server will quietly fall back to CSR for some or all of the site. This is great for users, as it ensures the site doesn’t simply break, but it can be a little tricky for us developers. To check whether the page is rendered server-side:

  1. In Chrome, open the web inspector by right-clicking anywhere on the page and selecting “Inspect,” the window will look like this:
A screenshot of the Chrome Web Inspector on the Elements tab.
The Chrome Web Inspector — It shows elements rendered via the server and JS

2. Select the Network tab and hard-reload the page using your system’s shortcut, or by clicking “Force Reload this Page” in the Google Chrome menu.

3. If your network requests are sorted by time (Waterfall view), select the first record in the list. The “Name” should be localhost, method: GET, and type: document. Once it is selected, in the panel that appears, select the Response tab.

4. Scroll down into the HTML body of the response. You should see somewhere the root element of your application. For us, it’s called voterly-root, but this will be different in other projects.

5. Examine this element. Angular doesn’t prettify its output, so it may all be on one or two lines. Check if the content inside your root element seems to match what’s on the page. In our case, it does not:

A screenshot of delivered application HTML showing that its content was not replaced correctly.
<router-outlet> should be replaced with our content

If the HTML inside the root element does seem to match what you expect, congrats! That means the page is being rendered Server-Side, and you are likely done with this guide (unless you want to see how we updated our infrastructure as well). Unfortunately for us, that’s not the case. So, let’s go check out our console and…

A screenshot of a code editor’s terminal displaying a long error message.
Phew! That’s a loooooooong error!

Yeah, that’s about what I expected. But this seems like another easy fix. The error seems to be navigator is not defined. So, for now, let’s just comment out its usages and see if that helps…

A screenshot of the Chrome Web Inspector showing the correct HTML rendered into the root component.
Wow! This bad boy can fit so much HTML in it!

Voilà — after fixing that reference (and one more small window reference), we can see that Angular is successfully rendering HTML into our document. This shows the impact these small issues can have, and also Angular’s resilience to errors. When developing for SSR, never ignore errors in your server’s console, as they can lead to unintended side effects, like only having a portion of your site rendered server-side.
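If you find yourself repeating this manual check often, it can be scripted. Here is a rough, hypothetical heuristic (voterly-root comes from our app; your root selector will differ) that inspects the returned HTML for rendered content:

```typescript
// A rough heuristic (not official tooling): given the HTML document the
// server returned, check whether the app root element contains real
// rendered content rather than just an empty <router-outlet> shell.
function looksServerRendered(html: string, rootTag: string): boolean {
  const re = new RegExp(`<${rootTag}[^>]*>([\\s\\S]*)</${rootTag}>`);
  const match = html.match(re);
  if (!match) {
    return false; // root element not found at all
  }
  const inner = match[1].trim();
  // A client-rendered shell typically contains only the empty outlet.
  return inner !== '' && inner !== '<router-outlet></router-outlet>';
}
```

You could feed it the body of a curl request against your dev server to catch SSR regressions early.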

The Deployment

As previously mentioned, a rather large infrastructure change needs to take place to support our SSR effort. We appreciate the extra flexibility and performance that Amazon S3 and CloudFront offer our site, but those two pieces alone are not enough to fully support the site once it is server-side rendered. Effectively, the only change is that we are adding a second destination from which users can request files. The reasons to structure it this way are twofold. First, the application is still bootstrapped in JavaScript after it loads, meaning we must still be able to deliver the same client bundles as if the site were purely CSR; serving these from S3 also saves some cost, because our server does not have to handle each file request. Second, in case of catastrophic errors that would otherwise break the server and stop the site from being served, we can quickly switch over to delivering only the JS bundles from S3. This leads to a worse experience for users with JavaScript turned off and breaks the site’s dynamic link previews, but it keeps the site running. A simplified overview of the architecture is as follows:

A diagram of the deployment architecture. CloudFront distribution uses a request router to route requests.
  1. A single CloudFront (CF) distribution accepts all traffic.
  2. Using behaviors, it is routed to either:
    a.) Elastic Beanstalk (EB) — All client routes (/*).
    b.) Amazon S3 — All client files (*.*).

Our Infrastructure with the Serverless Framework

Given that we at Vidaloop use the Serverless framework extensively and that it helps us deploy most of the services for Voterly, I didn’t want to break from the norm for this project. As such, I tried to encapsulate as much of the setup logic and deploy code into the Serverless ecosystem as possible. This led to a bit more work up front, but will hopefully result in a smoother deployment experience for future developers. I am not going to paste our entire serverless.yml, because it wouldn’t be of much use to anyone as it’s highly specific to our client. I will, however, post some of the more interesting bits/changes that I made. Any parts of it not posted here are mostly standard-fare setup and resource declarations and can be found in many different guides.

Deploying to Elastic Beanstalk

When deciding how to deploy our active server to Elastic Beanstalk, we have a few options. We can:

  1. Use a combination of EB CLI commands to upload and deploy our package files.
  2. Use a serverless plugin like serverless-plugin-elastic-beanstalk.
  3. Write our own custom script that integrates with the AWS SDK.

Because we only need this to work in one case, I actually decided to go with option #3 — writing our own deployment script. I then (basically) wrapped that script into a custom serverless plugin that runs after the CloudFormation stack has been updated and the application bundle has been uploaded. At a basic level, the script works like this:

  1. Retrieve the S3 location of the deployment artifact that serverless uploaded
    Note: In your plugin, you can get the serverless deployment bucket with this.provider.getServerlessDeploymentBucketName(), and the deployment artifact directory name with this.serverless.service.package.artifactDirectoryName.
  2. Create a new EB application version using the S3 location
  3. Deploy the newly-created version to all EB instances

Using this script, the developer needs to run no further commands to ensure that their Elastic Beanstalk environment is updated.
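In sketch form (this is not our exact plugin, and every name below is a hypothetical placeholder), the script boils down to assembling parameters for two Elastic Beanstalk SDK calls — createApplicationVersion, then updateEnvironment to roll the new version out:

```typescript
// A sketch of the parameters the deployment script assembles for the
// two Elastic Beanstalk SDK calls. All names are placeholders.
interface DeploySteps {
  createVersion: {
    ApplicationName: string;
    VersionLabel: string;
    SourceBundle: { S3Bucket: string; S3Key: string };
  };
  deploy: {
    EnvironmentName: string;
    VersionLabel: string;
  };
}

function buildDeploySteps(
  app: string,
  env: string,
  bucket: string,
  artifactKey: string,
  version: string
): DeploySteps {
  return {
    // Step 2: create a new EB application version from the S3 artifact
    createVersion: {
      ApplicationName: app,
      VersionLabel: version,
      SourceBundle: { S3Bucket: bucket, S3Key: artifactKey },
    },
    // Step 3: point the environment at the newly created version
    deploy: { EnvironmentName: env, VersionLabel: version },
  };
}
```

A real plugin would pass each of these objects to the AWS SDK’s ElasticBeanstalk client and wait for the environment to report a healthy status.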

Syncing Deployment Files & Cache Policies

Since serverless is primarily used for deploying code to Lambda functions and is less suited (on its own) to interface with static sites hosted in S3, we decided to use the serverless-s3-sync plugin to automatically upload files to a predetermined bucket. Using this plugin also gave us the ability to specify Cache-Control data on objects in the bucket via the use of glob strings. Here is an example s3 configuration:

custom:
  s3Sync:
    - bucketName: ${self:custom.bucketName}
      localDir: dist/<project-name>/browser
      params:
        - "/**/*.+(svg|png|jpg|jpeg|gif|ico)":
            CacheControl: "public, max-age=31560000"
        - "/*.*.js":
            CacheControl: "max-age=31560000"

This is a lightweight, easy-to-understand and easily-changeable way to declare cache policies for certain files or file types. Note that this method does not work for declaring cache policies on user client routes. Those could be handled either by the EB server or via the use of a behavior in CloudFront, or a mix of both.

Route Caching Policies

Because the above caching setup only works for files in the S3 bucket, we also needed a separate way to handle cache lengths for the Angular routes themselves. Because of earlier parts of our setup, we had already injected the RESPONSE object that Express provides into our Angular application. This made it rather trivial to send the Cache-Control header back to the client.

// In our route listener (called on route changes/initial load)
this.setCacheControlHeader(page.cacheLength);

// Implementation:
private setCacheControlHeader(cacheLength: CacheLength = VoterlyCacheLength.Medium): void {
  if (this.platform.isServer && this.response) {
    this.response.setHeader('Cache-Control', `max-age=${cacheLength.defaultSeconds}`);
  }
}

To make the setup work, I also needed to add the cacheLength property to the route data for the specific route. This way, all route information (auth, caching, title, etc.) stays together and is easier to find. The following is an example of what such a structure might look like:

interface RouteData {
  ...
  cacheLength: CacheLength;
  ...
}

CacheLength is defined like this:

interface CacheLength {
  minSeconds: number;
  defaultSeconds: number;
  maxSeconds: number;
}

We defined it this way to support a potential future requirement that certain cache strategies allow variable cache times within a range (min/max), but for now we only use defaultSeconds. Here’s an example of our cache strategy:

const VoterlyCacheLength: { [key: string]: CacheLength } = {
  None: {
    minSeconds: 0,
    defaultSeconds: 0,
    maxSeconds: 0,
  },
  Medium: { ... },
}

If a route does not include a cacheLength property, its parent is used. If no ancestors of this route have the property, a default is supplied. This helps us avoid having to supply a cache strategy for every route, and allows us to easily supply a cache strategy for a group of routes while still being able to override it.
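The fallback resolution just described can be sketched as a simple walk up the route tree. The default values below are hypothetical, not our production numbers:

```typescript
// A sketch of the fallback resolution described above: walk up the
// route tree until a cacheLength is found, else use a default.
interface CacheLength {
  minSeconds: number;
  defaultSeconds: number;
  maxSeconds: number;
}

interface RouteNode {
  cacheLength?: CacheLength;
  parent?: RouteNode;
}

// Hypothetical default cache strategy.
const DEFAULT_CACHE: CacheLength = { minSeconds: 0, defaultSeconds: 300, maxSeconds: 3600 };

function resolveCacheLength(route: RouteNode | undefined): CacheLength {
  let node = route;
  while (node) {
    if (node.cacheLength) {
      return node.cacheLength;
    }
    node = node.parent;
  }
  return DEFAULT_CACHE;
}
```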

Creating an Origin Failover Group

To support the requirement that unserved requests to Elastic Beanstalk can fall back to the S3 version, we needed to create an Origin Failover Group. These are essentially named groups of origins (in this case, origins with the IDs SERVER_ID and ASSETS_ID) where each origin is tried in the order it is declared. If the first origin returns one of the given FailoverCriteria.StatusCodes, the next in the chain is tried.

resources:
  ...
  CloudFrontDistribution:
    ...
    OriginGroups:
      Quantity: 1
      Items:
        - Id: "FAILOVER_GROUP_ID"
          Members:
            Quantity: 2
            Items:
              - OriginId: "SERVER_ID"
              - OriginId: "ASSETS_ID"
          FailoverCriteria:
            StatusCodes:
              Items:
                - 500
                - 502
  ...

Using the failover group in a distribution is as easy as setting it to a cache behavior’s TargetOriginId, like so:

- TargetOriginId: "FAILOVER_GROUP_ID"
  PathPattern: "*"

Using Lambda@Edge for Authenticated Routes

So far, even with our modifications, the transition has gone relatively smoothly and hasn’t required a major overhaul of any single part of our client. Unfortunately, one place where SSR falls apart in our setup is our authentication system. Before this transition, we used a JWT-based authentication flow and stored the tokens in the user’s local or session storage. This worked fine because, with CSR, JavaScript is always available to check authentication state and, for example, route the user elsewhere if they are not logged in. With SSR, at the point where a user’s authentication state needs to be checked (i.e. before a route is served), we do not have access to the user’s localStorage. In fact, once we’re behind a caching layer (CloudFront), there is basically no way to do server-side routing via Angular. But that’s one of our requirements, so what gives?

Well, it turns out that CloudFront distributions offer a great integration with AWS Lambda-based functions. This feature is called Lambda@Edge, and it offers the following bindings for each behavior in a distribution:

  1. Viewer Request: runs when CF receives a request from the user
  2. Origin Request: runs between CF and the origin, when CF forwards a request (cache miss)
  3. Origin Response: runs between CF and the origin, after the origin has handled the request (before saving in the cache)
  4. Viewer Response: runs between CF and the user, when sending a response

For Voterly, any authentication checks need to happen before the request hits the cache. This ensures that the check happens on every server request (i.e. first load of the site), and also allows us to cache the returned HTML by the user’s authentication state, still giving us good performance when server-side rendering authenticated page skeletons. For this, we decided to use the Viewer Request hook.

In our hook, token checks against our backend authentication server are blazing fast, so the added latency is minimal. We determine whether a route requires authentication by generating a route map from our Angular routes file at build time (adding a bit of metadata to each route that requires authentication), then checking that mapping against the user’s authentication state on each server request. For example, one route’s mapping record might look like this:

{
  "/feed": {
    ...
    "auth": {
      "isRequired": true
    }
  }
}

After checking the user’s various tokens, if they are a legitimate, logged-in user, they will be allowed into any route where authentication is required, like the one above. If they are not logged in, they will be redirected either to a redirection route supplied in the route map or to the sign-in page.
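Putting these pieces together, here is a hedged TypeScript sketch of what such a Viewer Request check might look like. The route map shape follows the record above, but the cookie check, sign-in path, and the pass-through/redirect shapes are simplified assumptions; real token validation against the auth server is stubbed out:

```typescript
// Simplified sketch of a Viewer Request auth check. Not Voterly's
// actual hook: the cookie name and redirect path are assumptions.
type RouteMap = Record<string, { auth?: { isRequired: boolean; redirectTo?: string } }>;

// Route map generated at build time from the Angular routes file.
const routeMap: RouteMap = {
  '/feed': { auth: { isRequired: true } },
};

// Stub: the real hook would verify the JWT with the auth server.
function isAuthenticated(cookieHeader?: string): boolean {
  return !!cookieHeader && cookieHeader.includes('token=');
}

// Returns the request untouched when access is allowed, or a 302
// redirect response when an unauthenticated user hits a guarded route.
function viewerRequest(uri: string, cookieHeader?: string) {
  const entry = routeMap[uri];
  if (entry?.auth?.isRequired && !isAuthenticated(cookieHeader)) {
    return {
      status: '302',
      headers: {
        location: [{ key: 'Location', value: entry.auth.redirectTo ?? '/sign-in' }],
      },
    };
  }
  return { uri }; // pass through to the cache/origin
}
```

Because this runs at the Viewer Request stage, the redirect happens before the cache is consulted, which is what lets the cache key incorporate authentication state.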

The Gotchas

ERROR: cannot bind to selected since it isn’t a known property of option

This error appeared to me when server-side rendering a page that had a select element on it. Though it may be a problem with one of our customizations to the base components, I was able to fix it by simply not including option elements on the server side. Though this may not be ideal, it got the job done. To do this, I wrote a directive called voterlyBrowser. It is implemented like so:

import { isPlatformBrowser } from '@angular/common';
import { Directive, Inject, PLATFORM_ID, TemplateRef, ViewContainerRef } from '@angular/core';

@Directive({ selector: '[voterlyBrowser]' })
export class BrowserDirective {
  constructor(
    @Inject(PLATFORM_ID) platformId: string,
    templateRef: TemplateRef<any>,
    viewContainerRef: ViewContainerRef,
  ) {
    // Only stamp the template into the view when running in the browser.
    if (isPlatformBrowser(platformId)) {
      viewContainerRef.createEmbeddedView(templateRef);
    } else {
      viewContainerRef.clear();
    }
  }
}

To use it, simply declare the directive in the module where you’d like to use it, then add it as a structural directive to any element you want rendered in the browser only. For example:

<div *voterlyBrowser class="test-div">...</div>

A Note on Elastic Beanstalk SolutionStackName

When creating an Elastic Beanstalk app via Serverless, you are required to supply a SolutionStackName in your resources section under your AWS::ElasticBeanstalk::ConfigurationTemplate. It is formatted like this:

resources:
  BeanstalkConfig:
    Type: AWS::ElasticBeanstalk::ConfigurationTemplate
    Properties:
      ...
      SolutionStackName: "64bit Amazon Linux 2 v5.3.1 running Node.js 14"

Amazon has opted for this approach to specifying the operating system and environment you want to run on your EC2 instances. Whether or not you agree with this naming scheme, you’re stuck with it. An important note here is that deploying a new environment with an “out-of-date” SolutionStackName can cause deployments to fail. We believe this is because Amazon doesn’t retain older versions unless deployed applications are using them. We ran into this issue when the newest environment was upgraded from v5.3.0 to v5.3.1. In many cases this won’t cause issues, but it is something to look out for.

All possible (current) values of SolutionStackName can be found here.

The Conclusion

In this article, we’ve walked through how one might transition a large Angular project from client-side to server-side rendering. Though we accomplished a lot here and were quite successful, there are still improvements to be made. For example, while this transition has drastically improved our Lighthouse scores (nearly 2x in some metrics), there is still room to grow. Better caching strategies that take device type and other user-specific parameters into account could ensure the site always stays speedy and delivers the best possible experience.

I hope this article has helped you learn more about the process of server-side rendering and how it relates to Angular, and that my approach in this tutorial (not sugar-coating every solution) gives you a more realistic picture of the work it takes to accomplish. I encourage you to ask questions in the comments if you are struggling with this process. I’d love to help out in any way I can! If you’d like to see more content like this in the future, let me know!

Now stop reading and start coding!
