Cloud Deep Dive: Part 3 — The Extremely Scalable Pizza Menu with Serverless SSR

Pieter Raubenheimer
YLD Blog
Jan 28, 2019

In Part 2, we continued building out the Cloud Pizza Place, our virtual restaurant in the cloud, exploring more of the patterns and nuances of serverless computing.

Cloud managed services should enable scalability that is cost-effective and free of operational burden. We want to satisfy high customer expectations by delivering our customer-facing site with low latency to every client, be it a browser or a search engine robot.

We’ve already covered Baking and Ingredient Supply, so today we’ll focus on the Menu:

Let them see pizzas

The narrative goes something like this:

  • Customers will eventually need to be able to order pizzas from the web, but before that, they will need to be able to view our menu.
  • We can also expect a surge of viewers every autumn when we release our limited edition white truffle pizzas. 👌
  • Most of our customers find our restaurant through search engines, and the menu needs to reflect what we currently have in stock.
  • Our research shows that customers typically look at the menu multiple times before ordering even one pizza — we’d expect far more views than orders.

The single page React app

Building a React single-page application (SPA) will give us the ability to create a highly interactive menu app in the browser.

Some search engines have trouble indexing SPAs. Even the ones that handle them well, like Google, take the time to render the full page into account when ranking it.

React allows for server-side rendering (SSR) of HTML using Node.js. This makes it quicker to see the full page in the browser — particularly effective for mobile browsers with flaky connections or limited processing power — but it can still be slow and resource-intensive (read: costly) on the server side.

We can overcome this by not rendering the page on every request: we cache the render, storing a copy and returning that instead of re-rendering.

To cache or not to cache, what is the question?

CDNs (content delivery networks) are best known for serving static content from locations close to users (for speed), but they can also act as a cache in front of dynamically rendered websites. AWS provides a CDN service, called CloudFront, that allows for caching of both static and dynamic content.

Our JavaScript code (and other client-side assets) can be considered static at runtime, so we can publish it using AWS S3. This would serve as one ‘origin’ for content served by the CDN.

We could set up another ‘origin’ for certain URL paths and point that to a Lambda performing the server-side rendering:

An option: Read-through caching by CloudFront

But what will our cache policy be — how would we expire the cache for dynamically rendered paths?

Fortunately, we’ve already built a Kinesis stream that publishes stock levels as they change. We could use this stream to trigger a function that submits an ‘Invalidation’ to CloudFront. The first 1,000 invalidation paths per month are free; after that there’s a fee of $0.005 per path submitted. That’s not too bad, but we also need to consider that an invalidation takes around 10–15 minutes to take effect.

Instead of using a CDN for caching our server-rendered pages, we’ll opt to render the page in a Lambda function triggered by our Kinesis stream. The rendered file can be sent to S3, where we publish our other static assets, ready to be served through the CDN on the next request. We can set the Cache-Control or Expires header when we write the HTML to S3, so that just these paths are not cached:

const ReactDOMServer = require('react-dom/server')
const AWS = require('aws-sdk')
const s3 = new AWS.S3()

// Render the React element to a stream and upload it straight to S3.
const stream = ReactDOMServer.renderToNodeStream(element)
const params = {
  Bucket: 'bucket',
  Key: 'key',
  Body: stream,
  ContentType: 'text/html',
  CacheControl: 'max-age=0' // tell the CDN to revalidate on every request
}
await s3.upload(params).promise()

All together now

We’ve implemented a model that performs asynchronous server-side pre-rendering. We can rest assured that it would be fast and extremely scalable.

Note that we have a fairly unique opportunity here in that all of our dynamic content is updated via a single Kinesis stream. It means that we don’t need to access a database or any other back-end systems to serve an individual user request.

As event-oriented cloud architectures become more commonplace, we could exploit such opportunities more and more to give a better user experience while reducing pressure on back-end systems.
