Living on the Edge (Node)

Saher El-Neklawy · Published in Whizardry · Jan 6, 2019


A tale of why a Content Delivery Network is essential from day one, even for dynamic content.

Whether or not you have heard of Content Delivery Networks (CDNs), one thing is for sure: we all love it when our Time to First Byte (TTFB) is fast.

To get TTFB down you go back to physics: to decrease the time, either increase the speed or decrease the distance. Since you cannot beat the speed of light, people go for the latter. The alternatives range from building your own data center to finding a closer server region offered by your cloud provider, both of which add a significant cost burden.

The gold standard for usability says that interactions under 100ms feel instant. This notion carried over to web performance, and we make it our mission to get the TTFB of web requests down to that 100ms mark. Let’s take the case of a network packet traveling from AWS us-east-1 to Dubai. Under the best network conditions imaginable, a direct fiber optic cable between server and client (browser or mobile app), the trip would take around 40ms. That gives you 60ms to process the response and get it on the wire. Reaching that ideal connection for all your users is a far-fetched assumption, and realistically, looking at the state of mobile network latencies, the packet will take much longer than the ideal 40ms to travel those 12,000 km.
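To make that budget concrete, here is a back-of-the-envelope sketch; the distance and target are rough assumptions, and the speed of light in a vacuum is a theoretical floor that real fiber routes never reach.

```typescript
// Rough latency budget for a request from Dubai to AWS us-east-1.
// Every number here is a ballpark assumption, not a measurement.
const distanceKm = 12_000;       // Dubai to us-east-1, roughly
const lightSpeedKmPerMs = 300;   // speed of light in a vacuum, the absolute floor
const instantFeelMs = 100;       // the "feels instant" usability budget

const oneWayMs = distanceKm / lightSpeedKmPerMs;       // ≈ 40 ms just to reach the origin
const processingBudgetMs = instantFeelMs - oneWayMs;   // ≈ 60 ms left for everything else

console.log({ oneWayMs, processingBudgetMs });
```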

This is where CDNs come into play. A CDN, as the name says, is a closed network of servers distributed around the world, managed by a provider (AWS CloudFront, Cloudflare, Akamai, Fastly, etc.). Its main purpose is to guarantee a high-quality private network between your origin server (AWS us-east-1 in our example) and the edge node server closest to the client (Dubai in our example). Thus any CDN guarantees the following:

  1. Better network conditions with minimal relays (for our MENA region, across the Atlantic Ocean).
  2. A closer server for the application clients to connect to.
  3. Minimal added cost without the need to worry about managing data centers.

For these reasons alone, anyone building an application aimed at the MENA region should design it with a CDN in mind.

But what do you need to consider when using a CDN, and which provider should you choose?

CDN design considerations

When using a CDN, think of it as a huge reverse proxy placed between your various origins and the client. It can be thought of as an Nginx layer, and many CDNs are actually implemented with Nginx.

The life of a request goes as follows:

  1. Client/viewer requests a path from the closest edge node
  2. The edge node looks up the request path and matches it to an origin server
  3. The edge node checks the origin configuration against the HTTP cache headers to decide whether it should cache the response or not.
  4. If the response should be cached and there is no cached version in the CDN, the request is passed to the origin server. If there is a cached version in the CDN, it is used directly to respond.

At each of these steps, you can hook into events that the edge node triggers, discussed in the following section.
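As a mental model only, here is a minimal sketch of that decision loop; the in-memory map, fetchFromOrigin, and parseMaxAgeSeconds are stand-ins for what a real CDN does across thousands of distributed edge servers.

```typescript
// A toy model of the edge node decision flow described above.
type EdgeResponse = { body: string; headers: Record<string, string> };
type CacheEntry = EdgeResponse & { expiresAt: number };

const edgeCache = new Map<string, CacheEntry>(); // cache key = request path

// Stand-in for the round-trip to the matched origin server.
async function fetchFromOrigin(path: string): Promise<EdgeResponse> {
  return { body: `origin content for ${path}`, headers: { "cache-control": "max-age=60" } };
}

function parseMaxAgeSeconds(cacheControl = ""): number {
  const match = /max-age=(\d+)/.exec(cacheControl);
  return match ? Number(match[1]) : 0;
}

export async function handleAtEdge(path: string): Promise<EdgeResponse> {
  const hit = edgeCache.get(path);
  if (hit && hit.expiresAt > Date.now()) {
    return hit; // fresh cached copy: respond without touching the origin
  }
  const response = await fetchFromOrigin(path); // cache miss: go back to the origin
  const ttl = parseMaxAgeSeconds(response.headers["cache-control"]); // honour the origin's cache headers
  if (ttl > 0) {
    edgeCache.set(path, { ...response, expiresAt: Date.now() + ttl * 1000 });
  }
  return response;
}
```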

The last step here is the most critical aspect of the flow. Because a CDN deployment can span hundreds of thousands of edge servers, CDNs are very aggressive in caching. You have to take care of:

  1. The cache key. This is the request path.
  2. The cache-expiry and Last-Modified HTTP headers from your origin server’s response, which dynamically refresh the CDN cache.

Common scenarios for configuring the CDN’s cache are:

Cache forever

pro: Fast responses from the edge node directly.

con: The only way to refresh the cache is either to change the cache key by changing the request path, or to force an invalidation, which is quite expensive to perform.

usage: Static images, compiled JS and CSS assets (modern bundlers put a content hash in the file name), and JSON responses that you do not expect to change.
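For example, an origin might return headers along these lines for a fingerprinted asset; the values are illustrative, not a requirement of any particular CDN.

```typescript
// Illustrative headers for a fingerprinted asset such as app.3f9c1a.js:
// cache it for a year; the hash in the file name is the real invalidation.
const cacheForeverHeaders: Record<string, string> = {
  "Cache-Control": "public, max-age=31536000, immutable",
};
```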

HTTP header updates

pro: Lots of control and flexibility on the cache invalidation.

con: The burden of cache management falls on the origin server, as it has to return the needed HTTP headers for the CDN to understand how to manage the cache. This extra computation may affect your origin server response times.

usage: When your data varies based on the requesting user, including user information in one of the cache headers is useful.
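A sketch of what such an origin response might carry, with a validation tag and a Vary header standing in for the user-specific part; the values are made up for illustration.

```typescript
// Illustrative headers for responses the origin wants to stay in control of:
// the edge keeps a copy but revalidates it, and stores one copy per language.
const revalidatedHeaders: Record<string, string> = {
  "Cache-Control": "public, no-cache", // cache it, but check back before serving
  "Last-Modified": "Mon, 06 Jan 2019 10:00:00 GMT",
  ETag: '"catalogue-v42"',             // hypothetical version tag
  Vary: "Accept-Language",             // separate cache entries per language
};
```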

Update cache based on time

pro: No need to manage keys and headers; the cache will expire on its own.

con: If there is a need to invalidate the cache before the time runs out, you are stuck, and have to resort to expensive forced invalidation.

usage: If you are sure the content is time-dependent, for example it changes at a fixed interval, this is a good option.
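A minimal example of a fixed TTL, assuming the CDN honours the standard Cache-Control directives:

```typescript
// Illustrative fixed-TTL headers: browsers keep the response for a minute,
// shared caches such as the CDN for five; after that the edge refetches on its own.
const timedHeaders: Record<string, string> = {
  "Cache-Control": "public, max-age=60, s-maxage=300",
};
```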

Never Cache

pro: No worries about cache invalidation, with the benefit of improved networking between edge and origin servers.

con: Every request still has to round-trip between the edge and the origin.

usage: When first trying a CDN with an API service, this is a good place to start, until you gain an understanding of how your caching layers should behave. In write-heavy apps, this setting will prevail.
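The corresponding headers are the simplest of all; a sketch, again assuming standard Cache-Control handling:

```typescript
// Illustrative "never cache" headers: every response goes back to the origin,
// but requests still ride the CDN's improved edge-to-origin network.
const neverCacheHeaders: Record<string, string> = {
  "Cache-Control": "private, no-store",
};
```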

Choosing providers

This choice is based on the features you need, how easy it is to connect to your origin servers, and price.

Popular options include AWS CloudFront, Cloudflare, Akamai, and Fastly, among others.

The features to consider are:

  1. Cache management (forced invalidation and header control)
  2. Edge node availability in the regions that matter to you
  3. Triggers or code execution at the edge node

Edge Node triggers

Code execution at the edge node is a very powerful feature of modern CDNs. It allows you to modify and play with requests as you like, without even needing to contact the origin server. This code execution can happen at every step of the interaction between the client, the edge, and the origin server.

For AWS’ CDN solution, CloudFront, this is called Lambda@Edge, which you can use for trickery like:

  • If you are building a data beacon and will do all the processing asynchronously, you can have the edge node respond directly to the client and enqueue the beacon data onto a message queue. This is a highly scalable way to collect analytics.
  • Build simple redirect logic directly on the edge node without the need to contact the origin. A common case is redirecting an HTTP request to a custom scheme so it opens in your mobile app (see the sketch after this list).
  • Change the origin server to contact based on request headers rather than the path. This is useful if you are serving different logic or content for different regions or languages.
  • If you are adventurous, you can develop your whole application on the CDN edge nodes directly! :)
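For a flavour of what such a function looks like, here is a minimal Lambda@Edge-style viewer-request handler for the redirect case above; the /open/ path and the myapp:// scheme are made up for illustration, while the event shape follows CloudFront’s request events.

```typescript
// A minimal sketch of a viewer-request handler that redirects /open/* requests
// to a hypothetical custom scheme (myapp://) without contacting the origin.
export const handler = async (event: any) => {
  const request = event.Records[0].cf.request;

  if (request.uri.startsWith("/open/")) {
    // Short-circuit at the edge: return a redirect response instead of the request.
    return {
      status: "302",
      statusDescription: "Found",
      headers: {
        location: [{ key: "Location", value: "myapp://" + request.uri.slice("/open/".length) }],
      },
    };
  }

  // Anything else continues on to the origin untouched.
  return request;
};
```

Returning the request object lets the normal edge-to-origin lifecycle continue, while returning a response ends the request right at the edge node.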

Closing Remarks

Using a CDN is an all-wins scenario. Even if you do not cache anything, you save your customers the network latency you have no hand in controlling, and it frees you to spend your time on what you can control, namely backend optimizations on your origin servers. Placing a CDN in your architecture can yield instant gains, especially on mobile networks, and with the right caching configuration, you can go even further.

Want to help BulkWhiz solve tough problems and join our growing team? Email us at tech@bulkwhiz.com telling us a little about yourself.
