Free (almost) Performance

Amit Narayanan · Mammoth Stories · Mar 10, 2015

Speed (I mean the Web)

Almost any non-trivial web app stands to see outsized performance gains simply by letting a professional take over delivery of its static assets. And by professional, I mean a Content Delivery Network, or CDN ☺.

CDNs allow requests for static content to be handled well before they reach the app servers, freeing those servers up to handle the requests they are better suited for (and truly needed for). In addition, good CDNs have edge locations all over the world, each with its own copy, making static asset delivery even quicker for users close to those locations.

N.B. CDNs typically work well ONLY when asset filenames are fingerprinted (that funny hash value appended to the end of the filename). Once a file is cached, requests for it never hit the app server again, so without fingerprinting, that CSS change you just made will never be seen by users. Rails’ asset pipeline fingerprints automatically, but for frameworks that don’t, caveat emptor.
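
To make that concrete, here’s a rough sketch in plain Ruby of what a fingerprint is (the file path is just an example; Rails’ asset pipeline does the equivalent for you at precompile time):

require "digest"

# A fingerprint is just a hash of the file's contents baked into its name.
css = File.read("app/assets/stylesheets/application.css") # example path
digest = Digest::MD5.hexdigest(css)
puts "/assets/application-#{digest}.css"
# Edit the file and the digest (and therefore the URL) changes, so browsers
# and the CDN never ask for the stale copy again.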

Content Delivery Networks (CDNs)

CDNs come in lots of shapes and sizes. Good ones offer low latency, high throughput, and plenty of edge locations, among other things. Below’s a comparison of a few; Google’s your best friend for more.

N.B. POP = Point Of Presence. The more, the better, usually.

Fig 1. A list of CDNs. Source: http://www.cdnplanet.com/

AWS CloudFront

We already use AWS for so many things that it just made sense to add one more AWS service to the kitty. And CloudFront, anecdotally at least, is well regarded, which sealed it.

There were only two key things we needed to configure: ORIGINs and CNAMEs. (Most anyone will need to configure the first; the second is largely optional.)

ORIGIN

A newly provisioned CloudFront distribution starts out empty. When configured with an ORIGIN, the very first request for a static asset is passed through to the app server by CloudFront and cached. All subsequent requests for that asset are handled by the CloudFront distribution directly, without involving the app server at all.
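
One way to watch the pass-through happen is to check CloudFront’s X-Cache response header (a sketch using Ruby’s standard library; the URL below is a placeholder):

require "net/http"
require "uri"

# Placeholder URL -- substitute your distribution's domain and a real asset path.
uri = URI("https://CLOUDFRONT_KEY.cloudfront.net/assets/YOUR_ASSET_NAME.css")

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.head(uri.request_uri)
end

# "Miss from cloudfront" on the very first request (CloudFront went to the
# ORIGIN, i.e. your app server); "Hit from cloudfront" on every request after.
puts response["X-Cache"]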

CNAME

CNAMEs just allow assets to be accessed through a custom domain instead of the CDN’s domain. This is more of a nice-to-have.

Here’s what this looks like:

Fig 2. A screenshot of an AWS CloudFront Distribution

Rails Makes it a Cinch (Of Course)

There are two primary ways to integrate a CDN into a Rails app (Rails 4).

One way is to use a file storage service like AWS S3 to store asset files and configure the S3 bucket as the CDN’s ORIGIN. Asset_Sync is an excellent gem for Rails apps that intelligently syncs assets with an S3 bucket.
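
If you go the S3 route, the setup is roughly a Gemfile entry plus an initializer along these lines (a sketch only; the bucket name and credentials are placeholders, and the gem’s README is the authority on the exact options):

# Gemfile
gem "asset_sync"

# config/initializers/asset_sync.rb
AssetSync.configure do |config|
  config.fog_provider          = "AWS"
  config.fog_directory         = ENV["S3_BUCKET_NAME"]    # the bucket CloudFront points at
  config.aws_access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
  config.aws_secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
end
# rake assets:precompile will then push the compiled assets up to the bucket.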

The other, even simpler, way is to just let the app server be the ORIGIN and handle the very first request for an asset. (If you never update that asset, that is the only time the app server will ever be involved in serving it for the life of the app.)

Typical apps have hundreds of assets and not all of them are updated on every build, so the app server will barely see a blip. Better yet, the build process isn’t tied to the availability of a file storage service. Combined with the minimal tradeoffs involved, that makes this choice more than worth it.

Configuring the App

Rails makes this ridiculously simple. The only configuration that needs to be added goes in production.rb and staging.rb (if you have a staging ENV, and you likely do):
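
In Rails 4 that boils down to pointing asset_host at the distribution, something like this (the CloudFront domain is a placeholder; a CNAME you have configured on the distribution works here too):

# config/environments/production.rb (and staging.rb)
Rails.application.configure do
  # Asset URLs generated by the view helpers now point at CloudFront
  # instead of the app server.
  config.action_controller.asset_host = "https://CLOUDFRONT_KEY.cloudfront.net"
end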

N.B. https is not required, but for the love of all things good…

And voilà, you should start seeing assets come back from CloudFront (200s the first time around and 304s subsequently).

You can now issue requests for asset names directly against the CloudFront URL and see them delivered. So doing this…

curl -I -H "Accept-Encoding: gzip, deflate" https://CLOUDFRONT_KEY.cloudfront.net/assets/YOUR_ASSET_NAME.css

…should net a response (redacted for brevity) like so:

HTTP/1.1 200 OK
Content-Type: text/css
Content-Length: 225578
Connection: keep-alive
Status: 200 OK
Strict-Transport-Security: max-age=31536000
Last-Modified: Tue, 10 Feb 2015 17:35:17 GMT
Cache-Control: public, max-age=2592000
Age: 196227
X-Cache: Hit from cloudfront

But wait a minute. Why doesn’t the response have this header

Content-Encoding: gzip

in it?

Tightening it Up…

We can still tweak this a bit more. Long story short, GZIP is your friend: the CPU cycles spent zipping and unzipping at either end are trivial, while the gains in content size and speed are outsized. But this post is already getting long, so I’ll write more about this in Part 2.
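
As a teaser, one common approach (a sketch only; Part 2 may well take a different route) is to let Rack compress responses before they ever reach CloudFront:

# config/environments/production.rb (sketch)
Rails.application.configure do
  # Rack::Deflater gzips response bodies for clients that send
  # "Accept-Encoding: gzip", so CloudFront can cache and serve the
  # compressed copy and Content-Encoding: gzip shows up in the response.
  config.middleware.use Rack::Deflater
end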

…Even more

Browsers load static assets over multiple connections in parallel, but most, if not all, cap the number of simultaneous connections to a single domain. In some cases, multiple CNAMEs that all point to the same CDN distribution are a neat workaround to eke out a bit more. I’ll add this to Part 2 as well.
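
For the curious, Rails supports this kind of domain sharding out of the box via a %d in asset_host (the hostnames below are placeholders for CNAMEs that all resolve to the same distribution):

# config/environments/production.rb (sketch)
Rails.application.configure do
  # Rails swaps %d for 0..3 based on the asset's name, fanning requests out
  # across assets0 ... assets3 so the browser opens more parallel connections
  # than it would against a single host.
  config.action_controller.asset_host = "https://assets%d.example.com"
end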

I’m @amitnarayanan on Twitter.

Also, check out Mammoth, a communication and publishing tool I’m currently building.

On the web and in the AppStore.
