Even more tips on improving web app performance

Wesley Andrade
Published in Quick Code
3 min read · Dec 18, 2019

Client Side Caching

Loading relatively static content over and over again to stay in sync with your database can be very expensive, even if done asynchronously. Rather than having the client download these large responses repeatedly, you could just tank the first round-trip hit to the database and set a cache-expiry token with Cache-Control max-age set to some arbitrary length (usually 60–120 seconds). This ensures that you aren't making the same costly call for at least <max-age> amount of time. Once that timer runs out the cache is invalidated, and you'd have to make another call regardless of whether the data in the database has changed or not. But that's rather inefficient, because you don't need the whole response back if nothing has changed, right?
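If your API runs on something like Express, tagging a response as cacheable is a one-liner. A minimal sketch (the route, payload, and 60-second max-age below are made-up examples, not anything prescribed):

```typescript
import express from "express";

const app = express();

// Hypothetical endpoint: the client may reuse this response for 60 seconds
// before it has to come back and ask the server again.
app.get("/api/products", (_req, res) => {
  res.set("Cache-Control", "max-age=60");
  res.json({ products: [] }); // stand-in for the real database query
});

app.listen(3000);
```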

That's where we introduce a hashcode, or some sort of unique fingerprinting token, into the mix (this is essentially what HTTP's ETag header does). Use your favorite hashing algorithm here. So basically, here's the flow:

1. Client sends a request.
2. API sends back data with a hashcode and an expiry token.
3. Client tries to send another request, but the current time is less than the expiry time, so the request is never sent.
4. After the token expires, the client sends another request, including the hashcode from its last result.
5. API processes the request: if the hashcode from the client matches the hashcode of the current response, it sends back an empty response so download time is ~0; otherwise it sends the full response with an updated hashcode.
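Here's a minimal client-side sketch of that flow. The X-Hash header name, the 204 "nothing changed" response, the 60-second expiry, and the Map-based cache are all illustrative assumptions; in practice, HTTP's built-in ETag / If-None-Match / 304 machinery gives you the same behavior for free.

```typescript
interface CacheEntry {
  body: unknown;
  hash: string;      // fingerprint the API sent with the last full response
  expiresAt: number; // epoch ms after which we must revalidate
}

const cache = new Map<string, CacheEntry>();

async function cachedGet(url: string): Promise<unknown> {
  const entry = cache.get(url);
  const now = Date.now();

  // 1. Expiry token still valid: skip the network entirely.
  if (entry && now < entry.expiresAt) return entry.body;

  // 2. Expired (or first call): revalidate, sending our last hash along.
  const res = await fetch(url, {
    headers: entry ? { "X-Hash": entry.hash } : {},
  });

  // 3. Empty response means "nothing changed": keep the old body,
  //    just refresh the expiry timer.
  if (res.status === 204 && entry) {
    entry.expiresAt = now + 60_000;
    return entry.body;
  }

  // 4. Otherwise store the fresh body with its new hash and expiry.
  const body: unknown = await res.json();
  cache.set(url, {
    body,
    hash: res.headers.get("X-Hash") ?? "",
    expiresAt: now + 60_000,
  });
  return body;
}
```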

(Image: before/after comparison)

Server Side Caching

This section is near and dear to me, since I've debated the pros and cons of different caching mechanisms and when to use them countless times… Redis, Memcached, and Hazelcast are top contenders for almost any application… but my award goes to Varnish Cache: everything from installation to integration into your stack is seamless.

Varnish acts as a reverse proxy that sits in front of the API box, behaving like a massive key-value store for request/response pairs, and it can be combined with client-side caching for a blazing fast UX. If an incoming request is already in the cache, the response is returned immediately, with no server processing time required (the hashcodes from client-side caching slot in here to invalidate the cache when necessary). Make sure to use the unofficial PURGE HTTP method to clear the cache for a request whenever you update/delete records, and then load the latest data back into the cache.
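A rough sketch of that purge-then-rewarm step, assuming Varnish listens on an internal address and your VCL's purge ACL accepts PURGE from the app server (the host, port, and path here are made up):

```typescript
const VARNISH_URL = "http://varnish.internal:6081"; // assumed address

async function purgeAndRewarm(path: string): Promise<void> {
  // 1. Evict the now-stale cached response for this path from Varnish.
  const res = await fetch(`${VARNISH_URL}${path}`, { method: "PURGE" });
  if (!res.ok) throw new Error(`PURGE failed: ${res.status}`);

  // 2. Re-warm the cache so the next real user gets a hit, not a miss.
  await fetch(`${VARNISH_URL}${path}`);
}

// Call this right after any update/delete, e.g.:
// await purgeAndRewarm("/api/records/42");
```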

This carries the significant overhead of sending a PURGE request after every update/delete, as well as reloading data into the cache, but trust me, your users will thank you for it.

Just kidding, your users are more likely to notice you taking out loading spinners and replacing them with skeleton screens.

Analyze UI/API code for bottlenecks

Oof, owieee… This one hurts to talk about because it takes the most effort :( You have to analyze every endpoint and optimize expensive operations/queries on the API, and then analyze your UI render time to figure out what needs to be virtualized or paginated… No tips or tricks here, just hard work.
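If you want somewhere to start on the API side, a timing middleware like this sketch will at least tell you which endpoints deserve attention first (Express-style; the 200 ms threshold is an arbitrary assumption):

```typescript
import type { Request, Response, NextFunction } from "express";

// Logs any request that takes longer than thresholdMs to finish,
// so you know where to point the profiler.
export function slowRequestLogger(thresholdMs = 200) {
  return (req: Request, res: Response, next: NextFunction) => {
    const start = process.hrtime.bigint();
    res.on("finish", () => {
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      if (elapsedMs > thresholdMs) {
        console.warn(
          `[slow] ${req.method} ${req.originalUrl} took ${elapsedMs.toFixed(1)} ms`
        );
      }
    });
    next();
  };
}

// app.use(slowRequestLogger()); // register before your routes
```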

Move to physicals

This one is optional. If you share server space with a bunch of other apps/services that are stealing your precious compute time, consider renting dedicated server space from somewhere like namecheap.com or Heroku. I'm a cheap bastard and usually use the free plan for as long as I can. Just sayin', but I got Heroku to serve ~150 concurrent users on one of my shitty services back when I was a broke college student… all for free.
