Caching — Basic concept for next-level developers (cont.)
In this article, I continue with the next levels of caching, illustrated by the implementation of a web-based application. This is the continuation of my previous post, which you can read here.
Level 3: Web API with SPA
At this level, the application is extended to support a mobile version and a single-page application (SPA). Now we need to optimize the APIs so that each request is served within 50–100 ms.
Let’s recap some basic characteristics of Web APIs and SPAs.
- An SPA bundles all HTML, JS, and CSS content and can be deployed as static files.
- Web APIs deal only with data (requests and responses), so the application does not have to allocate resources to render HTML templates.
- The client typically sends more requests to the server.
- Separating stateless and stateful components can facilitate the process of containerizing the application.
Now we have actually improved caching by completely separating static assets like HTML, JS, and CSS from the data. We focus on the following items:
1. Caching data objects with third-party storage
This is about optimizing the cache for data objects using Redis or Memcached, by tracking the cache hit/miss rate and updating API flows. I have already discussed this method in my previous post.
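The cache-aside pattern with hit/miss tracking can be sketched as follows. This is a minimal illustration, not the author's exact implementation: a plain dict stands in for Redis or Memcached, and the key names and TTL are made up for the example.

```python
import time

class CacheAside:
    """Cache-aside lookup with hit/miss counters.

    `store` stands in for Redis/Memcached; in production you would
    swap the dict for a real client and track the hit rate over time
    to decide which objects are worth caching.
    """

    def __init__(self, ttl_seconds=60):
        self.store = {}          # key -> (value, expires_at)
        self.ttl = ttl_seconds
        self.hits = 0
        self.misses = 0

    def get(self, key, load_from_db):
        entry = self.store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            self.hits += 1       # found a fresh copy in the cache
            return entry[0]
        # Miss (or expired entry): load from the database and repopulate.
        self.misses += 1
        value = load_from_db(key)
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

cache = CacheAside(ttl_seconds=60)
user = cache.get("user:42", lambda k: {"id": 42, "name": "Alice"})
user_again = cache.get("user:42", lambda k: {"id": 42, "name": "Alice"})
# hit rate = cache.hits / (cache.hits + cache.misses)
```

The second lookup is served from the cache, so the loader runs only once and the counters give you the hit/miss rate to monitor.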
2. Caching static assets on a CDN
This is a bit easier than at previous levels, because modern JS frameworks build HTML, CSS, and JS into static files which are easily cached on a CDN.
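The reason framework builds are so cache-friendly is content fingerprinting: each asset gets a hash of its content in the filename, so a changed file gets a new URL and the old one can be cached on the CDN essentially forever. A rough sketch of the idea (the function name and hash length are my own choices, not from any particular framework):

```python
import hashlib

def fingerprint(filename, content: bytes, length=8):
    """Return a content-hashed filename, e.g. app.<hash>.js.

    Because a changed file produces a different URL, the CDN copy can
    be served with `Cache-Control: public, max-age=31536000, immutable`
    without ever going stale.
    """
    digest = hashlib.sha256(content).hexdigest()[:length]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

hashed = fingerprint("app.js", b"console.log('hello');")
```

Build tools do this for you; the point is only that the immutable-URL trick is what makes CDN caching of static assets safe and simple.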
3. Caching HTTP responses on the client side
As clients have to send more requests to the server, it is important to take users’ experience into account by enabling caching on the client side via HTTP headers. There are several approaches:
- Completely caching the response on the client side, so that no new request is required. This approach uses the Cache-Control header, or old-school ones like Pragma and Expires.
- Alternatively, the browser creates a new request once the client-side cache expires. It is then the responsibility of headers such as ETag/If-None-Match and Last-Modified/If-Modified-Since to validate whether the data on the server still matches the client’s copy. If nothing has changed, an empty response with HTTP status 304 is returned to save network resources.
Note 1: By default, browsers honor cache headers and cache automatically without any specific configuration. In contrast, mobile applications do not cache unless caching is configured in their HTTP client libraries.
Note 2: Caching HTTP responses on the client side must be done carefully. If something goes wrong, i.e. you cache everything including what should not be cached, fixing the issue is a non-trivial task. In the worst case, you have to tell every user to clear the cache on their device.
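The conditional-request flow described above can be sketched server-side like this. It is a simplified illustration, assuming the ETag is a hash of the response body (real frameworks may derive it from timestamps or version numbers instead):

```python
import hashlib

def respond(body: bytes, if_none_match=None):
    """Return (status, headers, body), honoring ETag revalidation.

    `Cache-Control: no-cache` tells the client it may store the
    response but must revalidate it with the server before reuse.
    """
    etag = '"%s"' % hashlib.sha256(body).hexdigest()[:16]
    headers = {"ETag": etag, "Cache-Control": "no-cache"}
    if if_none_match == etag:
        return 304, headers, b""     # client's copy is still valid
    return 200, headers, body

# First request: full 200 response, client stores body + ETag.
status, headers, body = respond(b'{"items": []}')
# Revalidation: client echoes the ETag in If-None-Match, gets an empty 304.
status2, _, body2 = respond(b'{"items": []}', headers["ETag"])
```

The 304 path sends only headers, which is where the network savings come from.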
Level 4: Micro-services
At this level, we focus on optimizing the data-processing components, as the static assets have already been optimized at previous levels. Some basic characteristics are:
- Multiple services with different databases
- Deployed as containers, so caching to local files is not preferable.
- High performance, high workload
- A large amount of data, which requires the caching system to scale.
Note that caching with micro-services could be a topic of its own, as it has to fulfill various requirements beyond caching itself.
As you can see, many points in this figure, e.g. caching data objects and HTTP responses, have already been mentioned at previous levels. They are re-used at this level, but in a slightly more advanced form.
1. Caching database indexes and hot data
Technically, this is not about caching but about optimizing the database. However, I mention it here as it can be considered part of caching. As a DevOps engineer or system designer, you need to configure the database so that indexes and hot data are kept in RAM. This includes tasks like:
- Optimizing index size
- Capacity planning
- Optimizing database configuration
2. Caching data objects
Even though this has been implemented at previous levels, caching data objects remains vital for micro-service systems.
Here we mainly use two storage mechanisms: third-party storage (Redis, Memcached) and in-process memory (hashmaps, variables). It is very important to choose a suitable storage mechanism (yup, see you in the next post for more details), and here are some points you may take into account:
- Thanks to clustered deployment, third-party storage is scalable and thus achieves quite good performance in most cases.
- An in-memory cache does not scale as well as third-party storage, since it is local to each application instance. However, it is much faster than remote caches like Redis, because lookups avoid the network round trip. As a result, caching in memory is a solution for specific applications, e.g. delay-sensitive ones.
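An in-process cache of the hashmap-and-variable kind can be as small as a TTL memoization decorator. This is a minimal sketch (the decorator name and TTL are illustrative); note that each application instance keeps its own copy, which is exactly the scaling limitation mentioned above:

```python
import time
from functools import wraps

def memory_cache(ttl_seconds):
    """Per-process TTL cache: plain dict reads, no network round trip,
    but no sharing between application instances (unlike Redis)."""
    def decorator(fn):
        entries = {}  # args -> (value, expires_at)

        @wraps(fn)
        def wrapper(*args):
            entry = entries.get(args)
            if entry and entry[1] > time.monotonic():
                return entry[0]            # fresh local copy
            value = fn(*args)
            entries[args] = (value, time.monotonic() + ttl_seconds)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@memory_cache(ttl_seconds=30)
def get_config(name):
    calls["n"] += 1          # stands in for a slow DB/Redis lookup
    return {"name": name, "feature_on": True}

a = get_config("billing")
b = get_config("billing")    # served from process memory
```

For read-heavy, delay-sensitive lookups such as configuration or feature flags, this local layer is often placed in front of the shared Redis layer.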
3. Caching API responses using a reverse proxy and CDN
Using a reverse proxy and a CDN to cache is a basic requirement at this level. However, in order to cache API responses efficiently, you need to understand each endpoint very well: which ones should be cached, and for how long. Implementing cache headers for every endpoint, and reviewing data integrity, is a really time-consuming task. Therefore, I don’t think it is a good idea to start unless you have a huge amount of traffic to process.
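The per-endpoint work usually boils down to an explicit policy table like the sketch below. The endpoint paths and lifetimes here are invented for illustration; the header values are standard Cache-Control directives that both CDNs and reverse proxies honor:

```python
# Hypothetical per-endpoint cache policy; every endpoint must be
# reviewed individually before being added here.
CACHE_POLICIES = {
    "/api/catalog":  "public, max-age=300",   # shareable, 5 minutes
    "/api/profile":  "private, max-age=60",   # per-user browser cache only
    "/api/checkout": "no-store",              # must never be cached
}

def cache_headers(path):
    """Return the Cache-Control header for an endpoint.

    Defaulting to `no-store` means an unreviewed endpoint is never
    cached by accident, which is the safe failure mode.
    """
    return {"Cache-Control": CACHE_POLICIES.get(path, "no-store")}
```

The deny-by-default choice is the important part: an over-cached response served to the wrong user is far more expensive to fix than a cache miss.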
4. Proactively caching data objects on the client side
At this level, client applications can be more sophisticated (and complex?). Thus it is important to improve the quality of the user experience. On the front end, we go a step further by proactively caching data on the client. Remember how Facebook still works, i.e. shows post content, even in offline mode? Caching data on the client can greatly reduce the number of requests sent to the server, and therefore it might be unnecessary at an early stage. One thing to note is that managing cached data on the client is challenging due to data-integrity issues.
So far so good? I hope this gives you another perspective on caching and what we can do with it in reality. In the next post, I will go into the details of a specific case and explain how caching should be done properly.
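The "still works offline" behavior reduces to a fetch-and-fall-back rule. The sketch below shows the logic only; in a real front end `fetch` would be an HTTP client and `cache` would be localStorage or IndexedDB rather than a dict, and the function name is my own:

```python
def fetch_with_fallback(url, fetch, cache):
    """Return (data, source): fresh data when the network works,
    otherwise the last cached copy, i.e. the 'offline feed' behavior."""
    try:
        data = fetch(url)
        cache[url] = data            # proactively keep the latest copy
        return data, "network"
    except ConnectionError:
        if url in cache:
            return cache[url], "cache"   # offline: serve stale data
        raise                            # offline and nothing cached

cache = {}
data, source = fetch_with_fallback("/api/feed", lambda u: ["post 1"], cache)

def offline(u):
    raise ConnectionError("offline")

stale, source2 = fetch_with_fallback("/api/feed", offline, cache)
```

The data-integrity difficulty mentioned above lives in the `cache[url] = data` line: deciding when that copy is too stale to show is the hard part, and this sketch deliberately does not solve it.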
I would like to send my big thanks to Quang Minh (a.k.a Minh Monmen) for the permission to translate his original post.