Overview of Caching, Distributed Cache, Caching Patterns & Techniques
Hi everyone, in this article we’ll cover what caching is along with its benefits, what a distributed cache is and its advantages, various caching patterns or policies, cache eviction (clean-up) algorithms or techniques, and common use-cases for caching.
What is Caching?
A cache, in simple terms, is a data storage layer which stores frequently accessed data and helps serve future requests for the same data quickly, rather than accessing the data from its primary storage location.
Caching allows us to efficiently reuse previously retrieved or computed data rather than spending time accessing or computing the same data multiple times.
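To make this concrete, here is a minimal sketch of a cache in front of a slow lookup. The `slow_lookup` function is a hypothetical stand-in for an expensive database query or computation:

```python
import time

cache = {}  # simple in-memory key-value store

def slow_lookup(key):
    # Stand-in for an expensive database query or computation.
    time.sleep(0.01)
    return key.upper()

def get(key):
    # Serve from the cache when possible; fall back to the slow path
    # only on a miss, and remember the result for next time.
    if key not in cache:
        cache[key] = slow_lookup(key)
    return cache[key]
```

The first call for a key pays the full cost of `slow_lookup`; every later call for the same key is served from memory.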
I will cover a full cache implementation with a simple example in an upcoming article.
How Caching Helps (Advantages of Caching)
1. Application Performance
2. Backend Load
3. Predictable Performance
4. Database Cost
- Application Performance:
Since we read frequently retrieved/computed data from the in-memory cache, data retrieval is extremely fast, which in turn improves overall application performance.
- Backend Load:
Since we transfer the load to the in-memory cache rather than hitting the primary location (the database), we reduce the load on the backend and improve performance. It also helps avoid crashes during traffic spikes.
In-memory systems offer lower latency and higher request-processing rates (input/output operations per second) than a disk-backed database, serving many more requests per second.
- Predictable performance:
Sometimes we need to deal with spikes in application usage, especially during special events or festive offers on eCommerce sites, which can increase database load and result in higher latencies. Caching comes to the rescue by mitigating such cases.
- Database Cost:
Database cost can be reduced, since a single cache instance can perform numerous input/output operations per second and potentially replace several database instances.
What is Distributed Cache?
A distributed cache is a caching technique where the cache is spread across multiple machines (nodes) in one or more clusters, and sometimes across data centers located around the globe.
Distributed cache is primarily used for
- High Availability — As the name signifies, distributing the cache across multiple machines helps improve the cache’s availability. If one of the instances goes down for some reason, we still have support from the other machines that share the load. We can also create a backup for every instance so that a specific number of instances is always maintained, with reserved passive instances.
- Scalability — It scales easily, as the data is stored across multiple locations, which keeps each cache node lightweight and small in size and in turn helps perform search operations at a good pace.
Caching Patterns
Here are some of the commonly used caching patterns.
- Cache Aside —
In this pattern, the cache works alongside the database, and data is lazy-loaded into the cache. It is best suited for read-heavy data (data which isn’t updated on a frequent basis).
In Fig: 1 — When we request specific data, the application first looks for it in the cache (operation 1). When the application cannot find matching data in the cache, it falls back (operation 2) and retrieves the data from the database (operations 3 & 4); the result is then written into the cache for future retrievals and returned to the user.
In Fig: 2 — When we request specific data, the application first looks for it in the cache (operation 1) and returns it if it finds matching data there (operation 2).
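The flow above can be sketched as follows. The application owns both lookups; the `database` dict is a hypothetical stand-in for the primary datastore:

```python
cache = {}
database = {"user:1": "Alice"}  # hypothetical primary datastore

def get_user(key):
    # Operation 1: look in the cache first.
    value = cache.get(key)
    if value is not None:
        return value  # cache hit
    # Operations 2 & 3: cache miss, fall back to the database.
    value = database.get(key)
    # Operation 4: populate the cache for future retrievals.
    if value is not None:
        cache[key] = value
    return value
```

Note that in cache-aside the application code itself coordinates the cache and the database; the cache is a passive store.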
- Read Through —
As the name signifies, the application tries to read the data from the cache, and the cache itself communicates with the database on a lazy-load basis.
In Fig: 1 — When the cache is asked for the data associated with a specific key (operation 1) and it doesn’t exist (operation 2), the cache retrieves the data from the datastore, places it in the cache for future retrievals (operation 3), and finally returns the value to the caller.
In Fig: 2 — When the cache is asked for the data associated with a specific key (operation 1) and it exists (operation 2), it is returned directly to the caller.
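A rough sketch of the difference from cache-aside: here the cache owns a loader and the caller never touches the datastore directly. The `loader` callable is an assumption standing in for the datastore access:

```python
class ReadThroughCache:
    # The cache owns the loader; callers never talk to the datastore.
    def __init__(self, loader):
        self._store = {}
        self._loader = loader

    def get(self, key):
        if key not in self._store:
            # Miss: the cache itself lazily loads from the datastore.
            self._store[key] = self._loader(key)
        return self._store[key]

database = {"user:1": "Alice"}  # hypothetical datastore
cache = ReadThroughCache(database.get)
```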
- Write Through —
In this technique, we write the data into the datastore through the cache: the data is inserted/updated in the cache first, followed by the datastore (operations 1 & 2). This helps keep the data consistent across both layers and is best suited for write-heavy requirements.
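A minimal sketch of the write path, with `database` again a hypothetical stand-in for the datastore:

```python
cache = {}
database = {}  # hypothetical primary datastore

def put(key, value):
    # Operation 1: write to the cache first...
    cache[key] = value
    # Operation 2: ...then synchronously to the datastore,
    # so both layers always agree.
    database[key] = value
```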
- Write Back —
In this technique, we make data entries directly into the cache (operations 1 & 2) but not into the datastore at the same time (operation 3). Instead, we queue the data that was inserted/updated in the cache and replicate the queued data to the datastore at a later stage.
Since there is a delay in updating the database with the latest data compared to the cache, there is a possibility of data loss if the cache fails for some reason (this should be addressed in combination with other patterns).
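The queue-and-replicate idea can be sketched as below; in a real system `flush` would run asynchronously (on a timer or when the queue reaches a size threshold), which is exactly the window where data can be lost:

```python
from collections import deque

cache = {}
database = {}          # hypothetical primary datastore
pending = deque()      # writes queued for later replication

def put(key, value):
    cache[key] = value             # write to the cache immediately
    pending.append((key, value))   # defer the datastore write

def flush():
    # Replicate the queued writes to the datastore at a later stage.
    while pending:
        key, value = pending.popleft()
        database[key] = value
```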
- Write Around —
In this pattern, the data is written directly into the datastore without writing it to the cache (operation 1). On a subsequent read from the datastore, the data is placed into the cache (operations 2 & 3).
It is best suited for applications that won’t frequently re-read data recently written to the datastore.
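A minimal sketch: writes bypass the cache entirely, and the cache is only populated on the read path (the `database` dict is a hypothetical stand-in for the datastore):

```python
cache = {}
database = {}  # hypothetical primary datastore

def put(key, value):
    # Operation 1: writes go around the cache, straight to the datastore.
    database[key] = value

def get(key):
    # Try the cache first.
    if key in cache:
        return cache[key]
    # Operations 2 & 3: read from the datastore and cache the result.
    value = database.get(key)
    if value is not None:
        cache[key] = value
    return value
```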
- Refresh-Ahead —
In this pattern, cached data is refreshed before it expires (operations 1 & 2), which helps reduce latency since the data is updated before it is used. The refreshed data is then served during later fetches, as in operation 3.
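One way to sketch this: each entry carries a TTL, and once an entry passes a refresh threshold (a fraction of its TTL) a read triggers a reload before the entry actually expires. The `loader`, `ttl`, and `refresh_factor` names are assumptions of this sketch; a real implementation would refresh asynchronously rather than inline:

```python
import time

class RefreshAheadCache:
    # Refreshes an entry ahead of expiry once its age passes
    # refresh_factor * ttl, so readers rarely see a stale or cold entry.
    def __init__(self, loader, ttl=10.0, refresh_factor=0.5):
        self._loader = loader
        self._ttl = ttl
        self._threshold = ttl * refresh_factor
        self._store = {}  # key -> (value, loaded_at)

    def get(self, key):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is None or now - entry[1] >= self._ttl:
            # Miss or already expired: load synchronously.
            self._store[key] = (self._loader(key), now)
        elif now - entry[1] >= self._threshold:
            # Still valid but ageing: refresh before it expires
            # (a production cache would do this in the background).
            self._store[key] = (self._loader(key), now)
        return self._store[key][0]
```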
Caching Eviction (clean-up) Techniques/Algorithms
Here are some of the commonly used cache eviction (clean-up) techniques, applied when the cache reaches its maximum limit.
- Least Recently Used (LRU) — Moves recently accessed items to the top of the cache. When the cache is full, the least recently accessed items are removed first.
- Least Frequently Used (LFU) — A counter is incremented every time an item is accessed from the cache; the item with the lowest count is evicted (removed) first.
- First In First Out (FIFO) — As the name signifies, the item that was added first is evicted first, without considering how often or how many times it was accessed in the past.
- Last In First Out (LIFO) — As it signifies, the item that was added most recently is evicted first, irrespective of how often or how many times it was accessed in the past.
- Most Recently Used (MRU) — Removes the most recently accessed items first. It helps when older items are more likely to be needed again.
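As an illustration of eviction, here is a minimal LRU cache sketch built on Python’s `OrderedDict`, which keeps keys in insertion order and lets us move an accessed key to the end:

```python
from collections import OrderedDict

class LRUCache:
    # Evicts the least recently accessed entry once capacity is exceeded.
    def __init__(self, capacity):
        self._capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self._capacity:
            self._store.popitem(last=False)  # drop least recently used
```

With capacity 2, inserting `a` and `b`, reading `a`, then inserting `c` evicts `b`, since `b` is the least recently accessed key.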
Use-Cases
Here are some of the common use-cases for caching:
1. DNS — Domain Name System
2. CDN — Content Delivery Network
3. API — Application Programming Interfaces
I hope you’ve found this article helpful in understanding the basics of caching, how caching helps in building complex applications, its advantages, an overview of distributed caching and its benefits, commonly used caching patterns or policies, and cache eviction techniques or algorithms.