What is in-memory caching?
This article shows what in-memory caching is, how it works, and some terms we need to know when adding a cache to an application.
As we know, any server or computer stores data on the hard disk and in RAM (Random Access Memory). RAM is temporary storage, while the hard disk is permanent storage. Comparing the speed of the two, RAM is far faster than a hard disk, even an SSD: RAM can transfer thousands of megabytes per second, while a hard disk typically delivers only around 50 to 250 MB/s. So we can temporarily store data in RAM to increase the application's read and write speed. The act of temporarily storing data in RAM is called in-memory caching. In-memory caching reduces both the I/O-bound and the CPU-bound work of the application.
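As a minimal sketch of the idea (the class and method names here are illustrative, not from any particular library), an in-memory cache can be as simple as a map in RAM keyed by the input of an expensive operation:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal in-memory cache: results of an expensive computation are
// kept in a HashMap (i.e. in RAM) so repeated requests are served
// without recomputing or re-reading from slow storage.
public class SquareCache {
    private final Map<Integer, Long> cache = new HashMap<>();

    // Returns n*n, computing it only on the first request for a given n.
    public long square(int n) {
        return cache.computeIfAbsent(n, k -> (long) k * k);
    }

    // How many distinct results are currently cached.
    public int size() {
        return cache.size();
    }
}
```

The second call to `square(4)` never reaches the computation; it is answered straight from the map.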
The drawbacks of in-memory cache
Despite the high speed of RAM, it has some limitations: its storage size is limited, and its data is lost after a power outage or shutdown. So we have to manage what should be cached and what should be evicted when the cached data reaches the configured cache size. Happily, there are plenty of cache providers out there, so we don't have to build a cache from scratch.
As I just mentioned, the data in RAM is deleted when the machine shuts down or suddenly crashes, so we must think carefully before storing newly created data only in RAM; that might cause user data loss. If we do have to keep such data in RAM for a period of time, we should also write it to the hard disk as soon as possible.
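One common way to apply this advice is a write-through approach: every write goes to durable storage as well as to the cache. The sketch below is only an illustration of the pattern; a second in-memory map stands in for the durable store (a disk or database), and all names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Write-through sketch: every write goes to the fast in-memory cache AND
// to the backing store, so newly created data is never held only in RAM.
// Here a second map stands in for durable storage (a disk or database).
public class WriteThroughCache {
    private final Map<String, String> cache = new HashMap<>(); // RAM
    private final Map<String, String> disk = new HashMap<>();  // durable stand-in

    public void put(String key, String value) {
        disk.put(key, value);  // durable copy first
        cache.put(key, value); // then the fast copy
    }

    public String get(String key) {
        String v = cache.get(key);
        if (v != null) return v;          // served from RAM
        v = disk.get(key);                // fall back to the slow store
        if (v != null) cache.put(key, v); // repopulate the cache
        return v;
    }

    // Simulates losing RAM contents after a crash or power loss.
    public void simulateCrash() {
        cache.clear();
    }
}
```

Because the durable copy is written first, a crash between the two writes loses only the fast copy, never the data itself.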
Some important terms
When we plan to apply caching to a system, we should know about cache hits, cache misses, and cache replacement algorithms. So what are they?
A cache hit occurs when the requested data can be found in the cache. In this case, the data is returned immediately, without reading from the database or a file, or computing the result.
A cache miss is the opposite of a cache hit: it occurs when the requested data cannot be found in the cache. After failing to find the value in the cache, the application still needs to read the data from the database or a file, or compute the value, and then return it.
The diagram shows the basic flow when a cache hit or a cache miss occurs. A high number of cache hits means the caching is effective; otherwise, we should consider improving the caching solution, the cache library, or what we put in the cache.
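The hit/miss flow above is often implemented as a read-through cache: look in the cache first, and only on a miss call a loader (database query, file read, or computation). A minimal generic sketch, with illustrative names and hit/miss counters added so the flow is visible:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Read-through cache illustrating the hit/miss flow: on a miss, the
// loader (database, file, or computation) is called and its result is
// stored in the cache so the next request for the same key is a hit.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader;
    private int hits = 0;
    private int misses = 0;

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        V v = cache.get(key);
        if (v != null) {
            hits++;              // cache hit: return immediately
            return v;
        }
        misses++;                // cache miss: load from the source
        v = loader.apply(key);
        cache.put(key, v);       // cache it for next time
        return v;
    }

    public int hits()   { return hits; }
    public int misses() { return misses; }
}
```

For example, `new ReadThroughCache<String, Integer>(String::length)` records one miss on the first `get("hello")` and a hit on every repeat.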
Cache Replacement Policy
As the definition from Wikipedia:
In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can utilize in order to manage a cache of information stored on the computer. Caching improves performance by keeping recent or often-used data items in memory locations that are faster or computationally cheaper to access than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.
I’m not going to explain cache replacement policies in detail, but I want to list the common ones:
- Bélády’s algorithm
- First in first out (FIFO)
- Last in first out (LIFO) or First in last out (FILO)
- Least recently used (LRU)
- Time aware least recently used (TLRU)
- Most recently used (MRU)
- Pseudo-LRU (PLRU)
- Random replacement (RR)
- Segmented LRU (SLRU)
- Least-frequently used (LFU)
- Least frequent recently used (LFRU)
- LFU with dynamic aging (LFUDA)
- Low inter-reference recency set (LIRS)
- Adaptive replacement cache (ARC)
- AdaptiveClimb (AC)
- Clock with adaptive replacement (CAR)
- Multi queue (MQ)
- Pannier: Container-based caching algorithm for compound objects
Check out the Wikipedia article: https://en.wikipedia.org/wiki/Cache_replacement_policies
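To make one of these policies concrete, LRU (least recently used) can be sketched in a few lines with the Java standard library: `LinkedHashMap` supports access-ordered iteration and an eviction hook. This is a minimal illustration, not a production cache (no thread safety, no expiry):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap kept in access order evicts the
// least recently used entry once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    // Called by LinkedHashMap after each put; returning true evicts
    // the eldest (least recently used) entry.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With capacity 2, putting `a` and `b`, touching `a`, then putting `c` evicts `b`, because `b` is the entry that was used least recently.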
Common Cache libraries:
- Caffeine Cache
Up to now, I have introduced what an in-memory cache is and its drawbacks, as well as some important terms to know when applying caching to an application.
Thanks for reading.