What is in-memory caching?

This article explains what in-memory caching is, how it works, and some terms we need to know when applying a cache to an application.

Thanh Tran
Dec 10, 2020 · 3 min read

As we know, a server or computer stores data on the hard disk and in RAM (Random Access Memory). RAM is temporary storage, while the hard disk is permanent storage. Comparing the speed of the two, RAM is dramatically faster than a hard disk, even an SSD: RAM can transfer thousands of megabytes per second, while a hard disk typically gives us around 50 to 250 MB/s. So we can temporarily store data in RAM to increase the reading and writing speed of an application. The act of temporarily storing data in RAM is called in-memory caching. In-memory caching reduces both the I/O load and the CPU load of an application.
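The idea can be sketched in a few lines of Python. This is a minimal illustration, not from the article; the function and key names are made up. A plain dict plays the role of RAM, and a deliberate delay stands in for the slow disk read:

```python
import time

# Hypothetical sketch: keep the result of a slow lookup in a dict
# so repeat reads come from memory instead of "disk".
cache = {}

def read_user_from_disk(user_id):
    # Stand-in for a slow disk or database read.
    time.sleep(0.05)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id not in cache:           # first read: slow path
        cache[user_id] = read_user_from_disk(user_id)
    return cache[user_id]              # later reads: served from memory

start = time.perf_counter()
get_user(42)                           # cold: goes to the "disk"
cold = time.perf_counter() - start

start = time.perf_counter()
get_user(42)                           # warm: served from the dict in RAM
warm = time.perf_counter() - start
print(warm < cold)                     # the cached read is much faster
```

Real cache libraries add eviction, expiry, and thread safety on top of this basic pattern, but the core trade is the same: spend RAM to skip slow reads.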

The drawbacks of in-memory caching

Despite its high speed, RAM has some limitations: its storage size is limited, and its data is lost after a power outage or shutdown. So we have to manage what should be cached and what should be evicted when the cached data approaches the cache size limit. Happily, there are a lot of cache providers out there, so we don't have to build a cache from scratch.

As I just mentioned, the data in RAM is deleted when the machine shuts down or suddenly crashes, which could cause user data loss. So we must think carefully before keeping newly created data only in RAM. If we do have to hold data in RAM for a period of time, we should write that data to the hard disk as soon as possible.
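One common way to get this "write to disk as soon as possible" behavior is a write-through cache: every write updates the in-memory copy and is immediately persisted. The sketch below is an illustrative assumption (the class and file layout are mine, not a real library's API):

```python
import json
import os
import tempfile

# Hypothetical write-through sketch: every put() updates the in-memory
# dict AND persists the whole cache to disk right away, so a crash or
# shutdown cannot lose writes that were only in RAM.
class WriteThroughCache:
    def __init__(self, path):
        self.path = path
        self.data = {}

    def put(self, key, value):
        self.data[key] = value               # fast in-memory write
        with open(self.path, "w") as f:      # persist immediately
            json.dump(self.data, f)

    def get(self, key):
        return self.data.get(key)

path = os.path.join(tempfile.mkdtemp(), "cache.json")
c = WriteThroughCache(path)
c.put("session:1", "alice")
with open(path) as f:
    print(json.load(f))    # the write survived on disk, not just in RAM
```

A real system would batch or append writes instead of rewriting the whole file, but the principle is the same: RAM serves reads, while the disk remains the durable copy.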

Some important terms

When we plan to apply a cache to a system, we should know about cache hits, cache misses, and cache replacement algorithms. So what are they?

Cache hit

A cache hit occurs when the requested data can be found in the cache. In this case, the data is returned immediately, without reading from the database or a file, or recomputing the result.

Cache miss

A cache miss is the opposite of a cache hit: it occurs when the requested data cannot be found in the cache. After failing to find the value in the cache, the application still needs to read the data from the database or a file, or compute the value, and then return it.

The basic flow is: when a request arrives, the application first checks the cache. A cache hit returns the cached value immediately, while a cache miss falls through to the database or file and then stores the result in the cache for next time. A high number of cache hits means the caching is effective; if misses dominate, we should reconsider the caching solution, the cache library, or what we put in the cache.
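The hit/miss flow above can be sketched with a small "get-or-compute" function. The names here are illustrative, and the hit-rate counters are just for demonstration:

```python
# Sketch of the hit/miss flow: on a hit, return the cached value;
# on a miss, fall through to the slow source, store, and return.
cache = {}
stats = {"hit": 0, "miss": 0}

def query_database(key):
    return f"row-for-{key}"    # stand-in for the slow source of truth

def get(key):
    if key in cache:
        stats["hit"] += 1      # cache hit: answered straight from memory
        return cache[key]
    stats["miss"] += 1         # cache miss: read through, then populate
    value = query_database(key)
    cache[key] = value
    return value

get("a")    # miss: first request for "a"
get("a")    # hit: "a" is now cached
get("b")    # miss: first request for "b"
hit_rate = stats["hit"] / (stats["hit"] + stats["miss"])
print(stats, hit_rate)    # 1 hit, 2 misses
```

Tracking a hit rate like this is exactly how you would judge whether the cache is effective in practice.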

Cache Replacement Policy

As the definition from Wikipedia:

In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can utilize in order to manage a cache of information stored on the computer. Caching improves performance by keeping recent or often-used data items in memory locations that are faster or computationally cheaper to access than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.

I’m not going to explain cache replacement policies in detail, but I want to list the common ones:

  1. Bélády’s algorithm
  2. First in first out (FIFO)
  3. Last in first out (LIFO) or First in last out (FILO)
  4. Least recently used (LRU)
  5. Time aware least recently used (TLRU)
  6. Most recently used (MRU)
  7. Pseudo-LRU (PLRU)
  8. Random replacement (RR)
  9. Segmented LRU (SLRU)
  10. Least-frequently used (LFU)
  11. Least frequent recently used (LFRU)
  12. LFU with dynamic aging (LFUDA)
  13. Low inter-reference recency set (LIRS)
  14. CLOCK-Pro
  15. Adaptive replacement cache (ARC)
  16. AdaptiveClimb (AC)
  17. Clock with adaptive replacement (CAR)
  18. Multi queue (MQ)
  19. Pannier: Container-based caching algorithm for compound objects

Check out the Wikipedia article: https://en.wikipedia.org/wiki/Cache_replacement_policies
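To make one policy from the list concrete, here is a small sketch of least recently used (LRU) eviction in Python. It uses `collections.OrderedDict` to track recency; the capacity and keys are illustrative:

```python
from collections import OrderedDict

# Minimal LRU sketch: an OrderedDict keeps keys in access order,
# so the least recently used key is always at the front.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used

lru = LRUCache(2)
lru.put("a", 1)
lru.put("b", 2)
lru.get("a")               # "a" becomes most recently used
lru.put("c", 3)            # cache is full: "b" is evicted
print(list(lru.items))     # ['a', 'c']
```

The other policies in the list differ only in which item they choose to evict when the cache is full — FIFO evicts the oldest insertion, LFU the least frequently accessed, and so on.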

Common Cache libraries:

  • EhCache
  • Caffeine Cache
  • Memcached
  • Redis
  • Hazelcast
  • Couchbase
  • Infinispan

Conclusion

Up to now, I have introduced what an in-memory cache is, its drawbacks, and some important terms to know when applying a cache to an application.

Thanks for reading.

To get new article updates, please follow our publication or follow us on social media:

Facebook: https://www.facebook.com/programmingsharing

Twitter: http://twitter.com/progsharing

Written by Thanh Tran

Software Engineer at Terralogic. Blogger and Amateur Investor

Programming Sharing

The publication to share programming knowledge
