Simple LRU Cache

Sourav Kantha
3 min read · Apr 27, 2023


In software system design, a cache (pronounced "cash") serves multiple purposes: keeping the system performant, reducing latency, and giving faster access to crucial data. It speeds operations up by keeping a subset of the data in main memory.

Let's say you have a primary datastore (like MySQL or Oracle) with a product table, and you find out via application logs that a specific set of products' names, prices, etc. are queried more often than the rest of the products in the DB. You can put an LRU-based read-through cache in front of the database and avoid the network round trip and table query for those hot products.
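To make the read-through idea concrete, here is a minimal sketch. The class and method names (ProductPriceLookup, priceOf) are mine, not from the article, and the "product table" is just an in-memory map standing in for the real DB; eviction is deliberately omitted here since the LRU policy is covered below.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal read-through sketch: on a miss we "query the DB" and populate
// the cache; on a hit we skip the DB entirely. Names are illustrative.
public class ProductPriceLookup {
    private final Map<String, Double> productTable = new HashMap<>(); // stands in for the DB
    private final Map<String, Double> cache = new HashMap<>();
    public int dbRoundTrips = 0;  // counts simulated network trips to the DB

    public ProductPriceLookup() {
        productTable.put("hot-product", 9.99);
    }

    public Double priceOf(String productId) {
        Double cached = cache.get(productId);
        if (cached != null) return cached;       // cache hit: no DB trip
        Double fromDb = productTable.get(productId);
        dbRoundTrips++;                          // cache miss: one trip to the DB
        cache.put(productId, fromDb);
        return fromDb;
    }
}
```

Repeated lookups of a hot product then cost only one DB round trip, which is exactly the saving the article describes.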

There are, of course, dozens of other use cases for a cache.

Multiple types of policies are used for designing a cache, and each policy has its own set of use cases. Some of the popular policies are as follows (this list is not exhaustive):

Least recently used (LRU)

Time aware least recently used (TLRU)

Most recently used (MRU)

Segmented LRU (SLRU)

For more details on these policies and how they work, please check out https://en.wikipedia.org/wiki/Cache_replacement_policies

In this article, I will focus on a simple LRU cache implementation written in Java. The main motivation for writing this implementation is to take an object-oriented approach to an LRU cache.

The basic idea of this cache is to keep the cache pages in a doubly linked list, and to map each key to its node in a HashMap.

The core idea is to move the most recently accessed node to the head of the list. When an insert request arrives while the cache is full, the tail node (the least recently used one) is discarded and the new node becomes the head of the list.
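The two list operations described above can be sketched as follows. The class and field names here are mine (the article's PageNode/PageNodeList play the same roles): moveToHead handles the "recently accessed" case and removeTail handles eviction, both in O(1).

```java
// A sketch of the core doubly-linked-list operations behind an LRU cache.
public class Dll {
    static class Node {
        String key;
        Node prev, next;
        Node(String key) { this.key = key; }
    }

    Node head, tail;

    void addToHead(Node n) {
        n.prev = null;
        n.next = head;
        if (head != null) head.prev = n;
        head = n;
        if (tail == null) tail = n;
    }

    // Unlink n from its current position and re-insert it at the head,
    // marking it as the most recently used node.
    void moveToHead(Node n) {
        if (n == head) return;
        if (n.prev != null) n.prev.next = n.next;
        if (n.next != null) n.next.prev = n.prev;
        if (n == tail) tail = n.prev;
        addToHead(n);
    }

    // Evict the least recently used node (the tail) and return its key
    // so the caller can also remove it from the HashMap.
    String removeTail() {
        Node n = tail;
        tail = n.prev;
        if (tail != null) tail.next = null; else head = null;
        return n.key;
    }
}
```

Because the HashMap hands us the node directly, no traversal is ever needed; that is what keeps both get and set constant time.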

LRU Cache — class diagram

In this simple LRU cache, I have only considered the vanilla use cases, hence the name Vanilla LRU cache 🙂

Operations Supported

T get(String key) — Get the specific value by key

void set(String key, T val) — Set the specific value with key

List<String> getKeys() — Get all the keys from cache

void clear() — Clear all the keys in cache

I have used generics for the value type so that different types of objects can be stored as values once the cache instance is created.

public VanillaLRUCache(Integer cacheSize) {
    assert(cacheSize > 0);
    this.cacheSize = cacheSize;
    this.cacheKeyCount = 0;
    this.cacheMap = new ConcurrentHashMap<>(this.cacheSize);
    this.pageNodeList = new PageNodeList<>();
}

public T get(String key) throws PageNotFoundException {
    if (!this.cacheMap.containsKey(key))
        throw new PageNotFoundException(ErrorCodes.ERR001, "Page not found in cache");

    PageNode<T> accessedNode = this.cacheMap.get(key);

    // Move the accessed node to the head of the list.
    this.pageNodeList.rearrange(accessedNode);

    return accessedNode.getData();
}

public void set(String key, T val) {
    if (this.cacheMap.containsKey(key)) {
        PageNode<T> node2Update = this.cacheMap.get(key);
        node2Update.setData(val);
        // An update counts as an access, so move the node to the head too.
        this.pageNodeList.rearrange(node2Update);
        return;
    }

    PageNode<T> page = new PageNode<>(key, val);

    if (this.cacheKeyCount + 1 <= this.cacheSize) {
        this.pageNodeList.add(page);
        this.cacheKeyCount++;
    } else {
        // Cache is full: evict the tail (least recently used) node
        // and remove its key from the map as well.
        final String key2BEvicted =
            this.pageNodeList.removeLastNodeAndAddNewNode(page, this.cacheKeyCount);
        this.cacheMap.remove(key2BEvicted);
    }

    this.cacheMap.put(key, page);
}
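As a sanity check on the semantics, it is worth noting that the JDK already ships an LRU building block: a LinkedHashMap constructed with accessOrder = true keeps its entries in access-recency order and lets a subclass decide when to evict the eldest entry. This is not the article's object-oriented design, just a compact reference implementation to compare behavior against.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// For comparison only: LRU semantics via the JDK's LinkedHashMap.
public class LruViaLinkedHashMap<T> extends LinkedHashMap<String, T> {
    private final int cacheSize;

    public LruViaLinkedHashMap(int cacheSize) {
        super(cacheSize, 0.75f, true);  // true => iteration order follows access recency
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, T> eldest) {
        return size() > cacheSize;      // evict the least recently used entry when full
    }
}
```

Writing the doubly linked list and HashMap by hand, as in the article, makes the mechanics explicit; LinkedHashMap is what you would likely reach for in production code.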

Please check my GitHub for the complete code.

I have used zerocode-tdd for load testing. It's a nice tool; you can check out more details on this link.

I have run the following load model (200 threads × 50 loops = 10,000 calls):

number.of.threads=200
ramp.up.period.in.seconds=0
loop.count=50
