Untaboo In-memory caching!

Yogesh Sharma · Locus IQ · Jul 14, 2021

Ideology

The pandemic has made online meetings a necessity, and online meeting platforms support seamless recording, yet we still go about taking notes; why?

The whole concept of taking notes seems redundant and a waste of time, doesn’t it? Yet despite its many shortcomings, note-taking has been standardized, recommended, encouraged, and made ubiquitous. Why!?

For three primary reasons:

  • To avoid multiple redirections to get hold of information
  • Prefer familiarity to inscrutability
  • Abstraction has its advantages

Is all of this applicable to technical implementations too? Well, yes! Every ounce of code at the end of the day represents some real-world entity or a relationship among them, so the same applicability, challenges, and solutioning carry over.

Let’s go through the aforementioned reasons and relate them to caching (the technical counterpart of taking notes).

To avoid multiple redirections to get hold of information

How did the internet revolutionize the market? Availability!

All information is available at your fingertips! No asking around, no waiting for someone to get back with notes. Similarly, the essence of caches lies in their availability. Every hop across systems in a microservice-based architecture, to databases, or to flat files adds unwanted latency. Once cached, the information is available in memory (or in an external caching system) at negligible latency, thereby helping deliver a better UX.

Prefer familiarity to inscrutability

Shaping words to suit our requirements while taking minutes of meetings (MOMs) not only makes them easier to recall and memorize but also enhances the thought process. Similarly, caching data exactly as it sits in the database does not make sense. Remodeling the data according to functionality and making it readily accessible helps reduce computation frequency, API latency, etc., as the sketch below illustrates.
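As a toy illustration of such remodeling (every name here is hypothetical, not an actual Locus entity), one could cache a ready-to-serve view built from raw rows, so repeat requests skip both the database hit and the recomputation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache a remodeled, ready-to-serve view instead of raw DB rows.
// UserRow and UserDao are hypothetical stand-ins for a persistence layer.
public class ProfileViewCache {

    public record UserRow(String firstName, String lastName, String timezone) {}
    public record UserProfileView(String displayName, String timezone) {}

    public interface UserDao {
        UserRow findById(String userId);
    }

    private final Map<String, UserProfileView> cache = new ConcurrentHashMap<>();
    private final UserDao userDao;

    public ProfileViewCache(UserDao userDao) {
        this.userDao = userDao;
    }

    public UserProfileView getProfile(String userId) {
        // computeIfAbsent builds the remodeled view once, then reuses it
        return cache.computeIfAbsent(userId, id -> {
            UserRow row = userDao.findById(id); // single DB hit per user
            return new UserProfileView(
                    row.firstName() + " " + row.lastName(),
                    row.timezone());
        });
    }
}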

Abstraction has its advantages

Why does everyone stress building shorter PPTs? Abstraction!

Do we ever really read the whole newspaper word for word? Skimming and scanning have always come to our rescue. Abstraction does not deprive one of information; rather, it helps in perceiving the desired meaning.

Similarly, rather than worrying about detailed implementation, skimming via pluggable wrappers is always suggested, i.e., usage-ready cache builders with performance-tuning parameters let us tackle complex use cases.

Cache Site

The question that comes to mind is: Where should we introduce caching?

Frontend: It is in fact a general practice to cache profile data, account settings, static images, etc. at the frontend. This helps avoid redundant calls and also ‘relaxes’ the servers from processing similar data time and again.

But is this really the best we can do to decrease API response time? What if we built on the same logic and had a caching layer in the backend too?

Backend: While caching in the frontend avoids redundant calls to the server for that one logged-in user, caching in the backend off-loads database stress onto the caching layer for entire groups of users.

Nothing is all jewels and pearls.

Correction: Nothing is all jewels and pearls, nor is everything smut and soot.

Stereotypically, caching on backend servers, and in-memory at that, has been tabooed. I believe otherwise. If parsing and building fat files is a valid excuse to increase machine size, then so is avoiding repeated hits to the database (even a read-only replica). With our increasing dependence on the internet and the virtual world, databases tend to become the bottleneck of the system.

Diving deep

Caching objects in memory on backend servers should be done, albeit cautiously. Rather than caching anything and everything, a functional analysis of the product will highlight certain use cases that demand remodeling. Here are a few instances where it has helped us at Locus:

Ephemeral yet static data

Though connection details are ephemeral in nature, once they are set their update frequency is close to zero, so they can be persisted in memory. A few such use cases:

Third-party connection strings

We have ample options to choose from when implementing ABAC (attribute-based access control) oriented third-party authenticators. They work on the principle of authenticating the attributes of incoming requests. These attributes can model anything from something as small as a password policy to entire clients with their managerial/operational behavior.

To govern their authentication, building and maintaining connections with these authenticators is vital. These connections could take any form: websocket service (wss) or session-based tokenization. Now assume being charged on a per-connection basis, or having to mitigate throttling issues: how do you then optimize connection utilization? Cache it… group it… reuse it!

Rather than refabricating them, we can group the relevant users and keep reusing the same connection to authenticate details. In-memory cache definitely comes to the rescue here, storing these ephemeral connection strings, tokens, behaviors, and whatnot, along the lines of the sketch below.
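A minimal sketch of this grouping idea, assuming a hypothetical AuthConnection handle and one shared connection per client group (the names are illustrative, not a real authenticator API):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

// Sketch: reuse one authenticator connection per client group instead of
// opening a fresh one per request.
public class ConnectionCache {

    public interface AuthConnection {
        boolean authenticate(String token);
    }

    private final LoadingCache<String, AuthConnection> connections =
            CacheBuilder.newBuilder()
                    // drop idle connections; a removal listener could also close them
                    .expireAfterAccess(30, TimeUnit.MINUTES)
                    .build(new CacheLoader<String, AuthConnection>() {
                        @Override
                        public AuthConnection load(String clientGroup) {
                            return openConnection(clientGroup); // expensive, once per group
                        }
                    });

    public boolean authenticate(String clientGroup, String token) {
        return connections.getUnchecked(clientGroup).authenticate(token);
    }

    private AuthConnection openConnection(String clientGroup) {
        // Placeholder: establish the wss/session-based connection here.
        return token -> token != null && !token.isEmpty();
    }
}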

Websocket connection details

Let us build a websocket room (WR1) for a bunch of cab drivers stationed on ABC Avenue. If a passenger requests a cab, we tap into this room and notify all these drivers in WR1 to accept.

Once all the requests are served and none of the drivers are around, we could reuse the same WR1 room to represent XYZ Street instead. This ensures the reuse of the room and its connection details rather than going through the expense of destroying and rebuilding one.

I agree websocket rooms are not that expensive in memory or resources. But what if we were dealing with a similar concept and had a limited number of paid rooms allotted? What if creation carried expensive implications? A rough sketch of the reuse idea follows.
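Something along these lines could pool and re-label rooms (Room and its fields are purely illustrative):

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: pool expensive "rooms" and re-label an idle one for a new
// location instead of destroying and recreating it.
public class RoomPool {

    public static class Room {
        String location; // e.g. "ABC Avenue", later reused for "XYZ Street"

        Room(String location) {
            this.location = location;
        }
    }

    private final Deque<Room> idleRooms = new ArrayDeque<>();

    public synchronized Room acquire(String location) {
        Room room = idleRooms.poll();
        if (room == null) {
            room = new Room(location); // expensive creation happens rarely
        } else {
            room.location = location;  // cheap re-labeling of an idle room
        }
        return room;
    }

    public synchronized void release(Room room) {
        idleRooms.push(room); // keep the connection details around for reuse
    }
}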

Relatively perpetual data

  • Email templates from static files
  • Internationalization strings
  • Static assets’ paths
  • Client settings (timezone, language preference)

Static data that will not change frequently, or whose changes have no adverse impact, is also a good use case to consider.

We have MBs of translation strings mapped to their respective keywords. As the very mapping of each word will never change, it is sensible to cache them in memory. Compared with the alternative, incessantly re-reading the static files on every lookup, this spares us mandatory file I/O and the latency it incurs. A sketch follows.
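A minimal sketch, assuming translations live in per-locale .properties files (an assumption for illustration, not necessarily our layout):

import java.io.IOException;
import java.io.InputStream;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: load each locale's translation strings once, then serve every
// subsequent lookup from memory.
public class TranslationCache {

    private final Map<String, Properties> byLocale = new ConcurrentHashMap<>();

    public String translate(String locale, String key) {
        Properties strings = byLocale.computeIfAbsent(locale, this::loadLocale);
        return strings.getProperty(key, key); // fall back to the key itself
    }

    private Properties loadLocale(String locale) {
        Properties props = new Properties();
        String resource = "/i18n/" + locale + ".properties";
        try (InputStream in = TranslationCache.class.getResourceAsStream(resource)) {
            if (in != null) {
                props.load(in); // the file is read exactly once per locale
            }
        } catch (IOException e) {
            throw new IllegalStateException("failed to load " + resource, e);
        }
        return props;
    }
}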

Code! Code! Code!

There are many in-memory caching libraries available. Building a wrapper on top of them for convenient usage is of utmost importance. For example, on top of Guava’s LoadingCache:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public abstract class BaseEntityCache<KEY, VALUE> {

    // Values are wrapped in Optional so a missing entity can be cached
    // without poisoning the cache with nulls.
    protected LoadingCache<KEY, Optional<VALUE>> cache;

    public BaseEntityCache(Function<KEY, VALUE> loader) {
        this.cache = CacheBuilder.newBuilder()
                .expireAfterWrite(10_000, TimeUnit.MILLISECONDS)
                .build(new CacheLoader<KEY, Optional<VALUE>>() {
                    @Override
                    public Optional<VALUE> load(KEY key) {
                        return Optional.ofNullable(loader.apply(key));
                    }
                });
    }

    protected LoadingCache<KEY, Optional<VALUE>> getCache() {
        return cache;
    }

    public void invalidate(KEY key) {
        getCache().invalidate(key);
    }

    public void invalidateAll() {
        getCache().invalidateAll();
    }

    public VALUE get(KEY key) {
        // getUnchecked spares callers the checked ExecutionException of get()
        return getCache().getUnchecked(key).orElse(null);
    }

    public void put(KEY key, VALUE value) {
        getCache().put(key, Optional.ofNullable(value));
    }
}
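To use the wrapper, a feature team only needs to subclass it with a loader. A minimal, hypothetical example (ClientSettings and its DAO are made-up names):

// Hypothetical usage: a per-client settings cache loaded straight from a DAO.
record ClientSettings(String timezone, String language) {}

interface ClientSettingsDao {
    ClientSettings findByClientId(String clientId);
}

class ClientSettingsCache extends BaseEntityCache<String, ClientSettings> {
    ClientSettingsCache(ClientSettingsDao dao) {
        super(dao::findByClientId); // the loader runs only on a cache miss
    }
}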

There can be other variations of the cache implementation as well. Synchronous fetching definitely solves almost all of our problems but leaves one: whenever an existing key is invalidated (and there is no logic to reload it), the next read essentially implies another synchronous fetch from the data source.

There is a way around this for such frequent fetches: reload invalidated entries in the background using an ExecutorService. Let’s have a look:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListenableFutureTask;

import java.util.Optional;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public abstract class ResilientBaseEntityCache<KEY, VALUE>
        extends BaseEntityCache<KEY, VALUE> {

    private final ExecutorService executorService;

    public ResilientBaseEntityCache(Function<KEY, VALUE> loader,
                                    ExecutorService executorService) {
        super(loader);
        this.executorService = executorService;
        // Rebuild the cache with refreshAfterWrite: refresh must be shorter
        // than expiry, otherwise entries expire before a refresh can trigger.
        this.cache = CacheBuilder.newBuilder()
                .refreshAfterWrite(10_000, TimeUnit.MILLISECONDS)
                .expireAfterWrite(60_000, TimeUnit.MILLISECONDS)
                .build(new CacheLoader<KEY, Optional<VALUE>>() {

                    @Override
                    public Optional<VALUE> load(KEY key) {
                        return Optional.ofNullable(loader.apply(key));
                    }

                    @Override
                    public ListenableFuture<Optional<VALUE>> reload(KEY key, Optional<VALUE> oldValue) {
                        // Refresh asynchronously; readers keep getting the
                        // old value until the new one is ready.
                        Callable<Optional<VALUE>> task = () -> load(key);
                        ListenableFutureTask<Optional<VALUE>> fLoader =
                                ListenableFutureTask.create(task);
                        executorService.execute(fLoader);
                        return fLoader;
                    }
                });
    }
}
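Wiring it up stays small. A hypothetical subclass with a two-thread refresh pool (loadFromDb is a placeholder for the real data-source fetch):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical wiring: a small pool performs background refreshes, so
// readers keep getting the last known value while a reload is in flight.
public class ResilientSettingsCache extends ResilientBaseEntityCache<String, String> {

    public ResilientSettingsCache(ExecutorService pool) {
        super(ResilientSettingsCache::loadFromDb, pool);
    }

    private static String loadFromDb(String key) {
        return "value-for-" + key; // placeholder for the real fetch
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        ResilientSettingsCache cache = new ResilientSettingsCache(pool);
        System.out.println(cache.get("timezone")); // first read loads synchronously
        pool.shutdown();
    }
}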

Takeaway

Concepts and their applications should never be tabooed. What matters is applying and mapping them to the relevant use cases. This in turn not only enhances system performance but also weaves coherence into the underlying design.
