Caching: Terminology

Terms to know about caching

Adem Catamak
C# Programming
9 min read · Nov 8, 2022


You can reach the Turkish version of this article via this link.

One of the aspects of software development that makes it an engineering discipline is the efficient use of resources: the aim is to provide a quality service to users with limited resources. One of the techniques that helps achieve this goal is caching.

In this article, we will talk about caching-related terms before moving on to ‘how to cache’ and ‘what tools to use’.


What Is Caching?

Caching means storing data in a location that can be accessed faster than its original source, and reading it from there.

For better visualization, consider the refrigerators at the market and at home. Since it would be impractical to go to the market every time we need an ingredient, we keep some ingredients in the refrigerator at home. In this scenario, the market is the main source of the data, and the refrigerator in our house represents our cache.

Photo by Eduardo Soares on Unsplash

The situation where we find the data we need in the cache is called a Cache Hit. The situation where we cannot find it there is called a Cache Miss.
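
To make the Hit/Miss flow concrete, here is a minimal C# sketch. The dictionary plays the refrigerator's role, and LoadFromDatabase is a hypothetical stand-in for the market (the original source):

```csharp
using System.Collections.Generic;

// Minimal sketch of the Hit/Miss flow. The dictionary is the "refrigerator";
// LoadFromDatabase is a hypothetical stand-in for the "market".
public class NaiveCache
{
    private readonly Dictionary<string, string> _cache = new();

    public string GetProduct(string id)
    {
        if (_cache.TryGetValue(id, out var cached))
            return cached;                // Cache Hit: served from the fast store

        var fresh = LoadFromDatabase(id); // Cache Miss: fall back to the source
        _cache[id] = fresh;               // keep a copy for the next request
        return fresh;
    }

    private static string LoadFromDatabase(string id) => $"product-{id}"; // placeholder
}
```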

What Are the Advantages of Caching?

We defined caching as reading data from a place where it can be accessed faster. From this definition we can conclude that the first advantage is a shorter response time. When someone asks you for water, there is an obvious time difference between bringing it from the refrigerator and going to the market for it.

Reduced network traffic is another advantage. If caching is done on the client side, there is no need to send requests to the server; if caching is done on the server side, there is no need to query the data store (database, file system, etc.). Imagine that no one kept food, drinks, or cleaning materials at home and everyone went to the market whenever something was needed. You can imagine how busy the roads to the markets would be.

Cost reduction is another advantage. Thanks to caching, resources such as servers and databases carry less load, so we can perform the same operations on less powerful machines or run the same machines for less time.

There may be times when we cannot access the main source of the data due to planned or unplanned circumstances. In those cases, we can keep providing service for a while with the data in the cache. By extending uptime in this way, we protect our users against disruptions.

I have tried to mention the most important advantages that come to mind. Many more could be listed, but I think it's time to move on to the disadvantages rather than diving deeper into the advantages :)

What Are the Disadvantages of Caching?

The basic logic of caching is that we serve the client a copy of the data from a location we can access faster. This creates a problem: the client may be given an old copy from the cache even though the data has changed in the original source. That is the biggest downside of caching: stale data is presented to the user.

Another disadvantage can be explained as follows. Imagine that you went to the refrigerator and could not find what you were looking for. You would then have to get ready and go to the market anyway. The time spent searching the cache adds up to a longer overall response time.

Apart from these, if you use a tool for caching (Redis, Memcached, etc.), the tool itself brings maintenance costs.

In addition, there are extra tasks to think about, such as filling the refrigerator, removing expired products, and making room by discarding some products when the refrigerator gets too full.

What are Caching Strategies?

There are two main approaches to when data should be cached. One is to cache data when it is needed. The other is to save the data in the cache before it is needed.

On Demand (Pulling)

This approach obtains the data from the original source and writes it to the cache the first time we access it (that is, on a Cache Miss).

The downside of this approach is that the first access to the data is slow. The nice thing about it is that it keeps the cache as small as possible.

Imagine that you are building an online game system with millions of users. Storing a user's information in the cache even when they never log in would create unnecessary cost. Instead, it is more rational to cache the information the first time it is needed, when the user logs into the system. This approach is often preferred in scenarios with huge amounts of data.
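
As an illustration, here is a hedged sketch of on-demand caching using the .NET Microsoft.Extensions.Caching.Memory package; LoadUserAsync is a hypothetical call to the original data source, and the 30-minute lifetime is an arbitrary choice:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// On-demand (cache-aside) sketch: the factory delegate runs only on a Cache
// Miss, so a user who never logs in never occupies cache space.
public class UserCache
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    public Task<string?> GetUserAsync(string userId) =>
        _cache.GetOrCreateAsync(userId, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30);
            return LoadUserAsync(userId); // hypothetical database query, runs on a miss
        });

    private static Task<string> LoadUserAsync(string userId) =>
        Task.FromResult($"user:{userId}"); // placeholder for the original source
}
```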

Note: The Write-Around caching policy is a form of on-demand caching.

Prepopulating (Pushing)

This approach is based on writing the data to the cache before it is even accessed, predicting that it will be used.

The advantage is that, because the data is written to the cache beforehand in anticipation of use, the first access is already fast; we avoid the penalty of a Cache Miss. The disadvantage is that cache space is used unnecessarily if the data is never accessed.

Let's say we are building an e-commerce order system. When a new order is created, different systems (shipping service, billing service, etc.) may need to access its information, and the customer may also want to check the latest status of their order from time to time. In this scenario, it may be a good solution to write the order information to the cache as soon as the order is created, anticipating that newly created orders will be queried.

Note: The approach of writing the data to both the original source and the cache is called Write-Through. If the operation is considered complete once the data is saved only to the cache, the approach is called Write-Back; in that case, the data is copied to the original source after the operation completes. In both approaches, data is saved in the cache before it is needed.
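
Below is a minimal Write-Through sketch, again using Microsoft.Extensions.Caching.Memory; SaveOrderToDatabaseAsync and the one-hour lifetime are illustrative assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Write-Through sketch: the order is written to the original source and to the
// cache in the same operation, so the first read is already a Cache Hit.
public class OrderService
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    public async Task CreateOrderAsync(string orderId, string orderDetails)
    {
        await SaveOrderToDatabaseAsync(orderId, orderDetails);    // original source first
        _cache.Set(orderId, orderDetails, TimeSpan.FromHours(1)); // then the cache copy
    }

    private static Task SaveOrderToDatabaseAsync(string id, string details) =>
        Task.CompletedTask; // placeholder for the real persistence call
}
```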

Cache Types by Storage Medium

In this section, we will look at two types of caching by storage medium, in-memory and distributed, along with their pros and cons.

In-Memory Caching

In this caching type, the memory of the machine where the application runs is used as the cache.

The biggest plus of this approach is that memory access is very fast; data can be reached without any network call. In addition, since no extra tool or service is involved, there is no additional maintenance cost.

The negative sides of this method appear when the application runs on more than one machine. Assume Machine-X takes a copy of the data from the original source and stores it while carrying out an operation. When Machine-Y needs the same data, the original source is queried once more, and Machine-Y keeps its own copy in memory. This repeats for every machine that needs the data.

Another problem is that the copies stored in the memories of different machines can be different versions of the data. Machine-X may query the order status, see it as 'Preparing', and write it to its memory. Machine-Y may then query the same order and save it as 'On the Road'. In this case, the client may get different results depending on which machine it queries.
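
The following small sketch simulates this consistency problem with two independent MemoryCache instances standing in for Machine-X and Machine-Y; the order id and statuses come from the example above:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

// Each machine's in-process cache is independent, so the "same" order can
// end up with different cached statuses.
var machineX = new MemoryCache(new MemoryCacheOptions()); // cache on Machine-X
var machineY = new MemoryCache(new MemoryCacheOptions()); // cache on Machine-Y

machineX.Set("order-42", "Preparing");   // X queried the source earlier...
machineY.Set("order-42", "On the Road"); // ...Y queried it after the status changed

// A client gets a different answer depending on which machine serves it.
Console.WriteLine(machineX.Get<string>("order-42")); // Preparing
Console.WriteLine(machineY.Get<string>("order-42")); // On the Road
```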

Distributed Caching

In this approach, a separate application/tool provides the caching service, so all application instances meet their caching needs through a common point.

The advantage of this approach is that, thanks to the common service, the cached data is the same for Machine-X and Machine-Y. Considering the previous example, when the client queries the order status, the answer is the same no matter which machine fulfills the request.

Another advantage is that after Machine-X caches the data, Machine-Y does not trigger another query against the original data source. This makes the approach more effective at reducing the number of queries going to the data source.

In return, we lose the positive aspects of in-memory caching. To put it more clearly, the downsides of this approach are the network call required for each cache access and the maintenance cost of the application/tool that performs the caching.
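
As a sketch of the distributed approach, here is what the order-status lookup might look like with the StackExchange.Redis client; the connection string, key names, and QueryOrderStatusAsync are assumptions for this example:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

// Distributed caching sketch: Machine-X and Machine-Y run this same code and
// read/write one shared Redis instance, so they always see the same value.
public class OrderStatusCache
{
    private readonly IDatabase _redis =
        ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();

    public async Task<string> GetOrderStatusAsync(string orderId)
    {
        RedisValue cached = await _redis.StringGetAsync(orderId);
        if (cached.HasValue)
            return cached.ToString(); // same answer no matter which machine asks

        var status = await QueryOrderStatusAsync(orderId); // Cache Miss: hit the source
        await _redis.StringSetAsync(orderId, status, TimeSpan.FromMinutes(10));
        return status;
    }

    private static Task<string> QueryOrderStatusAsync(string orderId) =>
        Task.FromResult("Preparing"); // placeholder for the original data source
}
```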

Cache Invalidation

We can assign a lifetime to cached data. In this way, we prevent data from staying in the cache forever.

Absolute Time

In this strategy, when data is cached, it is decided that the data will be cleared from the cache after X amount of time. When that time expires, the data becomes invalid and is deleted from the cache.

Sliding Window

When the sliding window strategy is used, the data is likewise given X amount of time to live when it is cached, but each time the data is accessed, its lifetime is extended.

Note: In this strategy, the lifetime of the data is continuously extended as long as the data is continuously accessed. If the cached copy is not updated when the original data changes, there is a theoretical possibility of serving stale data to the client forever.
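
In .NET, both strategies map directly onto MemoryCacheEntryOptions; the sketch below shows Absolute Time, Sliding Window, and a combination that avoids the 'stale forever' scenario from the note above (keys and durations are illustrative):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions());

// Absolute Time: evicted 5 minutes after being written, however often it is read.
cache.Set("order-42", "Preparing", new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
});

// Sliding Window: evicted only after 5 minutes without access; each read resets the clock.
cache.Set("user-7", "profile-data", new MemoryCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromMinutes(5)
});

// Combining both puts an upper bound on the sliding extensions, so the entry
// cannot live (and stay stale) forever.
cache.Set("user-8", "profile-data", new MemoryCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromMinutes(5),
    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
});
```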

Cache Eviction

When the cache is full, you need to make room for the new data you want to save. Different procedures have been established for this situation, and we will briefly discuss them in this section.

Least Recently Used (LRU)

In this procedure, the last access time of each piece of cached data is kept. When deletion is triggered to free up cache space, deletion starts with the oldest access time.

This procedure rests on the assumption that if data has not been used for a long time, it will probably not be used for a while.
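
For illustration, here is a compact LRU cache sketch in C#: a dictionary for O(1) lookup plus a linked list that keeps entries ordered from most to least recently used. MRU and LFU (below) differ only in how the eviction victim is chosen:

```csharp
using System.Collections.Generic;

// Minimal LRU cache sketch. The linked list's front holds the most recently
// used entry; eviction removes from the back.
public class LruCache<TKey, TValue> where TKey : notnull
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<(TKey Key, TValue Value)>> _map = new();
    private readonly LinkedList<(TKey Key, TValue Value)> _order = new();

    public LruCache(int capacity) => _capacity = capacity;

    public bool TryGet(TKey key, out TValue value)
    {
        if (_map.TryGetValue(key, out var node))
        {
            _order.Remove(node);   // touching an entry...
            _order.AddFirst(node); // ...moves it to the recently-used end
            value = node.Value.Value;
            return true;           // Cache Hit
        }
        value = default!;
        return false;              // Cache Miss
    }

    public void Set(TKey key, TValue value)
    {
        if (_map.TryGetValue(key, out var existing))
        {
            _order.Remove(existing); // updating a key: drop its old position
            _map.Remove(key);
        }
        else if (_map.Count >= _capacity)
        {
            var victim = _order.Last!;   // least recently used entry
            _order.RemoveLast();         // evict it to make room
            _map.Remove(victim.Value.Key);
        }

        var node = new LinkedListNode<(TKey Key, TValue Value)>((key, value));
        _order.AddFirst(node);
        _map[key] = node;
    }
}
```

With a capacity of two, for example, reading key A and then inserting a third key evicts key B, because A's access moved it to the front of the list.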

Most Recently Used (MRU)

In this procedure too, the last access time of each piece of cached data is kept, but when deletion is triggered to free up cache space, it starts with the most recent access time.

This procedure is appropriate in scenarios where we think data that has just been accessed will not be queried again in the short term, while data that has not been accessed for a long time will receive a request in the near future.

Least Frequently Used (LFU)

In this procedure, an access count is kept for each piece of cached data instead of the last access time. When cache cleaning starts, deletion proceeds from the least frequently accessed data.

Caching Types Based on Where the Application Code Runs

Client-Side Caching

In this case, the cache lives on the client. Each client writes the data it needs to a cache space it can access and tries to advance its process with the stored data, without forwarding the request to the service provider.

Server-Side Caching

In this scenario, the service provider manages its own cache. Clients forward their requests to the service provider, and the provider searches its cache for the data needed to generate a response. The cache is managed at a single point, under the responsibility of the service provider.
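
The two sides can be contrasted in a single hedged ASP.NET Core sketch: the ResponseCache attribute asks the client to reuse the response (client-side), while IMemoryCache keeps the server's own copy (server-side). The controller name, route, and LoadCatalog are illustrative assumptions:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

[ApiController]
[Route("api/products")]
public class ProductsController : ControllerBase
{
    private readonly IMemoryCache _cache;

    public ProductsController(IMemoryCache cache) => _cache = cache;

    // Client-side: the Cache-Control header tells the client it may reuse this
    // response for 60 seconds without contacting the server again.
    [HttpGet]
    [ResponseCache(Duration = 60, Location = ResponseCacheLocation.Client)]
    public IActionResult GetCatalog()
    {
        // Server-side: the provider keeps its own copy so that requests that
        // do arrive can be answered without hitting the database.
        var catalog = _cache.GetOrCreate("catalog", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return LoadCatalog(); // hypothetical database call
        });
        return Ok(catalog);
    }

    private static string[] LoadCatalog() => new[] { "item-1", "item-2" };
}
```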

We have covered the basic terms to know about caching. I hope it was a useful article. You can click this link to check out my other articles.
