Redis - Cache-Aside Pattern

Tegar Budi Septian
Blibli.com Tech Blog
4 min read · Jan 2, 2022

Imagine you have a web service that returns a user's profile details. Every time the user opens their own profile in the app, your service queries the database. That means reading from disk on every request, even though the data is mostly static. Or maybe the retrieval costs more than 4 seconds, or even tens of seconds. Your user may eventually close your app!

There is a way to reduce that time: using Redis as a cache.

REDIS

Redis is an open-source, in-memory key-value store, used as a database, cache, and message broker.

For caching, the idea is to store the data in memory (RAM), which makes it much faster to read than an RDBMS (e.g. MySQL, PostgreSQL) that reads from disk.

Cache-Aside Pattern

Here is how the Cache-Aside Pattern works (a minimal code sketch follows the list):

  1. Cache hit. The service reads the data from the cache. If the requested data is there, it returns it and the process stops here.
  2. Cache miss. If the data is not stored in the cache, the service reads it from the database.
  3. Once the data is obtained, the service stores it in the cache for future requests.
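
The flow above maps directly to a few lines of application code. Below is a minimal sketch in Python using the redis-py client; the get_user function, the get_user_from_db stub, and the key format are illustrative assumptions rather than anything from this post:

import json
import redis

# Connect to a local Redis (the one we run via Docker in the next section).
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_user_from_db(user_id):
    # Stub standing in for the real database query.
    return {"id": user_id, "name": "Tegar Budi Septian", "email": "tegar@gmail.com"}

def get_user(user_id):
    key = f"user-service.user.{user_id}"

    # 1. Cache hit: return the cached value and stop here.
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: read the data from the database.
    user = get_user_from_db(user_id)

    # 3. Store the data in the cache for future requests, then return it.
    cache.set(key, json.dumps(user))
    return user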

Run Redis on your Machine

Redis can be downloaded from its official website, Download Redis. The site contains complete installation steps, from building the source code to using the Docker image. The easiest way is the Docker image, because you don't have to compile the source code; you only need to understand how Docker works first.

Installation

Prepare a docker configuration file, docker-compose.yaml

version: '3.5'

services:
  redis:
    container_name: redis
    image: redis:6
    ports:
      - 6379:6379

In a terminal, run this command:

docker-compose -f docker-compose.yaml up -d

This starts the Redis container in the background; you can also see and manage the running container in Docker Desktop.

Set and Get Data

To store data in Redis, we need to connect to the Redis server using the redis-cli command, as below:

D:\tools\redis> docker container exec -it redis /bin/sh
# redis-cli -h localhost
localhost:6379>

docker container exec -it redis /bin/sh opens a shell inside the Redis container.

redis-cli -h localhost connects to the Redis instance on our local machine.

Set/Store Data

Let’s say we have user data from a database with this value as a JSON string:

{
  'id': 100,
  'name': 'Tegar Budi Septian',
  'email': 'tegar@gmail.com'
}

and we want to store that data in the Redis cache so that, on the next request, the service retrieves the value from Redis instead of the database. We can use the set command, followed by a key and value pair.

set key value

localhost:6379> set user-service.user.100 "{'id': 100, 'name': 'Tegar Budi Septian', 'email': 'tegar@gmail.com'}"
OK
localhost:6379>

set is the command to store the data.

user-service.user.100 is the key. We can use any key pattern we like to make it readable. In this example, user-service stands for the name of the service, user stands for the entity or table, and 100 stands for the id, which makes the key unique.

The value, as a string, follows the key.
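
The same command can be issued from application code. Here is a minimal sketch using the redis-py client (an assumption; the post itself only uses redis-cli), serializing the user data to a JSON string before storing it:

import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

user = {"id": 100, "name": "Tegar Budi Septian", "email": "tegar@gmail.com"}

# SET the serialized user under the same key pattern as above.
cache.set("user-service.user.100", json.dumps(user))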

Get Data

As in the cache-aside pattern above, before we query the database we have to check whether the data exists in Redis. This is done with the get command.

get key

localhost:6379> get user-service.user.100
"{'id': 100, 'name': 'Tegar Budi Septian', 'email': 'tegar@gmail.com'}"
localhost:6379>

get is the command to retrieve the value of a key.

It is followed by the key we created before, user-service.user.100. Since the data exists, the service should not query the database; instead, it returns this data from Redis.

But if the data is not available, as below, the service can refresh it by querying the database and storing the result back in Redis.

localhost:6379> get user-service.user.100
(nil)
localhost:6379>
or

localhost:6379> exists user-service.user.100
(integer) 0
localhost:6379>
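
From application code, a miss simply shows up as an empty reply. Below is a small redis-py sketch (again an assumption, not from the post): the client returns None for a missing key, which is the signal to fall back to the database and repopulate the cache:

import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

cached = cache.get("user-service.user.100")
if cached is not None:
    user = json.loads(cached)  # cache hit: use the value from Redis
else:
    user = None                # cache miss: query the database and SET the result back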

That demonstrates the cache-aside pattern. But a problem comes up here: what if, for some reason, the data changes in the database while an old copy is still stored in the cache? The cache is then no longer up to date with the database.

To handle this, we can set an expiry time on the key so that, for example, every 15 minutes the key is removed. The next request will then query the database for the latest data.

Expiration

To give a key an expiry time, we need to add an additional parameter when creating the key-value pair.

localhost:6379> set user-service.user.100 "{'id': 100, 'name': 'Tegar Budi Septian', 'email': 'tegar@gmail.com'}" EX 900
OK
localhost:6379> ttl user-service.user.100
(integer) 887
localhost:6379>

There is an additional parameter after the value: EX followed by 900.

EX sets the expiry time in seconds; 900 seconds equals 15 minutes.

ttl followed by the key shows the remaining time before the key expires.
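
In redis-py (again an assumption rather than something shown in the post), the same expiry is set with the ex parameter, and the remaining lifetime is read with ttl():

import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

user = {"id": 100, "name": "Tegar Budi Septian", "email": "tegar@gmail.com"}

# EX 900 from redis-cli becomes ex=900 here: the key expires after 15 minutes.
cache.set("user-service.user.100", json.dumps(user), ex=900)

print(cache.ttl("user-service.user.100"))  # remaining seconds, e.g. 900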

Conclusion

In some cases we have static data, or a query that takes several seconds, and we can find a way to make what we have better.

In this simple demonstration, Redis makes our service faster at retrieving data.
