Maximizing Redis Cache Performance — Code Aspects

Mykola Demchuk
Published in SSENSE-TECH
5 min read · Dec 1, 2023

In today’s fast-paced, data-driven digital world, the speed at which we can access information is essential to providing the best user experience and ensuring productivity. This is where Redis Cache comes in, helping to optimize speed and make better use of resources. However, effectively using Redis Cache requires more than just setting it up; it also involves employing the right techniques in your code.

Since we use Redis at SSENSE, this article aims to share some of the techniques we found useful while leveraging it in our services.

Optimizing Data Structures and Access Patterns

The foundation of efficient access to data in Redis lies in proper key design. Redis uses a key-value store, so it’s important to choose keys carefully to match your application’s data access patterns.
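To make this concrete, here is a minimal sketch of a key-naming convention. The helper names and the `entity:id[:field]` scheme are illustrative, not from the original article; the point is that a predictable, hierarchical naming pattern keeps keys short, collision-free, and easy to reason about.

```javascript
// Hypothetical key-naming helpers: a predictable "entity:id[:field]" scheme
// keeps keys consistent across the codebase and easy to scan by prefix.
function productKey(id) {
  return `product:${id}`; // e.g. "product:123" — one Hash per product
}

function productFieldKey(id, field) {
  // e.g. "product:123:stock" — used when each property is its own string key
  return `${productKey(id)}:${field}`;
}

console.log(productKey(123));             // "product:123"
console.log(productFieldKey(123, 'sku')); // "product:123:sku"
```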

We should always ensure we use the proper data structures and efficiently manipulate them, as designing for growing legacy applications is not always easy or obvious. Let’s take a look at a real-world example and go through the step-by-step optimization process to find the best key-structure approach.

Let’s use a simple product object as an example to design a time- and memory-efficient cache architecture.

// Product 1
{
  "id": 123,
  "sku": "12345F123451",
  "productCode": "12345F12345",
  "name": "White T-shirt",
  "stock": 12
}

Product 1 has only five properties, making it relatively simple. The general rule for small objects that need to be represented in Redis is to use Hashes instead of plain key-value pairs whenever possible. Storing data for small objects in a Redis Hash is more memory efficient than using multiple keys. For example, for 10 million products, a Hash used 2.3 times less memory than storing the same information in 50 million key-value pairs (4.2 GB vs. 1.8 GB).

Memory usage for Redis key values and Hashes
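The two layouts compared above can be sketched as the raw commands each approach would issue. This is an illustrative sketch, not tied to a specific Redis client; it just builds the command arrays for the same product stored as one Hash versus five separate string keys.

```javascript
const product = {
  id: 123,
  sku: '12345F123451',
  productCode: '12345F12345',
  name: 'White T-shirt',
  stock: 12,
};

// Option A: one Hash under a single key — the layout the memory numbers favor.
const hashCommand = [
  'HSET',
  `product:${product.id}`,
  ...Object.entries(product).flatMap(([field, value]) => [field, String(value)]),
];

// Option B: one string key per property — five keys for the same data.
const keyValueCommands = Object.entries(product).map(
  ([field, value]) => ['SET', `product:${product.id}:${field}`, String(value)]
);

console.log(hashCommand.length);      // 12: command + key + 5 field/value pairs
console.log(keyValueCommands.length); // 5 separate SET commands
```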

From a latency perspective, when Hashes are small, Redis’ amortized time for HGET and HSET commands is O(1). Let’s run a simple load test to measure P99 latency at 20,000 requests per second. The server runs on an AWS EC2 t3a.large instance with a simple native Node.js HTTP client. We will store our product’s properties both as simple key-value pairs and as a single Hash, then retrieve both sets of data.

Retrieving load test key values vs Hash

As we can see, Hash latency is nearly the same as that of key-value pairs, but Hashes are preferable for memory optimization.

But what if the application gets bigger and the product object becomes more complex? It now has descriptions, thumbnails, sales information, etc. In this case, using Hashes can be trickier if we want to maintain a time-efficient application. We will need to consider our access patterns.

Now our fictitious product object has 15 keys, and it’s a good time to start optimizing our access design for the Hash keys. We need to keep track of memory usage with Hashes and fetch only the necessary information efficiently. Compared to small Hashes with very few properties, where Redis’ amortized time is O(1), retrieving a big Hash takes O(N) time, in our case O(15). But perhaps the application doesn’t need the entire Hash for every use case, so it’s advisable to fetch only the Hash fields needed for each specific use case.
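One way to express "fetch only what the use case needs" is a per-use-case field map. This is a hedged sketch — the use-case names and field lists are invented for illustration — showing how an HMGET command, which fetches several Hash fields in one call, would be built so a caller never pays for all 15 fields.

```javascript
// Hypothetical mapping of use cases to the Hash fields they actually need.
const FIELDS_BY_USE_CASE = {
  listingPage: ['name', 'thumbnail', 'price'],
  stockCheck: ['sku', 'stock'],
};

// Build an HMGET command fetching only the fields a given use case requires,
// instead of pulling the full 15-field Hash with HGETALL.
function buildHmgetCommand(productId, useCase) {
  const fields = FIELDS_BY_USE_CASE[useCase];
  return ['HMGET', `product:${productId}`, ...fields];
}

console.log(buildHmgetCommand(123, 'stockCheck'));
// ["HMGET", "product:123", "sku", "stock"]
```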

Let’s compare fetching the whole Product Hash with HGETALL against fetching only five of its properties individually with HGET.

Retrieving load test HGET vs HGETALL

As we can see, retrieving fields one by one is more efficient than fetching the whole Hash; therefore, we should only fetch the necessary information. The obvious downside of this approach is network overhead. Let’s talk about how we can mitigate it.

Optimizing Network Performance

When making a lot of requests, the application will use a significant amount of network resources. To optimize this, there are a few techniques we can adopt in our code to improve network efficiency. The first technique to consider is pipelining. By using pipelining, we can dispatch multiple commands to the server without pausing for individual replies, thus reducing latency and network usage for each operation.
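The intuition behind pipelining can be sketched with a toy latency model. This is a conceptual illustration only — the round-trip time is an assumed constant, not a number from the load tests in this article: without pipelining, each command pays a full network round trip, while a pipeline sends all commands in one batch and reads the replies together.

```javascript
const RTT_MS = 1; // assumed network round-trip time, for illustration only

// One round trip per command: N commands cost N round trips.
function latencyWithoutPipelining(commandCount) {
  return commandCount * RTT_MS;
}

// All commands dispatched in one batch, replies read together: one round trip
// regardless of how many commands are in the pipeline.
function latencyWithPipelining(commandCount) {
  return RTT_MS;
}

console.log(latencyWithoutPipelining(5)); // 5 (ms of pure network wait)
console.log(latencyWithPipelining(5));    // 1 (ms)
```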

Let’s see how powerful pipelining is by conducting another load test using the same 15-property object.

Retrieving load test Pipelining vs. Regular connections

Based on these results, for simple use cases it is recommended to use pipelining in your application whenever possible.

Lua scripting is another technique to optimize network performance and improve latency, but we need to be more careful when using it.

Lua scripts can run complex sequences of commands atomically, directly on the Redis server, reducing the number of requests and helping with advanced scenarios such as updating multiple properties in a Hash atomically. But this comes at a cost to Redis’ performance. Let’s compare the performance of a Lua script against individual operations to update the Hash. In this load test, we will update five properties in our Product Hash, once using Lua and once with separate pipelined requests.

Persisting load test Lua vs. Pipelining

As we can see, Lua scripting performs well with concurrent commands and can compete with pipelining on latency. However, when a Lua script executes within Redis, it runs atomically, blocking all other server activity for its entire runtime; no other commands can be processed simultaneously. So Lua scripts should only be used when you need atomic transactions, to avoid the need to roll back.
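As a rough sketch of what such an atomic multi-field update could look like, here is an illustrative Lua script together with a helper that assembles the arguments a client would pass to EVAL. The script body, key scheme, and field names are assumptions for illustration, not the script used in the load test above.

```javascript
// A Lua script that HSETs every field/value pair passed in ARGV against the
// single Hash key in KEYS[1]. Because Redis runs the script atomically, no
// other client can observe a partially updated Hash.
const UPDATE_FIELDS_SCRIPT = `
  for i = 1, #ARGV, 2 do
    redis.call('HSET', KEYS[1], ARGV[i], ARGV[i + 1])
  end
  return #ARGV / 2
`;

// Assemble the raw EVAL arguments: script, number of keys, the Hash key,
// then alternating field/value pairs.
function buildEvalArgs(productId, updates) {
  const argv = Object.entries(updates).flatMap(([f, v]) => [f, String(v)]);
  return ['EVAL', UPDATE_FIELDS_SCRIPT, '1', `product:${productId}`, ...argv];
}

const args = buildEvalArgs(123, { stock: 11, name: 'White T-Shirt' });
console.log(args.length); // 4 fixed args + 2 field/value pairs = 8
```

For repeated use, the same script would normally be loaded once with SCRIPT LOAD and invoked by its SHA1 via EVALSHA, avoiding resending the script body on every call.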

Conclusion

Optimizing Redis performance depends on a thorough understanding of critical factors such as data structures, network latency, hardware capabilities, and configuration parameters. Adjusting these components to align with your unique use case can enhance the effectiveness of your Redis deployment, resulting in a more seamless data management experience.

By leveraging the techniques above, we were able to improve both the latency to our customers and the capacity of our Redis clusters without any additional hardware.

Editorial reviews by Catherine Heim, Mario Bittencourt & Sam-Nicolai Johnston.

