Optimizing Redis Memory Usage and Performance with Encoding and Compression in Java

Comviva MFS Engineering Tech Blog
Mar 13, 2024


By Vishal Priyadarshi

Redis serves as a critical component in many modern applications, often used for caching JSON-encoded data. However, as data volumes grow, so does the memory usage of Redis, leading to increased costs and potential performance bottlenecks. In this article, we’ll explore various encoding and compression techniques in Java to optimize Redis memory usage and improve performance.

Problem Statement

The primary challenges we face are high Redis memory usage, increased costs, and slow encode/decode speeds. To address these issues, we’ll explore strategies such as utilizing binary serialization formats and implementing compression for large or repetitive data structures.

Efficient Data Handling Technologies: MessagePack, Snappy, and Zstandard

MessagePack: MessagePack is a binary serialization format that is designed to be more efficient in terms of both size and speed compared to traditional text-based formats like JSON. It aims to achieve smaller message sizes while maintaining compatibility with JSON-like data structures. MessagePack encodes data in a compact binary format, making it suitable for data exchange between applications, especially in resource-constrained environments or where network bandwidth is limited.
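
As a rough, self-contained sketch (not part of the article's benchmark), the msgpack-java core API can pack a small map and read it back; the binary output is typically noticeably smaller than the equivalent JSON text:

import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;
import java.io.IOException;

public class MsgpackRoundTrip {
    public static void main(String[] args) throws IOException {
        // Pack a two-entry map: {"user": "alice", "age": 30}
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        packer.packMapHeader(2);
        packer.packString("user");
        packer.packString("alice");
        packer.packString("age");
        packer.packInt(30);
        byte[] encoded = packer.toByteArray();
        System.out.println("MessagePack size: " + encoded.length + " bytes");

        // Unpack the same fields back out of the binary buffer
        MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(encoded);
        unpacker.unpackMapHeader();
        System.out.println(unpacker.unpackString() + " = " + unpacker.unpackString());
        System.out.println(unpacker.unpackString() + " = " + unpacker.unpackInt());
    }
}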

Snappy Compressor: Snappy is a fast compression/decompression library developed by Google. It prioritizes compression and decompression speed over maximum compression ratio, providing reasonable ratios while being significantly faster than algorithms like zlib. It is particularly well suited to scenarios where speed is paramount, such as in-memory compression, network data transfer, and real-time data processing.
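
For instance, a minimal round trip with the snappy-java library (an illustrative sketch, not from the original article; String.repeat needs Java 11+) compresses a repetitive buffer and restores it losslessly:

import org.xerial.snappy.Snappy;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SnappyRoundTrip {
    public static void main(String[] args) throws IOException {
        // Highly repetitive input compresses well even with a speed-oriented codec
        byte[] original = "repetitive payload ".repeat(50).getBytes(StandardCharsets.UTF_8);

        byte[] compressed = Snappy.compress(original);     // fast, moderate ratio
        byte[] restored = Snappy.uncompress(compressed);   // lossless round trip

        System.out.println(original.length + " -> " + compressed.length + " bytes");
        System.out.println("round trip ok: " + (restored.length == original.length));
    }
}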

Zstandard (Zstd): Zstandard, also known as Zstd, is a fast and highly efficient compression algorithm developed by Facebook. It offers a wide range of compression levels, allowing users to adjust the trade-off between compression ratio and compression speed based on their specific requirements. Zstd provides both excellent compression ratios and fast compression and decompression speeds, making it suitable for a variety of use cases, including file compression, network communication, and in-memory compression. Additionally, Zstd supports features like prebuilt dictionaries for improved compression performance with specific data patterns and custom compression levels to optimize for different scenarios.
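
To illustrate the dictionary feature, here is a hedged sketch using zstd-jni's ZstdDictTrainer; the class usage and sample values are assumptions rather than details from the article, and training works best with a large, varied set of real payloads:

import com.github.luben.zstd.Zstd;
import com.github.luben.zstd.ZstdDictCompress;
import com.github.luben.zstd.ZstdDictTrainer;
import java.nio.charset.StandardCharsets;

public class ZstdDictionaryExample {
    public static void main(String[] args) {
        // Train a small dictionary from many representative sample payloads.
        // In practice, feed real production values; synthetic or too-uniform
        // samples can make training fail or yield a poor dictionary.
        ZstdDictTrainer trainer = new ZstdDictTrainer(4 * 1024 * 1024, 8 * 1024);
        for (int i = 0; i < 20_000; i++) {
            String sample = "{\"orderId\":" + i + ",\"status\":\"PENDING\",\"currency\":\"USD\"}";
            trainer.addSample(sample.getBytes(StandardCharsets.UTF_8));
        }
        byte[] dictionary = trainer.trainSamples();

        // Compare plain Zstd against dictionary-assisted Zstd for one small payload
        byte[] payload = "{\"orderId\":42,\"status\":\"PENDING\",\"currency\":\"USD\"}"
                .getBytes(StandardCharsets.UTF_8);
        byte[] plain = Zstd.compress(payload, 3);
        byte[] withDict = Zstd.compress(payload, new ZstdDictCompress(dictionary, 3));
        System.out.println("zstd: " + plain.length + " bytes, zstd+dict: " + withDict.length + " bytes");
    }
}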

Solution Overview

We’ll cover the following encoding and compression techniques:

  1. JSON Plain: Encode data directly into JSON format without compression.
  2. Msgpack Plain: Encode data using MessagePack format without compression.
  3. JSON Snappy: Encode data into JSON format and compress using Snappy compression.
  4. Msgpack Snappy: Encode data using MessagePack format and compress using Snappy compression.
  5. JSON Zstd: Encode data into JSON format and compress using Zstd compression.
  6. Msgpack Zstd: Encode data using MessagePack format and compress using Zstd compression.
  7. JSON Zstd Dict: Encode data into JSON format and compress using Zstd compression with a pre-built dictionary.
  8. Msgpack Zstd Dict: Encode data using MessagePack format and compress using Zstd compression with a pre-built dictionary.

Benchmark and Performance Data

Size Comparison (Payload: String — 500 bytes)

Performance (Encode and Decode Time in ns/op)

Sample Code

import com.google.gson.Gson;
import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.xerial.snappy.Snappy;
import com.github.luben.zstd.Zstd;
import com.github.luben.zstd.ZstdDictCompress;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class EncodingAndCompressionExample {

    public static void main(String[] args) throws IOException {
        // Sample string payload (the benchmark above used a ~500-byte string)
        String payload = "This is a sample string payload with 500 bytes of data.";

        // A Zstd dictionary should be pre-built from representative sample payloads
        // (for example with ZstdDictTrainer); raw payload bytes are used here only
        // as a stand-in so the example is self-contained.
        byte[] dictionary = payload.getBytes(StandardCharsets.UTF_8);

        // 1. JSON Plain: encode directly into JSON, no compression
        String jsonPlain = new Gson().toJson(payload);
        byte[] jsonBytes = jsonPlain.getBytes(StandardCharsets.UTF_8);
        System.out.println("JSON Plain: " + jsonBytes.length + " bytes");

        // 2. Msgpack Plain: encode using MessagePack, no compression
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        packer.packString(payload);
        byte[] msgpackPlain = packer.toByteArray();
        packer.close();
        System.out.println("Msgpack Plain: " + msgpackPlain.length + " bytes");

        // 3. JSON Snappy: JSON encoding followed by Snappy compression
        byte[] jsonSnappy = Snappy.compress(jsonBytes);
        System.out.println("JSON Snappy: " + jsonSnappy.length + " bytes");

        // 4. Msgpack Snappy: MessagePack encoding followed by Snappy compression
        byte[] msgpackSnappy = Snappy.compress(msgpackPlain);
        System.out.println("Msgpack Snappy: " + msgpackSnappy.length + " bytes");

        // 5. JSON Zstd: JSON encoding followed by Zstandard compression
        byte[] jsonZstd = Zstd.compress(jsonBytes);
        System.out.println("JSON Zstd: " + jsonZstd.length + " bytes");

        // 6. Msgpack Zstd: MessagePack encoding followed by Zstandard compression
        byte[] msgpackZstd = Zstd.compress(msgpackPlain);
        System.out.println("Msgpack Zstd: " + msgpackZstd.length + " bytes");

        // 7. JSON Zstd Dict: Zstandard compression using the pre-built dictionary
        ZstdDictCompress dictCompress = new ZstdDictCompress(dictionary, 3);
        byte[] jsonZstdDict = Zstd.compress(jsonBytes, dictCompress);
        System.out.println("JSON Zstd Dict: " + jsonZstdDict.length + " bytes");

        // 8. Msgpack Zstd Dict: MessagePack encoding, then dictionary-based Zstandard compression
        byte[] msgpackZstdDict = Zstd.compress(msgpackPlain, dictCompress);
        System.out.println("Msgpack Zstd Dict: " + msgpackZstdDict.length + " bytes");
    }
}
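
The benchmark also reports decode times, but the snippet above only shows the encode direction. A minimal decode sketch for the Msgpack Zstd combination (an illustration, assuming the same libraries) reverses the steps: decompress first, then unpack:

import com.github.luben.zstd.Zstd;
import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;
import java.io.IOException;

public class MsgpackZstdDecodeExample {
    public static void main(String[] args) throws IOException {
        String payload = "This is a sample string payload.";

        // Encode: MessagePack, then Zstandard
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        packer.packString(payload);
        byte[] packed = packer.toByteArray();
        byte[] compressed = Zstd.compress(packed);

        // Decode: Zstandard decompression needs the original size (track it,
        // or store it alongside the value), then MessagePack unpacking
        byte[] restored = Zstd.decompress(compressed, packed.length);
        MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(restored);
        System.out.println("decoded: " + unpacker.unpackString());
    }
}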

Insights

After conducting thorough experiments and analyses, several key findings have emerged regarding encoding and compression techniques:

Encoding:

  • MessagePack encoding and decoding outperform JSON counterparts in terms of speed.
  • The exception is decoding into a generic interface type rather than a concrete class, where MessagePack decoding can be slower than JSON decoding.
  • MessagePack achieves smaller encoded sizes than JSON due to its binary serialization format, optimizing data transfer efficiency.

Compression:

  • Snappy compresses and decompresses faster than Zstandard (Zstd), but achieves lower compression ratios.
  • With Zstandard, supplying a pre-built dictionary improves the compression ratio at the cost of some compression speed.
  • Small payloads of primitive data types may not need compression at all, since the overhead can outweigh the savings; a conditional-compression sketch follows this list.
  • Compression effectiveness depends heavily on the payload structure, so measure the achieved ratios on representative data before adopting a technique.
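
A simple way to act on the last two points is to compress conditionally and tag the stored value with a one-byte marker. This is a hedged sketch; the threshold and marker scheme are assumptions to be tuned against real payloads:

import com.github.luben.zstd.Zstd;

public class ConditionalCompression {
    // Hypothetical threshold: values smaller than this are stored as-is, because
    // compression framing overhead can outweigh any savings. Tune for your data.
    private static final int MIN_COMPRESS_SIZE = 256;
    private static final byte PLAIN = 0;
    private static final byte COMPRESSED = 1;

    static byte[] encode(byte[] value) {
        if (value.length < MIN_COMPRESS_SIZE) {
            return withMarker(PLAIN, value);
        }
        byte[] compressed = Zstd.compress(value);
        // Keep the plain form whenever compression does not actually shrink the value
        return compressed.length < value.length
                ? withMarker(COMPRESSED, compressed)
                : withMarker(PLAIN, value);
    }

    private static byte[] withMarker(byte marker, byte[] body) {
        byte[] out = new byte[body.length + 1];
        out[0] = marker;
        System.arraycopy(body, 0, out, 1, body.length);
        return out;
    }
}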

Conclusion

By leveraging these encoding and compression techniques, Java developers can effectively manage Redis memory usage and improve overall system performance. Experiment with different approaches to find the optimal combination for your specific use case.
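
As a final illustration of where the bytes end up, the compressed value can be written to Redis with a binary-safe client API. This sketch assumes Jedis (the article does not name a client) and its byte[] set/get overloads:

import com.github.luben.zstd.Zstd;
import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import redis.clients.jedis.Jedis;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class RedisCompressedCacheExample {
    public static void main(String[] args) throws IOException {
        byte[] key = "order:42".getBytes(StandardCharsets.UTF_8);

        // Encode with MessagePack, then compress with Zstandard
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        packer.packString("{\"orderId\":42,\"status\":\"PENDING\"}");
        byte[] value = Zstd.compress(packer.toByteArray());

        // Store and read back raw bytes so no extra string encoding is added
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set(key, value);
            byte[] cached = jedis.get(key);
            System.out.println("cached value size in Redis: " + cached.length + " bytes");
        }
    }
}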

Originally published at https://medium.com on March 13, 2024.
