Client Buffers: Check This Before Taking Redis to Production!

Vipul Divyanshu
5 min read · Feb 25, 2019


Redis is widely used for a variety of use cases requiring efficient, fast data access. With the rise of modules, Redis has gone from a plain key-value store to a platform for neural network storage, graph search, and much more.

While all these modules, data types, and commands let the database serve application requests without additional processing at the application level, misconfiguration (or rather, sticking with the out-of-the-box configuration) can and does lead to operational challenges and performance issues.

Client Buffers

One of the configurations most commonly overlooked by first-timers running heavy-load applications is client buffers. Redis is a fast in-memory database: all data is managed and served directly from RAM. This allows Redis to deliver unparalleled performance, serving hundreds of thousands of requests at sub-millisecond latencies. RAM is by far the fastest means of storage technology offers today; to get a sense of latency numbers, have a look at the following:

Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns 14x L1 cache
Mutex lock/unlock 25 ns
Main memory reference 100 ns 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy 3,000 ns 3 us
Send 1K bytes over 1 Gbps network 10,000 ns 10 us
Read 4K randomly from SSD* 150,000 ns 150 us ~1GB/sec SSD
Read 1 MB sequentially from memory 250,000 ns 250 us
Round trip within same datacenter 500,000 ns 500 us
Read 1 MB sequentially from SSD* 1,000,000 ns 1,000 us 1 ms ~1GB/sec SSD, 4X memory
Disk seek 10,000,000 ns 10,000 us 10 ms 20x datacenter roundtrip
Read 1 MB sequentially from disk 20,000,000 ns 20,000 us 20 ms 80x memory, 20X SSD
Send packet CA->Netherlands->CA 150,000,000 ns 150,000 us 150 ms

Notes
-----
1 ns = 10^-9 seconds
1 us = 10^-6 seconds = 1,000 ns
1 ms = 10^-3 seconds = 1,000 us = 1,000,000 ns

Credit
------
By Jeff Dean: http://research.google.com/people/jeff/
Originally by Peter Norvig: http://norvig.com/21-days.html#answers

Contributions
-------------
'Humanized' comparison: https://gist.github.com/hellerbarde/2843375
Visual comparison chart: http://i.imgur.com/k0t1e.png

In practice, Redis is used as a remote server, usually reached over a network. The round trip for a client to receive and acknowledge data takes far longer than the RAM read itself, so without some mechanism Redis would be stuck serving each request for much longer than the read actually took. Being single-threaded, Redis would never have earned its reputation for unparalleled performance if it were not for client buffers: the server writes each reply into a per-client output buffer and moves on to the next request instead of waiting on the network.

Redis implements 3 types of client output buffers for different classes of clients. A quick look at the ADVANCED CONFIG section of the default redis.conf file shows:

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

The same settings can be inspected at runtime:

> config get client-output-buffer-limit
1) "client-output-buffer-limit"
2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"

This has the format

client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
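The flat string returned by CONFIG GET follows that same four-field-per-class format, so it can be split into per-class settings. `parse_buffer_limits` below is a hypothetical helper, a minimal sketch of that parsing:

```python
def parse_buffer_limits(value: str) -> dict:
    """Split the flat CONFIG GET string into per-class limit settings."""
    parts = value.split()
    limits = {}
    # Each class contributes four fields: name, hard, soft, soft seconds.
    for i in range(0, len(parts), 4):
        cls, hard, soft, secs = parts[i:i + 4]
        limits[cls] = {
            "hard": int(hard),
            "soft": int(soft),
            "soft_seconds": int(secs),
        }
    return limits

limits = parse_buffer_limits(
    "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
)
print(limits["slave"]["hard"])  # 268435456, i.e. 256MB
```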

A client is disconnected immediately once the hard limit is reached, or if the soft limit is reached and stays breached continuously for the specified number of seconds. If your Redis client library doesn't implement auto-reconnection, you will need to make sure that your application servers, services, and any slave servers connected to Redis don't breach these limits. For instance, if the hard limit is 32 megabytes and the soft limit is 16 megabytes / 10 seconds, a client is disconnected immediately if its output buffer reaches 32 megabytes, but it is also disconnected if the buffer reaches 16 megabytes and stays above that limit continuously for 10 seconds.
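The disconnect rule can be sketched as a small state machine per client (an illustrative model, not Redis source code; the class name and structure are my own):

```python
HARD_LIMIT = 32 * 1024 * 1024   # 32 MB
SOFT_LIMIT = 16 * 1024 * 1024   # 16 MB
SOFT_SECONDS = 10

class ClientBufferTracker:
    """Models when Redis would disconnect a client over its output buffer."""

    def __init__(self, hard, soft, soft_seconds):
        self.hard = hard
        self.soft = soft
        self.soft_seconds = soft_seconds
        self.soft_since = None  # time the soft limit was first breached

    def should_disconnect(self, buffer_bytes, now):
        # Hard limit (0 = disabled): disconnect immediately.
        if self.hard and buffer_bytes >= self.hard:
            return True
        # Soft limit: disconnect only if breached continuously long enough.
        if self.soft and buffer_bytes >= self.soft:
            if self.soft_since is None:
                self.soft_since = now
            elif now - self.soft_since >= self.soft_seconds:
                return True
        else:
            self.soft_since = None  # dropped below the limit: reset the clock
        return False

tracker = ClientBufferTracker(HARD_LIMIT, SOFT_LIMIT, SOFT_SECONDS)
print(tracker.should_disconnect(33 * 1024 * 1024, now=0))  # True: over hard limit
```

Note how dipping below the soft limit resets the timer: only a *continuous* breach triggers the disconnect.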

By default, normal clients (those executing regular commands) are not limited, because they don't receive data unasked (in a push fashion) but only in reply to a request; only asynchronous clients may create a scenario where data is produced faster than it can be read.

Any long-running, payload-heavy pub/sub application will require changing this buffer limit, especially if a coroutine in an event loop handles extra logic or performs IO after receiving a pub/sub message before it is ready to receive the next one. The client buffer then fills up rapidly, leading to possible disconnection. An immediate fix is to raise the pubsub output buffer limits, for example setting the hard limit to 256MB and the soft limit to 64MB:

config set client-output-buffer-limit "pubsub 268435456 67108864 0"
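Raising limits buys headroom, but the root cause is a slow consumer. One common mitigation pattern, sketched here in pure Python without a live Redis connection (in a real app the read loop would iterate over `pubsub.listen()` from a client library such as redis-py), is to drain messages immediately and hand the heavy work to a background worker:

```python
import queue
import threading

work = queue.Queue()
results = []

def worker():
    """Does the heavy per-message processing off the read loop."""
    while True:
        msg = work.get()
        if msg is None:        # sentinel: shut down
            break
        results.append(msg.upper())  # stand-in for heavy logic / IO
        work.task_done()

t = threading.Thread(target=worker)
t.start()

# Stand-in for `for msg in pubsub.listen():` -- the key point is that
# the read loop only enqueues and never blocks, so the server-side
# output buffer for this client stays small.
for msg in ["tick", "trade", "quote"]:
    work.put(msg)

work.join()       # wait for processing to finish
work.put(None)    # stop the worker
t.join()
print(results)
```

The design choice here is to trade a bounded amount of client-side memory (the local queue) for keeping the server-side output buffer drained.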

A similar change can be made for slave replication, where a buffer overrun has more severe repercussions: a slave disconnect causes synchronisation to start over from the beginning. With the defaults, a slave is disconnected once the 256MB hard limit is reached, or once the 64MB soft limit is held for a continuous 60 seconds. In many cases, especially with a high write load and insufficient bandwidth to the slave server, the replication process will never finish. This can lead to an infinite loop in which the master Redis is constantly forking and snapshotting the entire dataset to disk, using up to triple the normal amount of memory together with a high rate of I/O operations, while the slave is never able to catch up and fully synchronize with the master.

You can change this directly in the config file and restart Redis, or at runtime with:

config set client-output-buffer-limit "slave 536870912 536870912 0"
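The raw byte values in these commands are easy to get wrong; a small hypothetical helper for the conversion:

```python
def to_bytes(size: str) -> int:
    """Convert a human-readable size like '512mb' to a byte count."""
    units = {"kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}
    size = size.strip().lower()
    for suffix, mult in units.items():
        if size.endswith(suffix):
            return int(size[:-len(suffix)]) * mult
    return int(size)  # already a plain byte count

print(to_bytes("512mb"))  # 536870912, the value used in the command above
```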

As with many reconfigurations, it is important to understand that:

  1. Before increasing the size of the replication buffers you must make sure you have enough memory on your machine.
  2. The Redis memory usage calculation does not take the replication buffer size into account.
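Point 2 means the budgeting has to be done by hand. A back-of-envelope check (the function and its 80% safety factor are illustrative assumptions, not a Redis rule) might look like:

```python
def has_headroom(used_bytes, total_ram_bytes, new_hard_limit_bytes,
                 safety_factor=0.8):
    """Rough check that a bigger replication buffer fits in physical RAM.

    Redis's own memory accounting excludes the replication buffer, so
    budget the new hard limit explicitly, leaving a margin (safety_factor)
    for forks, fragmentation, and other overhead.
    """
    return used_bytes + new_hard_limit_bytes <= total_ram_bytes * safety_factor

GB = 1024 ** 3
print(has_headroom(4 * GB, 16 * GB, 512 * 1024 ** 2))  # True: plenty of room
```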

More optimizations can be found by reading the default config file and adjusting other parameters to match your requirements.

Sources and references :

https://redislabs.com/community/tech-blog/

https://redis.io/modules


Vipul Divyanshu

Co-founder & CTO at Streak, building a platform to liberate technology in finance with @streak_world and @streaktech