What's new in Redis 7?
Redis released version 7.0 on 2022-04-27, and ElastiCache for Redis added support for it on 2022-11-08. This release ships several important features, such as "Redis Functions", "ACL improvements — ACLv2", "Sharded Pub/Sub", and "AOF improvement — Support Multi-Part AOF".
[+] Redis 7 release notes
https://raw.githubusercontent.com/redis/redis/7.0/00-RELEASENOTES
[+] Amazon ElastiCache adds support for Redis 7
https://aws.amazon.com/tw/about-aws/whats-new/2022/11/amazon-elasticache-redis-7/
#1 Redis Functions
- Redis Functions is a new server-side scripting feature, comparable to Lua scripting. Functions are stored on the Redis server alongside the data, like ordinary keys: they are persisted, replicated, and available again after a node restart.
- Redis functions are written in Lua 5.1.
- A Redis function cannot call other functions (the same limitation as Lua scripts).
- Execution is atomic and blocks writes, and there is no rollback, so a long-running function is a big overhead (the same as Lua scripts).
- The default execution timeout is 5 seconds (the same as Lua scripts).
redis.conf
# The default is 5 seconds. It is possible to set it to 0 or a negative value
# to disable this mechanism (uninterrupted execution). Note that in the past
# this config had a different name, which is now an alias, so both of these do
# the same:
# lua-time-limit 5000
# busy-reply-threshold 5000 ==> The configuration parameter affecting max execution time is called busy-reply-threshold.
---
$ cat redis-6.2.6/redis.conf | grep lua
lua-time-limit 5000
$ cat redis-7.0.4/redis.conf | grep lua
# lua-time-limit 5000
# busy-reply-threshold 5000
- Redis Functions:
ElastiCache for Redis 7 includes Redis Functions, enabling developers to execute LUA scripts with application logic stored on the server. With Redis Functions, ElastiCache for Redis stores the functions alongside the data thus making the scripts just as durable as the data in Redis, and does not require re-sending the scripts to the server with every connection. ElastiCache for Redis 7 will automatically manage reloading your functions in case of node failures or replacements, and addition of shards when scaling out.
- Function Life Cycle
A function needs to be created and named in Redis before it can be used. To do this, a library containing it is loaded with the FUNCTION LOAD command; the function code is passed to the specified engine, which compiles and stores it. After the function is created, it can be invoked using the FCALL command, which executes the named function. Created functions are also propagated to replicas and the AOF, and are saved as part of the RDB file.
- Functions can be exported/backed up:
$ redis-cli -c -p 6381 --functions-rdb functions.rdb
sending REPLCONF capa eof
sending REPLCONF rdb-only 1
sending REPLCONF rdb-filter-only functions
SYNC sent to master, writing bytes of bulk transfer until EOF marker to 'functions.rdb'
Transfer finished with success after 266 bytes
[+] Redis functions | Redis:
https://redis.io/docs/manual/programmability/functions-intro/
Function feature test notes
## Loading a FUNCTION locally
[+] FUNCTION LOAD:
https://redis.io/commands/function-load/
Usage: FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)"
mylib
---
127.0.0.1:6381> FUNCTION LOAD "#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)"
"mylib"
127.0.0.1:6381> FCALL knockknock 0
"Who's there?"
---
127.0.0.1:6381> FUNCTION LIST
1) 1) "library_name"
2) "mylib"
3) "engine"
4) "LUA"
5) "functions"
6) 1) 1) "name"
2) "knockknock"
3) "description"
4) (nil)
5) "flags"
6) (empty array)
[+] FUNCTION LIST:
https://redis.io/commands/function-list/
---
127.0.0.1:6382> FUNCTION LIST <--- Not found on the other nodes; FUNCTION LOAD only installs the library on the node that received it.
(empty array)
---
### Using FUNCTION on a cluster-mode-enabled Redis cluster
$ cat mylib.lua
#!lua name=mylib

local function my_hset(keys, args)
local hash = keys[1]
local time = redis.call('TIME')[1]
return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
end

redis.register_function('my_hset', my_hset)

$ redis-cli --cluster-only-masters --cluster call 127.0.0.1:6381 FUNCTION LOAD "$(cat mylib.lua)"
>>> Calling FUNCTION LOAD #!lua name=mylib
local function my_hset(keys, args)
local hash = keys[1]
local time = redis.call('TIME')[1]
return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
end
redis.register_function('my_hset', my_hset)
127.0.0.1:6381: mylib
127.0.0.1:6382: mylib
127.0.0.1:6383: mylib
## Using FCALL to run my_hset and write data
[+] FCALL:
https://redis.io/commands/fcall/
---
127.0.0.1:6382> FCALL my_hset 1 myhash myfield "some value" another_field "another value"
(integer) 3
---
## Reading the data
127.0.0.1:6382> HGETALL myhash
1) "_last_modified_"
2) "1663057394"
3) "myfield"
4) "some value"
5) "another_field"
6) "another value"
## Reading the data from another node
127.0.0.1:6384> HGETALL myhash
-> Redirected to slot [9295] located at 127.0.0.1:6382 <--- Redirected to the slot that owns the data
1) "_last_modified_"
2) "1663057394"
3) "myfield"
4) "some value"
5) "another_field"
6) "another value"
### Using the FUNCTION
127.0.0.1:6382> FCALL my_hset 1 myhash2 myfield "aaaae" another_field "bbb"
-> Redirected to slot [13619] located at 127.0.0.1:6383
(integer) 3
127.0.0.1:6383> HGETALL myhash2
1) "_last_modified_"
2) "1663057584"
3) "myfield"
4) "aaaae"
5) "another_field"
6) "bbb"
## Deleting a FUNCTION (removes it only on that node)
[+] FUNCTION DELETE:
https://redis.io/commands/function-delete/
Usage: FUNCTION DELETE NAME
---
127.0.0.1:6381> FUNCTION DELETE mylib
OK
127.0.0.1:6381> FUNCTION LIST
(empty array)
---
## Flushing all FUNCTIONs on a node (other nodes are unaffected)
[+] FUNCTION FLUSH:
https://redis.io/commands/function-flush/
---
127.0.0.1:6381> FUNCTION FLUSH
OK
---
## The call fails because the FUNCTION no longer exists on that node
127.0.0.1:6382> FCALL my_hset 1 myhash5 myfield "555" another_field "555"
-> Redirected to slot [1492] located at 127.0.0.1:6381 <--- The my_hset FUNCTION was deleted on this node.
(error) ERR Function not found
127.0.0.1:6383> FCALL my_hset 1 myhash3 myfield "ccc" another_field "cccc"
-> Redirected to slot [9490] located at 127.0.0.1:6382 <---- Nodes where it was not deleted still work.
(integer) 3
## DUMP the FUNCTION payload from another node
[+] FUNCTION DUMP | Redis:
https://redis.io/commands/function-dump/
---
127.0.0.1:6383> FUNCTION DUMP
"\xf5\xc3@\xc2@\xea\x1f#!lua name=mylib\n\nlocal function\x16 my_hset(keys, args)\n \x80$\x06hash = @\x1a\x02[1]\xe0\x00\x16\x03time \x16\x05redis. L\bl('TIME')\x80$\x05return \x06\xe0\x01\x1e\x05HSET',`O\x11, '_last_modified_ \x18@Q\b, unpack(`\x83\x06)\nend\n\n\x80D\bregister_\xc0\xb5\x01('\xa0\xb6 ;\x80\t\x01t)\n\x00G\n?\x81|\xd0\xf9\xad"
---
## RESTORE it on the node where the FUNCTION was deleted
[+] FUNCTION RESTORE:
https://redis.io/commands/function-restore/
---
127.0.0.1:6381> FUNCTION RESTORE "\xf5\xc3@\xc2@\xea\x1f#!lua name=mylib\n\nlocal function\x16 my_hset(keys, args)\n \x80$\x06hash = @\x1a\x02[1]\xe0\x00\x16\x03time \x16\x05redis. L\bl('TIME')\x80$\x05return \x06\xe0\x01\x1e\x05HSET',`O\x11, '_last_modified_ \x18@Q\b, unpack(`\x83\x06)\nend\n\n\x80D\bregister_\xc0\xb5\x01('\xa0\xb6 ;\x80\t\x01t)\n\x00G\n?\x81|\xd0\xf9\xad"
127.0.0.1:6381> FCALL my_hset 1 myhash5 myfield "555" another_field "555" <--- Now the my_hset FUNCTION runs on this node again.
(integer) 3
---
### Lua scripts ###
[+] SCRIPT LOAD | Redis:
https://redis.io/commands/script-load/
[+] Scripting with Lua | Redis:
https://redis.io/docs/manual/programmability/eval-intro/
## A script loaded on the shard01 master is only executable on that node
127.0.0.1:6381> SCRIPT LOAD "return 'Immabe a cached script'"
"c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f"
127.0.0.1:6381> EVALSHA c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f 0
"Immabe a cached script"
127.0.0.1:6381> SCRIPT EXISTS c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f
1) (integer) 1
## The Lua script cannot be found on the shard01 replica
127.0.0.1:6382> EVALSHA c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f 0
(error) NOSCRIPT No matching script. Please use EVAL.
127.0.0.1:6382> SCRIPT EXISTS c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f
1) (integer) 0
#2 ACL improvements — ACLv2
- Allows much finer-grained permissions on keys and commands (key-based permissions and selectors).
- ACL: Fine-grained key-based permissions and allow users to support multiple sets of command rules with selectors.
- https://github.com/redis/redis/pull/9974
- https://redis.io/topics/acl#key-permissions
- https://redis.io/topics/acl#selectors
- ACL improvements
ElastiCache for Redis 7 includes support for the next version of Redis Access Control Lists (ACLs). ElastiCache for Redis 6 introduced ACLs, enabling customers to limit the commands or group of commands that a specific user could execute on a set of keys. With ElastiCache for Redis 7, clients can now specify permissions on specific keys or keyspaces in Redis using selectors, and specify multiple sets of permissions on keys or keyspaces for the same user.
Multiple selectors
A selector is a set of allowed commands + first args + a set of key patterns that match against Redis commands. Each user can now have 1 or more selectors, and as long as 1 selector matches against the user, the command will be allowed. We are introducing the concept of the “root permissions” which is the selector that is applied to users when they are created. This selector is mutable in order to be maximally backwards compatible.
Selectors
Starting with Redis 7.0, Redis supports adding multiple sets of rules that are evaluated independently of each other. These secondary sets of permissions are called selectors and added by wrapping a set of rules within parentheses. In order to execute a command, either the root permissions (rules defined outside of parenthesis) or any of the selectors (rules defined inside parenthesis) must match the given command. Internally, the root permissions are checked first followed by selectors in the order they were added.
For example, consider a user with the ACL rules +GET ~key1 (+SET ~key2). This user is able to execute GET key1 and SET key2 hello, but not GET key2 or SET key1 world.
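The evaluation order above (root permissions first, then each selector in order) can be sketched with a tiny simulator. This is a hypothetical helper for illustration only, not the real ACL engine: it handles just +CMD and ~pattern rules, with each (...) group treated as an independent selector.

```python
import re
from fnmatch import fnmatchcase

def parse_rules(rule_str):
    """Collect allowed commands (+CMD) and key patterns (~pattern)."""
    cmds, keys = set(), []
    for tok in rule_str.split():
        if tok.startswith('+'):
            cmds.add(tok[1:].lower())
        elif tok.startswith('~'):
            keys.append(tok[1:])
    return cmds, keys

def allowed(acl, command, key):
    """True if the root permissions or any selector permit `command` on `key`."""
    selectors = re.findall(r'\(([^)]*)\)', acl)   # rules inside parentheses
    root = re.sub(r'\([^)]*\)', ' ', acl)         # rules outside parentheses
    for rules in [root] + selectors:              # root first, then selectors in order
        cmds, patterns = parse_rules(rules)
        if command.lower() in cmds and any(fnmatchcase(key, p) for p in patterns):
            return True
    return False

acl = "+GET ~key1 (+SET ~key2)"
print(allowed(acl, "GET", "key1"))   # True
print(allowed(acl, "SET", "key2"))   # True  (matched by the selector)
print(allowed(acl, "GET", "key2"))   # False
print(allowed(acl, "SET", "key1"))   # False
```

This reproduces the four outcomes of the example above: a rule set only grants a command when both the command and the key pattern match within the same rule group.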
Key based permissions
We introduce sub-permissions on keys: read (%R), write (%W), and read+write (%RW). The goal here is to make it easy to restrict specific types of operations on key values and to allow defining permissions that work with modules. This functionality also lets you reason about "future proofing operations". Key based permissions do not replace command permissions.
ACLv1 vs ACLv2 comparison
ACL
- access-string "off +get ~keys"
- access-string "~objects: ~items:* ~public:"
ACLv2
- access-string "off %R~foo1 %W~bar2 ~whatever resetchannels &channel1 -@all +get (%R~selector1 & -@all +get)"
ACL DRYRUN (added in Redis 7)
A new dryrun command was added that allows users to "test" whether or not a given user will be able to execute a command. I was using this mostly for testing, but Itamar suggested I commit it. It also makes some tests much easier to orchestrate.
Usage: ACL DRYRUN <username> <command> [<arg> …]
#3 Sharded Pub/Sub
- On a Cluster Mode Enabled (CME) cluster, channel traffic no longer needs to be propagated across every shard, which improves scalability.
- redis_pubsub_demo.rb · GitHub
- Sharded pubsub implementation by hpatro · Pull Request #8621 · redis/redis · GitHub
- What is Redis Pub/Sub improvement?
Redis has supported the publish-subscribe mechanism since 2.0. Users using the pubsub command family can establish a message subscription system. However, Redis pubsub has some problems in the cluster mode; the most significant of which is the broadcast storm brought by large-scale clusters.
Redis pubsub is published and subscribed by channel. However, channels are not treated as data processing in cluster mode. They do not participate in hash value calculation and cannot be distributed by slot. Therefore, Redis broadcasts messages to users in cluster mode.
The problem is clear. If a cluster has 100 nodes and users publish messages to a channel at node 1, the node needs to broadcast the messages to the other 99. If only a few of the other nodes subscribe to the channel, most of the messages are invalid, which causes waste to the network, CPU, and other resources.
Sharded-pubsub is used to solve this problem. It distributes channels by shards. A shard node is only responsible for processing its channels rather than broadcasting them, which simply avoids the waste of resources.
Key improvements in Sharded Pub/Sub
Messages travel over shard channels. A shard channel is assigned to a shard/slot the same way keys are, so a given shard channel is handled by one specific shard, and a Redis client only has to talk to the nodes of that shard to receive the content it subscribes to.
Shard channels are assigned to slots by the same algorithm used to assign keys to slots.
A shard message must be sent to a node that owns the slot the shard channel is hashed to.
- The benefit: when a publisher uses SPUBLISH to send to a shard channel, the message is handled only within that shard/slot, and subscribers must subscribe on a node of that shard. This avoids the publisher having to propagate the message to every shard/slot.
- Redis 7 adds three new commands for this: SSUBSCRIBE, SUNSUBSCRIBE, and SPUBLISH.
SSUBSCRIBE, SUNSUBSCRIBE and SPUBLISH are used to implement sharded Pub/Sub.
For the differences between regular Redis Pub/Sub and Sharded Pub/Sub, and detailed test notes, see the document below.
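The "same algorithm used to assign keys to slots" is CRC16 (the XMODEM variant) modulo 16384. A minimal sketch of how a shard channel maps to a slot (note: the real algorithm also honors {hash tags}, which this sketch omits; the channel name is just an example):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM, poly 0x1021), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def shard_slot(channel: str) -> int:
    """Slot a shard channel (or key) hashes to: CRC16(name) mod 16384."""
    # NOTE: Redis also extracts {hash tags} before hashing; omitted for brevity.
    return crc16(channel.encode()) % 16384

# Standard XMODEM check value: CRC16("123456789") == 0x31C3
print(hex(crc16(b"123456789")))   # 0x31c3
# Every SPUBLISH/SSUBSCRIBE for this channel is served by the shard
# that owns this slot, instead of being broadcast cluster-wide.
print(shard_slot("news.sports"))
```

This is why a subscriber must connect to a node of the owning shard: the channel name deterministically hashes to one slot, exactly like a key.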
#4 Snapshot improvements
- Reduces memory usage during copy-on-write, lowering the chance of running out of memory during a full sync or a backup.
- Significant reduction of copy-on-write memory overheads (#8974)
- https://github.com/redis/redis/issues/8974
- https://blog.devgenius.io/an-in-depth-understanding-of-redis-7-0s-shared-copy-buffer-9057f57d8493
There is some memory we can release in the forked child process:
Serialized key-values: the forked child never accesses serialized key-values, so we try to free them. We can only release memory in big chunks, and iterating all items/members/fields/entries of complex data types is time-consuming, so we iterate and free them only when the average size of an item/member/field/entry is larger than the OS page size.
Replication backlog: because the replication backlog is a circular buffer, it changes quickly when Redis has heavy write traffic, but the forked child does not need to access it.
Client buffers: if clients issue requests while the forked child exists, client buffers also change frequently. This memory includes the client query buffer, the output buffer, and the memory used by the client struct.
For how Redis snapshot backups are performed, see the document below.
AOF improvement — Support Multi-Part AOF
- The AOF is now stored as multiple files inside one directory.
- Implement Multi Part AOF mechanism to avoid AOFRW overheads: appenddirname
- https://github.com/redis/redis/pull/9788
The main issues with the original AOFRW mechanism are:
buffering of commands that are processed during rewrite (consuming a lot of RAM)
freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
- Timestamp annotations were added to the AOF, letting users restore data to a specific point in time.
- Add timestamp annotations in AOF for point-in-time recovery: aof-timestamp-enabled
- https://github.com/redis/redis/pull/9326
Enabled with the new aof-timestamp-enabled config option.
Timestamp annotation format is "#TS:${timestamp}\r\n"; "TS" is short for timestamp, and this compact form saves extra bytes in the AOF.
We can use timestamp annotation for some special functions.
know the executing time of commands
restore data to a specific point-in-time (by using redis-check-aof to truncate the file)
- redis.conf
aof-timestamp-enabled no
$ cat redis-7.0.4.conf | grep 'appenddirname\|aof-timestamp-enabled'
appenddirname "appendonlydir"
aof-timestamp-enabled no
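The "#TS:${timestamp}" annotations make point-in-time recovery a matter of cutting the file at the first annotation past the target time (which is what redis-check-aof does with its truncate option). A rough illustration of the idea — `truncate_to_timestamp` is a hypothetical helper operating on a simplified, line-based AOF, not the real RESP-encoded format:

```python
def truncate_to_timestamp(aof_lines, target_ts):
    """Keep commands written at or before target_ts.

    `aof_lines` is a simplified AOF: '#TS:<unix-time>' annotation lines
    interleaved with command lines. The real AOF stores commands in RESP,
    and redis-check-aof performs the actual truncation.
    """
    kept = []
    for line in aof_lines:
        if line.startswith("#TS:"):
            if int(line[4:]) > target_ts:
                break                 # everything after this point is too new
            continue                  # annotations themselves are not replayed
        kept.append(line)
    return kept

aof = [
    "#TS:1663057000", "SET k1 v1",
    "#TS:1663057100", "SET k2 v2",
    "#TS:1663057200", "SET k3 v3",
]
print(truncate_to_timestamp(aof, 1663057100))
# ['SET k1 v1', 'SET k2 v2']
```

Because annotations carry only a coarse per-second timestamp and are written lazily, recovery granularity is "up to this annotation", not per command.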
!!! Note that ElastiCache for Redis does not support AOF; the reasons are below.
Q: Why doesn't ElastiCache support AOF after engine version 2.8.22?
AOF is disabled by default. To enable AOF for a cluster running Redis, you must create a parameter group with the appendonly parameter set to yes. You then assign that parameter group to your cluster. You can also modify the appendfsync parameter to control how often Redis writes to the AOF file.
■ The main reason: when a node fails, ElastiCache simply replaces it, so AOF cannot prevent data loss anyway; keeping data in sync with multiple replicas (replication) is the better approach. In addition, restoring data from AOF takes longer than from an RDB file, and with the always fsync policy, disk-write stalls can make the Redis engine get stuck.
#5 Improved management of memory consumed
- Improved management of memory consumed by network buffers, and an option to drop clients when total memory exceeds a limit
- A new parameter, maxmemory-clients, caps the total memory that all clients together may use.
A mechanism for disconnecting clients when the sum of all connected clients is above a configured limit. This prevents eviction or OOM caused by accumulated memory used across all clients. It's a complementary mechanism to the client-output-buffer-limit mechanism, which takes into account not just a single client and not just output buffers, but rather all memory used by all clients.
maxmemory-clients max memory all clients are allowed to consume, above this threshold we disconnect clients.
This config can either be set to 0 (meaning no limit), a size in bytes (possibly with MB/GB suffix),
or as a percentage of maxmemory by using the % suffix (e.g. setting it to 10% would mean 10% of maxmemory).
maxmemory-clients 5%
$ cat redis-7.0.4/redis.conf | grep maxmemory-clients
---
Client eviction is configured using the maxmemory-clients setting as follows:
maxmemory-clients 1g
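The three accepted forms (0, a byte size with suffix, or a percentage of maxmemory) can be illustrated with a small parser. `effective_client_limit` is a hypothetical helper; the real parsing lives inside Redis's config code, and the unit convention below follows the redis.conf header (1k => 1000 bytes, 1kb => 1024 bytes, etc.):

```python
def effective_client_limit(value: str, maxmemory: int) -> int:
    """Resolve a maxmemory-clients value to a byte limit (0 means no limit).

    Follows redis.conf unit conventions: 1k => 1000 bytes, 1kb => 1024 bytes,
    1g => 10^9 bytes, 1gb => 2^30 bytes; '%' means a share of maxmemory.
    """
    v = value.strip().lower()
    if v.endswith('%'):
        return maxmemory * int(v[:-1]) // 100
    units = {'k': 1000, 'kb': 1024,
             'm': 1000**2, 'mb': 1024**2,
             'g': 1000**3, 'gb': 1024**3}
    for suffix in ('kb', 'mb', 'gb', 'k', 'm', 'g'):
        if v.endswith(suffix):
            return int(v[:-len(suffix)]) * units[suffix]
    return int(v)  # plain byte count; 0 disables the limit

maxmemory = 8 * 1024**3                          # assume an 8 GiB maxmemory
print(effective_client_limit("10%", maxmemory))  # 858993459
print(effective_client_limit("1gb", maxmemory))  # 1073741824
print(effective_client_limit("0", maxmemory))    # 0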
Older related parameters:
- client-output-buffer-limit normal 0 0 0
- client-output-buffer-limit replica 256mb 64mb 60
- client-output-buffer-limit pubsub 32mb 8mb 60
- client-output-buffer-limit-slave-hard-limit (default depends on node type)
- client-output-buffer-limit-slave-soft-limit (default depends on node type)
- client-query-buffer-limit: default 1GB (added in 4.0.10)
#6 RDB version up to 10
- Backward compatible with older RDB files, but only in that direction: an older Redis version cannot load an RDB file produced by a newer Redis.
$ /home/ec2-user/redis-3.2.10/src/redis-server
$ redis-cli set key 111
OK
$ redis-cli get key
"111"
$ redis-cli info server | grep redis_version
redis_version:3.2.10
$ redis-cli shutdown
$ ls
dump.rdb
---
// The RDB file from the older version loads fine.
$ /home/ec2-user/redis-7.0.4/src/redis-server
...
23315:M 12 Sep 2022 08:02:03.568 * DB loaded from disk: 0.000 seconds
23315:M 12 Sep 2022 08:02:03.568 * Ready to accept connections
$ redis-cli get key
"111"
$ redis-cli info server | grep redis_version
redis_version:7.0.4
---
// Redis 7's RDB format version is 10.
$ /home/ec2-user/redis-6.2.6/src/redis-server
...
23509:M 12 Sep 2022 08:45:13.216 # Can't handle RDB format version 10
23509:M 12 Sep 2022 08:45:13.216 # Fatal error loading the DB: Invalid argument. Exiting.
#7 Protected-mode default value from no to yes
- In redis.conf, protected-mode now defaults to yes.
- Set protected-mode to no only when you want clients to be able to connect to the Redis server from other hosts without authentication.
redis.conf
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured.
protected-mode yes
#8 Support for hostnames, instead of IP addresses only on cluster mode
- The cluster can advertise hostnames instead of only IP addresses.
- https://github.com/redis/redis/pull/9530
- https://github.com/redis/redis/pull/10436
- https://github.com/redis/redis/issues/1043
- New config "cluster-announce-hostname", a hostname that an externally facing client can use to connect to this node. Using the new mechanism we send a hostname extension to all nodes, so that eventually all nodes in the cluster will know our hostname.
- Nodes do not talk to each other with the hostname.
- This hostname will be added as the 4th field to the CLUSTER SLOTS output which is the primary way clients will discover it.
- Another new config “cluster-preferred-endpoint-type” option to configure what type of endpoint is shown by default.
redis.conf
### cluster-announce-hostname "" ###
# Clusters can configure their announced hostname using this config. This is a common use case for
# applications that need to use TLS Server Name Indication (SNI) or dealing with DNS based
# routing. By default this value is only shown as additional metadata in the CLUSTER SLOTS
# command, but can be changed using 'cluster-preferred-endpoint-type' config. This value is
# communicated along the clusterbus to all nodes, setting it to an empty string will remove
# the hostname and also propagate the removal.
# cluster-announce-hostname "" <--- new setting.
### cluster-preferred-endpoint-type ip | hostname | unknown-endpoint ###
# Clusters can advertise how clients should connect to them using either their IP address,
# a user defined hostname, or by declaring they have no endpoint. Which endpoint is
# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type
# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls how
# the endpoint returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.
# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'
# will be returned instead.
# cluster-preferred-endpoint-type ip <--- new setting.
$ redis-cli -h xxx CLUSTER SLOTS
1) 1) (integer) 0
2) (integer) 5460
3) 1) "127.0.0.1" <-----
2) (integer) 6381
3) "5b6fe58a4135e23a356a3954d025865c948e124d"
4) 1) "hostname" <-----
2) "redis-6381" <-----
4) 1) "127.0.0.1"
2) (integer) 6385
3) "d6d99aad702df5a85a7f45bda862a2f25b32cd7e"
4) 1) "hostname" <-----
2) "redis-6385" <-----
$ redis-cli -h xxx CLUSTER NODES <--- Unchanged; still shows IP addresses.
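Since the hostname arrives as a key/value metadata pair in the 4th element of each node entry in CLUSTER SLOTS, a client can pick it up by walking the reply. `node_endpoints` is a hypothetical helper, and the sample reply below mirrors the CLUSTER SLOTS output shown above:

```python
def node_endpoints(cluster_slots_reply):
    """Extract (ip, port, hostname) per node from a CLUSTER SLOTS-style reply.

    Each slot-range entry is [start, end, node, node, ...]; each node is
    [ip, port, node-id, [key, value, ...]] where the trailing metadata array
    may carry a 'hostname' pair (Redis >= 7 with cluster-announce-hostname).
    """
    endpoints = []
    for slot_range in cluster_slots_reply:
        for node in slot_range[2:]:
            ip, port = node[0], node[1]
            meta = node[3] if len(node) > 3 else []
            pairs = dict(zip(meta[::2], meta[1::2]))  # flat [key, value, ...] list
            endpoints.append((ip, port, pairs.get("hostname")))
    return endpoints

# Shaped like the CLUSTER SLOTS output shown above
reply = [
    [0, 5460,
     ["127.0.0.1", 6381, "5b6fe58a4135e23a356a3954d025865c948e124d",
      ["hostname", "redis-6381"]],
     ["127.0.0.1", 6385, "d6d99aad702df5a85a7f45bda862a2f25b32cd7e",
      ["hostname", "redis-6385"]]],
]
print(node_endpoints(reply))
# [('127.0.0.1', 6381, 'redis-6381'), ('127.0.0.1', 6385, 'redis-6385')]
```

Pre-7 servers simply omit the metadata array, so the helper falls back to `None` for the hostname, matching the "additional metadata" framing in the config comment.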
#9 Others new parameters
New configuration options
=========================
* CONFIG SET/GET can handle multiple configs in one call (#9748, #9914)
* Support glob pattern matching for config include files (#8980)
* appenddirname, folder where multi-part AOF files are stored (#9788)
* shutdown-timeout, default 10 seconds (#9872)
* maxmemory-clients, allows limiting the total memory usage by all clients (#8687)
* cluster-port, can control the bind port of cluster bus (#9389)
* bind-source-addr, configuration argument control IP of outgoing connections (#9142)
* busy-reply-threshold, alias for the old lua-time-limit (#9963)
* repl-diskless-sync-max-replicas, allows faster replication in some cases (#10092)
* latency-tracking, enabled by default, and latency-tracking-info-percentiles (#9462)
* cluster-announce-hostname and cluster-preferred-endpoint-type (#9530)
* cluster-allow-pubsubshard-when-down (#8621)
* cluster-link-sendbuf-limit (#9774)
* list-max-listpack-*, hash-max-listpack-*, zset-max-listpack-* as aliases for
the old ziplist configs (#8887, #9366, #9740)
>> cluster-port: lets you choose the port the cluster bus binds to.
Previously the cluster bus port was always port + 10000; it can now be set explicitly. The ElastiCache Redis cluster bus port is 1122.
$ cat redis-7.0.4/redis.conf | grep cluster-port
# cluster-port 0
>> shutdown-timeout: when SHUTDOWN is executed, the maximum time to wait for replicas to catch up on the replication offset, which improves consistency to some degree.
$ cat redis-7.0.4/redis.conf | grep shutdown-timeout
# Maximum time to wait for replicas when shutting down, in seconds.
# The 'shutdown-timeout' value is the grace period's duration in seconds. It is only applicable when the instance has replicas. To disable the feature, set the value to 0.
# shutdown-timeout 10
>> latency-tracking: whether to enable per-command latency tracking.
>> latency-tracking-info-percentiles: reports p50 (median), p99, and p99.9 latency for each command.
$ cat redis-7.0.4/redis.conf | grep latency-tracking
# latency-tracking yes
# latency-tracking-info-percentiles 50 99 99.9
>> cluster-link-sendbuf-limit
---
# Cluster link send buffer limit is the limit on the memory usage of an individual
# cluster bus link's send buffer in bytes. Cluster links would be freed if they exceed
# this limit. This is to primarily prevent send buffers from growing unbounded on links
# toward slow peers (E.g. PubSub messages being piled up).
# This limit is disabled by default. Enable this limit when 'mem_cluster_links' INFO field
# and/or 'send-buffer-allocated' entries in the 'CLUSTER LINKS' command output continuously increase.
# Minimum limit of 1gb is recommended so that cluster link buffer can fit in at least a single
# PubSub message by default. (client-query-buffer-limit default value is 1gb)
---
$ cat redis-7.0.4/redis.conf | grep cluster-link-sendbuf-limit
# cluster-link-sendbuf-limit 0
>> repl-diskless-sync
--- repl-diskless-sync-max-replicas 0
# When diskless replication is enabled with a delay, it is possible to let
# the replication start before the maximum delay is reached if the maximum
# number of replicas expected have connected. Default of 0 means that the
# maximum is not defined and Redis will wait the full delay.
--- repl-diskless-sync: default changed from no to yes
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to replica sockets, without touching the disk at all.
---
[ec2-user@ip-10-0-200-161 ~]$ cat redis-6.2.6/redis.conf | grep repl-diskless-sync
repl-diskless-sync no
repl-diskless-sync-delay 5
[ec2-user@ip-10-0-200-161 ~]$ cat redis-7.0.4/redis.conf | grep repl-diskless-sync
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-diskless-sync-max-replicas 0
#10 New redis-cli parameters
redis-cli
-2 Start session in RESP2 protocol mode.
--json Output in JSON format (default RESP3, use -2 if you want to use with RESP2).
--quoted-json Same as --json, but produce ASCII-safe quoted strings, not Unicode.
--functions-rdb <filename> Like --rdb but only get the functions (not the keys)
* Adapt redis-check-aof tool for Multi Part AOF (#10061)
* Enable redis-benchmark to use RESP3 protocol mode (#10335)