Redis cluster: recovery procedure after force-killing one node versus cleanly shutting one down
Redis cluster command-line reference. The following commands must be run against a node that is part of a cluster; they are cluster-specific.
Cluster
CLUSTER INFO                 Print information about the cluster.
CLUSTER NODES                List all nodes currently known to the cluster, along with their details.
Nodes
CLUSTER MEET <ip> <port>     Add the node at ip:port to the cluster, making it a member.
CLUSTER FORGET <node_id>     Remove the node identified by node_id from the cluster.
CLUSTER REPLICATE <node_id>  Make the current node a slave of the node identified by node_id.
CLUSTER SAVECONFIG           Save the node's cluster configuration file to disk.
Slots
CLUSTER ADDSLOTS <slot> [slot ...]   Assign one or more slots to the current node.
CLUSTER DELSLOTS <slot> [slot ...]   Remove the assignment of one or more slots from the current node.
CLUSTER FLUSHSLOTS                   Remove all slot assignments from the current node, leaving it with no slots.
CLUSTER SETSLOT <slot> NODE <node_id>       Assign slot to the node identified by node_id; if the slot is already assigned to another node, that node drops it first.
CLUSTER SETSLOT <slot> MIGRATING <node_id>  Mark slot on this node as migrating to the node identified by node_id.
CLUSTER SETSLOT <slot> IMPORTING <node_id>  Mark slot on this node as importing from the node identified by node_id.
CLUSTER SETSLOT <slot> STABLE               Clear any importing/migrating state from slot.
Keys
CLUSTER KEYSLOT <key>                Compute which slot the key hashes to.
CLUSTER COUNTKEYSINSLOT <slot>       Return the number of keys currently stored in slot.
CLUSTER GETKEYSINSLOT <slot> <count> Return up to count keys from slot.
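The slot that CLUSTER KEYSLOT reports is not arbitrary: per the Redis Cluster specification it is CRC-16/XMODEM of the key (or of the key's {hash tag}, if one is present) modulo 16384. A minimal Python sketch of that computation:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Slot for a key: CRC16 of the key (or of its {hash tag}) mod 16384."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # only a non-empty tag between the first { and the next } counts
            key = key[start + 1:end]
    return crc16(key) % 16384
```

The hash-tag rule is what lets related keys such as {user1000}.following and {user1000}.followers land in the same slot, so they can be used together in multi-key operations.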
Verify whether a force-killed node can rejoin the cluster automatically:
Check the redis cluster topology:
[root@192-13-168-77 ~]# redis-cli -h 192.13.168.77 -p 2002 -c -a "ysBhqkYHDifB" cluster nodes
06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004@14004 master - 0 1575439453000 3 connected 10923-16383
d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002@12002 myself,master - 0 1575439454000 1 connected 0-5460
532ef94c81188111827fef599ee73c0996a04e5e 192.13.168.77:7007@17007 slave 06031e33797ef0aa6427bddb1ff958f7af0f1a4a 0 1575439455000 6 connected
46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006@16006 slave ef76f232efb578249e8d0ec8fef8ec02b3524010 0 1575439455742 5 connected
ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003@13003 master - 0 1575439454743 2 connected 5461-10922
b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005@15005 slave d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 0 1575439454000 4 connected
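The master-to-slave pairs can also be read off this output programmatically; a small Python sketch (a hypothetical helper, using the documented CLUSTER NODES field layout: node id, address, flags, master id, ...):

```python
def parse_cluster_nodes(text: str) -> dict:
    """Map each master's address to the addresses of its replicas."""
    nodes = {}  # node_id -> (addr, flags, master_id)
    for line in text.strip().splitlines():
        fields = line.split()
        node_id, addr = fields[0], fields[1].split("@")[0]  # drop the cluster-bus port
        nodes[node_id] = (addr, fields[2], fields[3])
    pairs = {}
    for addr, flags, master_id in nodes.values():
        if "slave" in flags and master_id in nodes:
            pairs.setdefault(nodes[master_id][0], []).append(addr)
    return pairs

# two lines taken from the output above
sample = """\
06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004@14004 master - 0 1575439453000 3 connected 10923-16383
532ef94c81188111827fef599ee73c0996a04e5e 192.13.168.77:7007@17007 slave 06031e33797ef0aa6427bddb1ff958f7af0f1a4a 0 1575439455000 6 connected"""
print(parse_cluster_nodes(sample))  # {'192.13.168.77:4004': ['192.13.168.77:7007']}
```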
Check the redis cluster processes:
[root@192-13-168-77 ~]# ps -ef | grep redis
root 2039 1 0 14:00 ? 00:00:01 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:2002 [cluster]
root 2236 1 0 14:00 ? 00:00:01 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:3003 [cluster]
root 2335 1 0 14:00 ? 00:00:01 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:4004 [cluster]
root 2365 1 0 14:00 ? 00:00:01 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:5005 [cluster]
root 2391 1 0 14:00 ? 00:00:01 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:6006 [cluster]
root 2418 1 0 14:00 ? 00:00:01 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:7007 [cluster]
View the master->slave pairs via cluster slots:
[root@192-13-168-77 ~]# redis-cli -h 192.13.168.77 -p 2002 -c -a "ysBhqkYHDifB" cluster slots | xargs -n8 | awk '{print $3":"$4"->"$6":"$7}' | sort -nk2 -t ':' | uniq
192.13.168.77:2002->192.13.168.77:5005
192.13.168.77:3003->192.13.168.77:6006
192.13.168.77:7007->192.13.168.77:4004
Force-kill the 7007 process (pid 2418 in the ps output above):
[root@192-13-168-77 ~]# kill -9 2418
[root@192-13-168-77 ~]# ps -ef | grep redis
root 2039 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:2002 [cluster]
root 2236 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:3003 [cluster]
root 2335 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:4004 [cluster]
root 2365 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:5005 [cluster]
root 2391 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:6006 [cluster]
Check the log of 4004 (7007's replica after the earlier manual failover); 4004 is promoted to the new master:
[root@192-13-168-77 log]# cat redis_4004.log
23377:C 04 Dec 13:40:40.502 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
23377:C 04 Dec 13:40:40.502 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=23377, just started
23377:C 04 Dec 13:40:40.502 # Configuration loaded
23378:M 04 Dec 13:40:40.517 # Server initialized
23378:M 04 Dec 13:40:40.517 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
23378:M 04 Dec 13:40:40.517 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
23378:M 04 Dec 13:57:15.782 # User requested shutdown...
23378:M 04 Dec 13:57:15.783 # Redis is now ready to exit, bye bye...
2334:C 04 Dec 14:00:25.888 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2334:C 04 Dec 14:00:25.888 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=2334, just started
2334:C 04 Dec 14:00:25.889 # Configuration loaded
2335:M 04 Dec 14:00:25.896 # Server initialized
2335:M 04 Dec 14:00:25.896 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2335:M 04 Dec 14:00:25.896 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2335:M 04 Dec 14:01:24.003 # configEpoch set to 3 via CLUSTER SET-CONFIG-EPOCH
2335:M 04 Dec 14:01:24.180 # IP address for this node updated to 192.13.168.77
2335:M 04 Dec 14:01:29.005 # Cluster state changed: ok
2335:M 04 Dec 14:06:56.069 # Manual failover requested by slave 532ef94c81188111827fef599ee73c0996a04e5e.
2335:M 04 Dec 14:06:56.258 # Failover auth granted to 532ef94c81188111827fef599ee73c0996a04e5e for epoch 7
2335:M 04 Dec 14:06:56.263 # Connection with slave 192.13.168.77:7007 lost.
2335:M 04 Dec 14:06:56.266 # Configuration change detected. Reconfiguring myself as a replica of 532ef94c81188111827fef599ee73c0996a04e5e
2335:S 04 Dec 14:06:56.538 # Master replication ID changed to 880dfbbc29acda08aa0f997ec0d3d9238f987cca
2335:S 04 Dec 14:46:13.970 # Connection with master lost.
2335:S 04 Dec 14:46:14.427 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:15.427 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:16.430 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:17.433 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:18.436 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:19.438 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:20.441 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:21.443 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:22.445 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:23.448 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:24.449 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:25.451 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:26.453 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:27.455 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:28.457 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:29.459 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:29.960 # Start of election delayed for 658 milliseconds (rank #0, offset 3738).
2335:S 04 Dec 14:46:30.461 # Error condition on socket for SYNC: Connection refused
2335:S 04 Dec 14:46:30.662 # Starting a failover election for epoch 8.
2335:S 04 Dec 14:46:30.667 # Failover election won: I'm the new master.
2335:S 04 Dec 14:46:30.667 # configEpoch set to 8 after successful failover
2335:M 04 Dec 14:46:30.667 # Setting secondary replication ID to 880dfbbc29acda08aa0f997ec0d3d9238f987cca, valid up to offset: 3739. New replication ID is 8ae5eb556fbfd6c13ec33b1123c87de1fbe4db05
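The "Start of election delayed for 658 milliseconds (rank #0)" line reflects the election back-off described in the Redis Cluster specification: a fixed 500 ms, plus a random 0-500 ms jitter, plus 1000 ms per rank step (rank 0 is the replica with the most up-to-date replication offset, so it tends to win). A sketch of that formula:

```python
import random

def election_delay_ms(rank: int) -> int:
    """Delay before a replica starts its failover election, per the
    Redis Cluster spec: 500 ms fixed + random 0-500 ms jitter + 1000 ms per rank."""
    return 500 + random.randrange(500) + rank * 1000
```

For rank 0 this always lands between 500 and 1000 ms, which is exactly where the 658 ms in the log falls.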
Go to 7007's data directory and delete nodes-7007.conf:
[root@192-13-168-77 ~]# cd /u02/redis/7007/
[root@192-13-168-77 7007]# ls
conf data log pid
[root@192-13-168-77 7007]# cd data/
[root@192-13-168-77 data]# ls
nodes-7007.conf redis_7007_dump.rdb
[root@192-13-168-77 data]# rm -rf nodes-7007.conf
[root@192-13-168-77 data]# ps -ef | grep redis
root 2039 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:2002 [cluster]
root 2236 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:3003 [cluster]
root 2335 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:4004 [cluster]
root 2365 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:5005 [cluster]
root 2391 1 0 14:00 ? 00:00:02 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:6006 [cluster]
root 32397 30103 0 14:49 pts/1 00:00:00 grep redis
root 37840 1 0 Nov05 ? 00:25:35 /usr/sbin/glusterfs --volfile-server=10.66.5.10 --volfile-id=/volume-10-0-5-16-db01 /sharedisk/
Start the 7007 process:
[root@192-13-168-77 data]# /usr/local/redis-4.0.9/bin/redis-server /u02/redis/7007/conf/redis_7007.conf
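The contents of redis_7007.conf are not shown in the post; the cluster-relevant settings in such a file would typically look like the following (illustrative values, inferred from the directory layout and file names listed above):

```
port 7007
dir /u02/redis/7007/data
dbfilename redis_7007_dump.rdb
cluster-enabled yes
cluster-config-file nodes-7007.conf
cluster-node-timeout 15000
```

cluster-config-file names the nodes-7007.conf file deleted above. Redis maintains that file itself, so a node started without one comes up as a brand-new, empty node with a fresh node id.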
Because of the forced kill, 7007 cannot rejoin the cluster automatically; its old node id is still marked fail,noaddr:
[root@192-13-168-77 data]# redis-cli -h 192.13.168.77 -p 2002 -c -a "ysBhqkYHDifB" cluster nodes
06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004@14004 master - 0 1575442290000 8 connected 10923-16383
d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002@12002 myself,master - 0 1575442289000 1 connected 0-5460
532ef94c81188111827fef599ee73c0996a04e5e :0@0 master,fail,noaddr - 1575441974024 1575441972822 7 disconnected
46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006@16006 slave ef76f232efb578249e8d0ec8fef8ec02b3524010 0 1575442290483 5 connected
ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003@13003 master - 0 1575442291483 2 connected 5461-10922
b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005@15005 slave d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 0 1575442289481 4 connected
Run CLUSTER FORGET on every node so they all drop 7007's stale node id (532ef94c81188111827fef599ee73c0996a04e5e):
CLUSTER FORGET 532ef94c81188111827fef599ee73c0996a04e5e
[root@192-13-168-77 data]# redis-cli -h 192.13.168.77 -p 2002 -c -a "ysBhqkYHDifB"
192.13.168.77:2002> CLUSTER FORGET 532ef94c81188111827fef599ee73c0996a04e5e
OK
192.13.168.77:3003> CLUSTER FORGET 532ef94c81188111827fef599ee73c0996a04e5e
OK
192.13.168.77:5005> CLUSTER FORGET 532ef94c81188111827fef599ee73c0996a04e5e
OK
192.13.168.77:6006> CLUSTER FORGET 532ef94c81188111827fef599ee73c0996a04e5e
OK
Start the 7007 redis process again:
[root@192-13-168-77 data]# /usr/local/redis-4.0.9/bin/redis-server /u02/redis/7007/conf/redis_7007.conf
[root@192-13-168-77 data]# ps -ef | grep redis
root 2039 1 0 14:00 ? 00:00:03 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:2002 [cluster]
root 2236 1 0 14:00 ? 00:00:03 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:3003 [cluster]
root 2335 1 0 14:00 ? 00:00:03 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:4004 [cluster]
root 2365 1 0 14:00 ? 00:00:03 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:5005 [cluster]
root 2391 1 0 14:00 ? 00:00:03 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:6006 [cluster]
root 2931 1 0 15:07 ? 00:00:00 /usr/local/redis-4.0.9/bin/redis-server 0.0.0.0:7007 [cluster]
Add 7007 back to the cluster as a slave of 4004:
[root@192-13-168-77 data]# redis-trib.rb add-node --slave --master-id 06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:7007 192.13.168.77:2002
>>> Adding node 192.13.168.77:7007 to cluster 192.13.168.77:2002
>>> Performing Cluster Check (using node 192.13.168.77:2002)
M: d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004
slots:10923-16383 (5461 slots) master
0 additional replica(s)
S: 46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006
slots: (0 slots) slave
replicates ef76f232efb578249e8d0ec8fef8ec02b3524010
M: ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005
slots: (0 slots) slave
replicates d49ebf2a5f3605487ea4c8deee7e2aa2782667e6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.13.168.77:7007 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.13.168.77:4004.
[OK] New node added correctly.
Check the redis cluster state:
[root@192-13-168-77 data]# redis-trib.rb check 192.13.168.77:2002
>>> Performing Cluster Check (using node 192.13.168.77:2002)
M: d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006
slots: (0 slots) slave
replicates ef76f232efb578249e8d0ec8fef8ec02b3524010
M: ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1dba8274e36bde79a215a77d1f241ae6fc81c03e 192.13.168.77:7007
slots: (0 slots) slave
replicates 06031e33797ef0aa6427bddb1ff958f7af0f1a4a
S: b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005
slots: (0 slots) slave
replicates d49ebf2a5f3605487ea4c8deee7e2aa2782667e6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
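The "All 16384 slots covered" verdict can be reproduced by checking that the three masters' ranges tile slots 0-16383 exactly, with no gaps and no overlaps; a small sketch:

```python
def slots_covered(ranges) -> bool:
    """True iff the inclusive (start, end) slot ranges exactly cover 0..16383."""
    covered = []
    for start, end in ranges:
        covered.extend(range(start, end + 1))
    # length check catches overlaps; set check catches gaps
    return len(covered) == 16384 and set(covered) == set(range(16384))

# the three ranges reported by redis-trib.rb check
print(slots_covered([(0, 5460), (5461, 10922), (10923, 16383)]))  # True
```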
View the cluster state again:
[root@192-13-168-77 data]# redis-cli -h 192.13.168.77 -p 3003 -c -a "ysBhqkYHDifB" cluster nodes
b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005@15005 slave d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 0 1575452066959 4 connected
1dba8274e36bde79a215a77d1f241ae6fc81c03e 192.13.168.77:7007@17007 slave 06031e33797ef0aa6427bddb1ff958f7af0f1a4a 0 1575452066000 8 connected
46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006@16006 slave ef76f232efb578249e8d0ec8fef8ec02b3524010 0 1575452065958 5 connected
06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004@14004 master - 0 1575452067960 8 connected 10923-16383
ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003@13003 myself,master - 0 1575452065000 2 connected 5461-10922
d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002@12002 master - 0 1575452067000 1 connected 0-5460
Next, cleanly shut down a node (4004) and rejoin it to the cluster:
[root@192-13-168-77 data]# redis-cli -h 192.13.168.77 -p 4004 -c -a "ysBhqkYHDifB" shutdown
Check the log of 7007 (4004's replica); it holds a failover election and promotes itself:
[root@192-13-168-77 log]# cat redis_7007.log
4540:S 04 Dec 18:03:32.106 # Connection with master lost.
4540:S 04 Dec 18:03:32.921 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:33.924 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:34.925 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:35.928 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:36.930 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:37.932 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:38.934 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:39.935 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:40.936 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:41.938 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:42.940 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:43.942 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:44.943 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:45.945 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:46.948 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:47.550 # Start of election delayed for 554 milliseconds (rank #0, offset 17724).
4540:S 04 Dec 18:03:47.950 # Error condition on socket for SYNC: Connection refused
4540:S 04 Dec 18:03:48.150 # Starting a failover election for epoch 9.
4540:S 04 Dec 18:03:48.155 # Failover election won: I'm the new master.
4540:S 04 Dec 18:03:48.155 # configEpoch set to 9 after successful failover
4540:M 04 Dec 18:03:48.155 # Setting secondary replication ID to 8ae5eb556fbfd6c13ec33b1123c87de1fbe4db05, valid up to offset: 17725. New replication ID is e44a29dcbf3dce8e1d29c06bd310ba2ba3d0c41b
Check the cluster state:
redis-cli -h 192.13.168.77 -p 4004 -c -a "ysBhqkYHDifB" cluster nodes
Could not connect to Redis at 192.13.168.77:4004: Connection refused
[root@192-13-168-77 log]# redis-cli -h 192.13.168.77 -p 2002 -c -a "ysBhqkYHDifB" cluster nodes
06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004@14004 master,fail - 1575453812190 1575453808482 8 disconnected
d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002@12002 myself,master - 0 1575453939000 1 connected 0-5460
46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006@16006 slave ef76f232efb578249e8d0ec8fef8ec02b3524010 0 1575453940773 5 connected
ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003@13003 master - 0 1575453939000 2 connected 5461-10922
1dba8274e36bde79a215a77d1f241ae6fc81c03e 192.13.168.77:7007@17007 master - 0 1575453939770 9 connected 10923-16383
b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005@15005 slave d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 0 1575453940000 4 connected
Restart the 4004 process:
[root@192-13-168-77 log]# /usr/local/redis-4.0.9/bin/redis-server /u02/redis/4004/conf/redis_4004.conf
Check the cluster state again (4004 rejoins automatically and comes back as a slave of 7007):
[root@192-13-168-77 log]# redis-cli -h 192.13.168.77 -p 2002 -c -a "ysBhqkYHDifB" cluster nodes
06031e33797ef0aa6427bddb1ff958f7af0f1a4a 192.13.168.77:4004@14004 slave 1dba8274e36bde79a215a77d1f241ae6fc81c03e 0 1575454019000 9 connected
d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 192.13.168.77:2002@12002 myself,master - 0 1575454018000 1 connected 0-5460
46233f6d8b508be0cedafc5f07aca04210f654ea 192.13.168.77:6006@16006 slave ef76f232efb578249e8d0ec8fef8ec02b3524010 0 1575454019942 5 connected
ef76f232efb578249e8d0ec8fef8ec02b3524010 192.13.168.77:3003@13003 master - 0 1575454020944 2 connected 5461-10922
1dba8274e36bde79a215a77d1f241ae6fc81c03e 192.13.168.77:7007@17007 master - 0 1575454018000 9 connected 10923-16383
b11fc826c15cdee6e026a59ed98f31c9fa490aaa 192.13.168.77:5005@15005 slave d49ebf2a5f3605487ea4c8deee7e2aa2782667e6 0 1575454020000 4 connected
From the "ITPUB Blog"; link: http://blog.itpub.net/28939273/viewspace-2666937/. Please credit the source when reposting.