MySQL MGR Setup and Simulated Failover Drill

Published by 水逸冰 on 2020-07-26

MySQL MGR

Environment

No. IP Port
1 192.168.61.16 3310
2 192.168.61.17 3310
3 192.168.61.18 3310

Selected parameters from the MGR configuration file

#for GTID

gtid_mode = on
enforce_gtid_consistency = on
#for MTS
binlog_group_commit_sync_delay = 10000
binlog_group_commit_sync_no_delay_count = 100
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=4        # 4-8 is generally recommended; too many threads adds synchronization overhead between them
master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log_recovery=ON

#for semi sync
plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl-semi-sync-master-enabled = 1
rpl-semi-sync-slave-enabled = 1

#group replication config
binlog_checksum = NONE
transaction-write-set-extraction = XXHASH64
loose-group_replication_start_on_boot = OFF
loose-group_replication_bootstrap_group = OFF
loose-group_replication_group_name = 5e1f9814-c91d-11e7-8f9b-000c29ec1057
loose-group_replication_local_address = '192.168.61.18:33310'
loose-group_replication_group_seeds = '192.168.61.16:33310,192.168.61.17:33310,192.168.61.18:33310'
loose-group_replication_single_primary_mode=true
loose-group_replication_enforce_update_everywhere_checks=false

Meaning of selected parameters

Parameter Meaning
binlog_checksum Disables the binlog checksum to avoid checksum mismatches between different versions. Optional; not required.
transaction-write-set-extraction Group Replication requires every table to have a primary key. This parameter makes the server layer collect the primary keys of updated rows, hash them, and store them. All members of a group must use the same hash algorithm.
group_replication_start_on_boot  Do not start Group Replication automatically when the server starts.
group_replication_group_name  Names the group being created or joined 5e1f9814-c91d-11e7-8f9b-000c29ec1057. The value can be customized (e.g., generated via cat /proc/sys/kernel/random/uuid). GTIDs generated by any member use this UUID, and when a member joins, its configured group name is checked against the group it is joining.
group_replication_local_address Sets the member's local address.
group_replication_group_seeds Sets the seed member addresses. A new member must contact at least one existing member to join the group, so at least one address is required.
group_replication_single_primary_mode Single-primary mode.
group_replication_enforce_update_everywhere_checks Must be set to FALSE in single-primary mode. When the primary fails, the next primary is chosen automatically based on the server_uuid and group_replication_member_weight values: the member with the highest group_replication_member_weight becomes the new primary; when weights are tied, members are ordered by server_uuid in the data dictionary and the first in that order is chosen.


In addition, the loose- prefix on the group_replication variables tells the server to continue starting even if the replication plugin has not been loaded yet.

Starting single-primary mode

mysql> install plugin group_replication soname "group_replication.so";

Query OK, 0 rows affected (0.38 sec)
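Because the loose- prefix lets the server start even without the plugin, it is worth confirming the plugin is actually installed and active before proceeding; one way is to query information_schema.plugins:

```sql
-- Confirm the group_replication plugin is installed and ACTIVE:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.plugins
WHERE PLUGIN_NAME = 'group_replication';
```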

# When starting the first member, the following setting tells the group replication plugin that this is the group's first member and some initialization is needed.
mysql> set global group_replication_bootstrap_group=on;
Query OK, 0 rows affected (0.00 sec)

mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

Log:
2020-07-26T06:59:44.298053Z 2 [Warning] Plugin group_replication reported: 'Group Replication requires slave-preserve-commit-order to be set to ON when using more than 1 applier threads.'
slave_preserve_commit_order needs to be enabled:


mysql> set global slave_preserve_commit_order = 1;
Query OK, 0 rows affected (0.00 sec)
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.12 sec)
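Once the first member is online, it is good practice to turn the bootstrap flag back off so that a later restart of this node cannot bootstrap a second, independent group (this step is implied but not shown in the transcript above):

```sql
-- Prevent accidental re-bootstrap on restart:
SET GLOBAL group_replication_bootstrap_group = OFF;
```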

Add the other members. Note that every node must have the repl account.

mysql> install plugin group_replication soname "group_replication.so";

Query OK, 0 rows affected (0.45 sec)

mysql> change master to master_user='repl',master_password='oracle' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (5.98 sec)

Now query the member information:

mysql> SELECT * FROM performance_schema.replication_group_members;

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 68e12753-ce83-11ea-894d-000c297c40da | mdb02       |        3310 | ONLINE       |
| group_replication_applier | fff6c992-ce81-11ea-ae13-000c2970dcdf | mdb01       |        3310 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

Add the third node

mysql> install plugin group_replication soname "group_replication.so";

Query OK, 0 rows affected (0.14 sec)

mysql> set global group_replication_local_address = '192.168.61.18:33310';  
Query OK, 0 rows affected (0.00 sec)

mysql> set global group_replication_group_seeds = '192.168.61.16:33310,192.168.61.17:33310,192.168.61.18:33310';
Query OK, 0 rows affected (0.00 sec)

mysql> change master to master_user='repl',master_password='oracle' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql>  START GROUP_REPLICATION;
Query OK, 0 rows affected (3.33 sec)

While a node is joining, it shows the RECOVERING state; once recovery completes it becomes ONLINE.

mysql> SELECT * FROM performance_schema.replication_group_members;

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 68e12753-ce83-11ea-894d-000c297c40da | mdb02       |        3310 | ONLINE       |
| group_replication_applier | 7eb54729-ce83-11ea-8ac5-000c294cc2bd | mdb03       |        3310 | RECOVERING   |
| group_replication_applier | fff6c992-ce81-11ea-ae13-000c2970dcdf | mdb01       |        3310 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 68e12753-ce83-11ea-894d-000c297c40da | mdb02       |        3310 | ONLINE       |
| group_replication_applier | 7eb54729-ce83-11ea-8ac5-000c294cc2bd | mdb03       |        3310 | ONLINE       |
| group_replication_applier | fff6c992-ce81-11ea-ae13-000c2970dcdf | mdb01       |        3310 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
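If a member lingers in RECOVERING, note that the recovery channel is an ordinary replication channel and can be inspected for errors the usual way:

```sql
-- Inspect the distributed-recovery channel for errors or lag:
SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G
```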

MGR single-primary failover

Automatic failover rule:

When the primary fails, the next primary is chosen automatically from the secondaries based on the server_uuid and group_replication_member_weight values: the member with the highest group_replication_member_weight becomes the new primary; when weights are tied, members are ordered by server_uuid in the data dictionary and the first in that order is chosen.
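The ordering above can be sketched as a query. This is only an illustration of the rule; MGR applies it internally, and the candidates table here is a hypothetical stand-in for the set of surviving members:

```sql
-- Hypothetical illustration of the election order:
-- highest weight wins, lowest server_uuid breaks ties.
SELECT server_uuid
FROM candidates
ORDER BY member_weight DESC, server_uuid ASC
LIMIT 1;
```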

After the primary is killed, the replicas' logs record that the primary is unreachable, and the group then reconfigures itself.

2020-07-26T08:46:37.806614Z 0 [Warning] Plugin group_replication reported: 'Member with address mdb01:3310 has become unreachable.'

2020-07-26T08:46:38.678114Z 0 [Warning] Plugin group_replication reported: 'Members removed from the group: mdb01:3310'

At this point the replica mdb02 has been promoted to primary, and its super_read_only and read_only settings have been turned off:

mysql> show global variables like '%read_only%';

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | OFF   |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
| tx_read_only          | OFF   |
+-----------------------+-------+
5 rows in set (0.00 sec)
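Besides checking the read_only flags, on MySQL 5.7 you can ask the group directly which member is the primary:

```sql
-- UUID of the current primary in single-primary mode:
SELECT VARIABLE_VALUE AS primary_member_uuid
FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'group_replication_primary_member';
```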

This is because mdb02's UUID is '06d6223d-cf1c-11ea-8b6c-000c297c40da' while mdb03's is '083931e6-cf1c-11ea-842a-000c294cc2bd':

mysql> select '083931e6-cf1c-11ea-842a-000c294cc2bd'>'06d6223d-cf1c-11ea-8b6c-000c297c40da';

+-------------------------------------------------------------------------------+
| '083931e6-cf1c-11ea-842a-000c294cc2bd'>'06d6223d-cf1c-11ea-8b6c-000c297c40da' |
+-------------------------------------------------------------------------------+
|                                                                             1 |
+-------------------------------------------------------------------------------+
1 row in set (0.00 sec)

The old primary rejoins the group

mysql> change master to master_user='repl',master_password='oracle' for channel 'group_replication_recovery';

Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start group_replication;
Query OK, 0 rows affected (2.83 sec)

Next, shut down the current primary mdb02 cleanly. Comparing UUIDs, mdb03 would be promoted to primary, so this time we steer the election with the group_replication_member_weight parameter.

On mdb01:

mysql> set global group_replication_member_weight=60;

Query OK, 0 rows affected (0.00 sec)

Shut down mdb02 cleanly; the other nodes' logs show only:

2020-07-26T09:04:06.564346Z 0 [Warning] Plugin group_replication reported: 'Members removed from the group: mdb02:3310'

mdb01 is promoted to primary as intended:

mysql>  show global variables like '%read_only%';

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | OFF   |
| super_read_only       | OFF   |
| transaction_read_only | OFF   |
| tx_read_only          | OFF   |
+-----------------------+-------+
5 rows in set (0.00 sec)

Switching MGR from single-primary to multi-primary mode


# Stop group replication (run on all nodes):

stop group_replication;
set global group_replication_single_primary_mode = OFF;
set global group_replication_enforce_update_everywhere_checks = ON;

# Run on any one node
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;

# Run on the remaining nodes
START GROUP_REPLICATION;

Switching MGR from multi-primary back to single-primary mode


# Run on all nodes

stop group_replication;
set global group_replication_enforce_update_everywhere_checks = OFF;
set global group_replication_single_primary_mode = ON;

# Run on the primary node
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;

# Run on the secondary nodes
START GROUP_REPLICATION;
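After either switch, the resulting mode can be verified on each node:

```sql
-- Confirm the mode the group is now running in:
SHOW GLOBAL VARIABLES LIKE 'group_replication_single_primary_mode';
SHOW GLOBAL VARIABLES LIKE 'group_replication_enforce_update_everywhere_checks';
```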



From the ITPUB blog: http://blog.itpub.net/31480688/viewspace-2707032/. Please credit the source when republishing.
