MGR (MySQL Group Replication) Deployment and Testing

Published by dolphin on 2020-11-25

1. Environment

192.168.11.131 mgr1  primary node
192.168.11.132 mgr2  secondary node
192.168.11.133 mgr3  secondary node

2. Install MySQL on mgr1, mgr2, and mgr3

(1) Installation steps (omitted); MySQL version: 5.7.32
	According to the official documentation, the minimum required settings in the MySQL configuration file are:

		#---------- Basic settings: configuration required on every MySQL Server instance in the group -----------#
		server_id=1			# Note: must differ on each node
		gtid_mode=ON
		enforce_gtid_consistency=ON
		master_info_repository=TABLE
		relay_log_info_repository=TABLE
		binlog_checksum=NONE
		log_slave_updates=ON
		log_bin=binlog
		binlog_format=ROW

		#---------- Storage engines: disable non-InnoDB storage engines -----------#
		disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"

		#---------- Group Replication settings -----------#
		# Add the Group Replication plugin to the list of plugins the server loads at startup.
		plugin_load_add='group_replication.so'
		# Introduced in 5.7.6, this parameter selects the algorithm used to hash each transaction's write set.
		# With MGR these hashes are used for distributed conflict detection and handling; on 64-bit systems the official recommendation is XXHASH64.
		transaction_write_set_extraction=XXHASH64
		# The name of this group. It must be a valid UUID and distinguishes this group from other groups on the same network; SELECT UUID() can generate one.
		group_replication_group_name="a60fce0f-294c-11eb-8779-005056895fb9"
		# Whether Group Replication starts automatically when the server starts.
		group_replication_start_on_boot=off
		# The local address of this member, as a host:port string.
		group_replication_local_address="192.168.11.131:33061"			# Note: differs on each node
		# Host names and ports of the group members (the seed list).
		# Note: this port is used only for internal communication between members; applications must not connect to it.
		group_replication_group_seeds="192.168.11.131:33061,192.168.11.132:33061,192.168.11.133:33061"
		# Specifies which server bootstraps the group (i.e. the server on which this variable is set to ON).
		# It is recommended to keep this variable set to OFF in the configuration file of every server that may be part of the group.
		group_replication_bootstrap_group=off

    	        Notes:
	         (a) server_id must differ on each of the three nodes, and so must group_replication_local_address (each node's own IP and port);
	         (b) if necessary, change the host part of the root account to % before starting Group Replication (it cannot be changed afterwards); a sketch follows below.
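
	A minimal sketch of note (b), assuming root currently exists as root@'localhost'; SQL_LOG_BIN is disabled so this local account change stays out of the GTID history:

	    mysql> SET SQL_LOG_BIN=0;
	    mysql> RENAME USER 'root'@'localhost' TO 'root'@'%';
	    mysql> SET SQL_LOG_BIN=1;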

3. Create the replication user on mgr1

mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER repluser@'%' IDENTIFIED BY 'replpass';
mysql> GRANT REPLICATION SLAVE ON *.* TO repluser@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;

4. Configure MGR on mgr1 (recovery channel credentials)

mysql> CHANGE MASTER TO MASTER_USER='repluser', MASTER_PASSWORD='replpass' FOR CHANNEL 'group_replication_recovery';

5. Install the Group Replication plugin on mgr1

(1) There are two ways to do this:
	(a) load the plugin from the configuration file; 
	(b) install the plugin manually.

(2) Configuration-file approach (the one used in this article):	
    plugin_load_add='group_replication.so'

(3) Manual installation:
    mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';

(4) Check that the plugin is installed
    mysql> SHOW PLUGINS;
    | group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
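
    The same check can also be narrowed down via information_schema instead of scanning the full SHOW PLUGINS output (an equivalent query):

    mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_TYPE, PLUGIN_LIBRARY
           FROM information_schema.plugins WHERE PLUGIN_NAME = 'group_replication';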

6. Bootstrap the group on mgr1 (first start of the group)

Note: by default the group runs in single-primary mode. The node that performs the bootstrap becomes the primary and is read-write; nodes that join later (without bootstrapping) are secondaries and are read-only (super_read_only=on/read_only=on is set automatically).
        Bootstrapping simply means running SET GLOBAL group_replication_bootstrap_group=ON; (it can be thought of as designating the primary node).

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

Check that the group has been created: the group exists with one member, whose state is ONLINE

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

7. Create a test database and table on mgr1 and insert some data, to verify that a node joining the group later synchronizes this data

mysql> CREATE DATABASE test;
mysql> USE test;
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL);
mysql> INSERT INTO t1 VALUES (1, 'Luis');

mysql> select * from test.t1;
+----+------+
| c1 | c2   |
+----+------+
|  1 | Luis |
+----+------+

8. Add mgr2 to the group

(1) Create the replication user
    mysql> SET SQL_LOG_BIN=0;
    mysql> CREATE USER repluser@'%' IDENTIFIED BY 'replpass';
    mysql> GRANT REPLICATION SLAVE ON *.* TO repluser@'%';
    mysql> FLUSH PRIVILEGES;
    mysql> SET SQL_LOG_BIN=1;   
(2) Configure MGR
	mysql> CHANGE MASTER TO MASTER_USER='repluser', MASTER_PASSWORD='replpass' FOR CHANNEL 'group_replication_recovery';
(3) Start Group Replication: since the group was already bootstrapped when mgr1 was configured, starting Group Replication is enough to add mgr2 to the group.
	mysql> START GROUP_REPLICATION;
(4) Check performance_schema.replication_group_members for the group members and their state
    mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
    | group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
(5) Since mgr2's state is ONLINE, it has caught up with mgr1's transactions; check table test.t1.
    mysql> select * from test.t1;
    +----+------+
    | c1 | c2   |
    +----+------+
    |  1 | Luis |
    +----+------+
	The transactions have been synchronized automatically.

(6) Note: two errors encountered when starting Group Replication, and how they were resolved
    Error 1: [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. 
            Local transactions: 10a9692d-2d4c-11eb-b5f8-005056891ca6:1-2 > Group transactions: 8b4f47fe-2d32-11eb-a8cf-0050568907ba:1-2, a60fce0f-294c-11eb-8779-005056895fb9:1-4'
    Cause: although this MySQL instance was freshly installed, the binary log was already enabled when the initial password was changed, so that change was recorded in the binlog and has to be cleared.
    Fix: run RESET MASTER to clear the binary log, then run START GROUP_REPLICATION again
        mysql> RESET MASTER;
        mysql> START GROUP_REPLICATION;
    
    Error 2: 2020-11-23T14:42:31.109221+08:00 46 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'
          2020-11-23T14:42:31.109267+08:00 46 [ERROR] Plugin group_replication reported: 'For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.'

    Cause: the username or password in the CHANGE MASTER statement was mistyped when configuring MGR.
    Fix:
        mysql> STOP GROUP_REPLICATION;
        mysql> CHANGE MASTER TO MASTER_USER='repluser', MASTER_PASSWORD='replpass' FOR CHANNEL 'group_replication_recovery';
        mysql> START GROUP_REPLICATION;
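
    As the second error message suggests, the recovery channel can also be inspected directly before retrying; a diagnostic query along these lines shows the last connection error for the channel:

        mysql> SELECT CHANNEL_NAME, SERVICE_STATE, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE
               FROM performance_schema.replication_connection_status
               WHERE CHANNEL_NAME = 'group_replication_recovery';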

9. Add mgr3 to the group: same steps as for mgr2

(1) Create the replication user
    mysql> RESET MASTER;
    mysql> SET SQL_LOG_BIN=0;
    mysql> CREATE USER repluser@'%' IDENTIFIED BY 'replpass';
    mysql> GRANT REPLICATION SLAVE ON *.* TO repluser@'%';
    mysql> FLUSH PRIVILEGES;
    mysql> SET SQL_LOG_BIN=1;   
(2) Configure MGR
	mysql> CHANGE MASTER TO MASTER_USER='repluser', MASTER_PASSWORD='replpass' FOR CHANNEL 'group_replication_recovery';
(3) Start Group Replication: since the group was already bootstrapped when mgr1 was configured, starting Group Replication is enough to add mgr3 to the group.
	mysql> START GROUP_REPLICATION;
(4) Check performance_schema.replication_group_members for the group members and their state
	mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
    | group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
    | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
(5) Since mgr3's state is ONLINE, it has caught up with the group's transactions; check table test.t1.
    mysql> select * from test.t1;
    +----+------+
    | c1 | c2   |
    +----+------+
    |  1 | Luis |
    +----+------+
    The transactions have been synchronized automatically.

10. In single-primary mode, how do you find out which server is the primary?

mysql> SHOW STATUS LIKE 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 8b4f47fe-2d32-11eb-a8cf-0050568907ba |
+----------------------------------+--------------------------------------+

Combining this with SELECT * FROM performance_schema.replication_group_members; shows that the primary server's host is mgr1.

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
| group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
| group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
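
The two steps can also be combined into a single query (a sketch that assumes the 5.7 default show_compatibility_56=OFF, so that performance_schema.global_status is available):

mysql> SELECT MEMBER_HOST, MEMBER_PORT
       FROM performance_schema.replication_group_members
       WHERE MEMBER_ID = (SELECT VARIABLE_VALUE FROM performance_schema.global_status
                          WHERE VARIABLE_NAME = 'group_replication_primary_member');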

11. Settings for the number of attempts and the interval when a server joins the group

mysql> show global variables like 'group_replication_recovery_retry_count';
+----------------------------------------+-------+
| Variable_name                          | Value |
+----------------------------------------+-------+
| group_replication_recovery_retry_count | 10    |
+----------------------------------------+-------+
1 row in set (0.10 sec)

The default number of retries is 10 and can be adjusted via the group_replication_recovery_retry_count parameter.
For example: mysql> SET GLOBAL group_replication_recovery_retry_count=15;

mysql> show global variables like 'group_replication_recovery_reconnect_interval';
+-----------------------------------------------+-------+
| Variable_name                                 | Value |
+-----------------------------------------------+-------+
| group_replication_recovery_reconnect_interval | 60    |
+-----------------------------------------------+-------+
1 row in set (0.14 sec)

The default interval is 60 seconds and can be adjusted via group_replication_recovery_reconnect_interval, as shown below.
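
For example (the value 30 is only an illustration):
    mysql> SET GLOBAL group_replication_recovery_reconnect_interval=30;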

12. Switching from single-primary mode to multi-primary mode

(1) Stop Group Replication (run on all nodes):
    mysql> stop group_replication;

(2) Set group_replication_single_primary_mode to OFF (ON means single-primary mode, OFF means multi-primary mode)
    mysql> set global group_replication_single_primary_mode=OFF;

(3) Enable strict consistency checks for multi-primary updates on all nodes
    mysql> set global group_replication_enforce_update_everywhere_checks=ON;

(4) Bootstrap the group on any one node (mgr2 is used here)
    mysql> SET GLOBAL group_replication_bootstrap_group=ON;
    mysql> START GROUP_REPLICATION;
    mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

(5) Run on the remaining nodes
    mysql> START GROUP_REPLICATION;

(6) Check the group members and their state
    mysql> select * from performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
    | group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
    | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    3 rows in set (0.00 sec)

(7) Check the read_only and super_read_only variables on all nodes: both are OFF (every node is read-write)
    mysql> show global variables like '%read_only%';
    +-----------------------+-------+
    | Variable_name         | Value |
    +-----------------------+-------+
    | innodb_read_only      | OFF   |
    | read_only             | OFF   |
    | super_read_only       | OFF   |
    | transaction_read_only | OFF   |
    | tx_read_only          | OFF   |
    +-----------------------+-------+
    5 rows in set (0.04 sec)

    Testing also confirms that every node in the group can both read and write.

(8) In multi-primary mode, the group_replication_primary_member status variable is empty.
    mysql> SHOW STATUS LIKE 'group_replication_primary_member';
    +----------------------------------+-------+
    | Variable_name                    | Value |
    +----------------------------------+-------+
    | group_replication_primary_member |       |
    +----------------------------------+-------+
    1 row in set (0.04 sec)

13. Switching from multi-primary mode back to single-primary mode

(1) Run on all nodes
    mysql> stop group_replication;
    mysql> set global group_replication_enforce_update_everywhere_checks=OFF;
    mysql> set global group_replication_single_primary_mode=ON;

(2) Run on the primary node (mgr1)
    mysql> SET GLOBAL group_replication_bootstrap_group=ON;
    mysql> START GROUP_REPLICATION;
    mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

(3) Run on the secondary nodes (mgr2 and mgr3)
    mysql> START GROUP_REPLICATION;

14. Adding a new node (mgr4) in multi-primary mode

(1) Configuration file of the new node:
    server_id=4
    group_replication_local_address="192.168.11.134:33061"
    group_replication_group_seeds="192.168.11.131:33061,192.168.11.132:33061,192.168.11.133:33061,192.168.11.134:33061"

(2) Run on the other three nodes (mgr1, mgr2, mgr3). (This is lost after a restart; to make it permanent, also change the configuration file.)
    mysql> set global group_replication_group_seeds='192.168.11.131:33061,192.168.11.132:33061,192.168.11.133:33061,192.168.11.134:33061';

(3) Create the replication user
    mysql> reset master;
    mysql> SET SQL_LOG_BIN=0;
    mysql> CREATE USER repluser@'%' IDENTIFIED BY 'replpass';
    mysql> GRANT REPLICATION SLAVE ON *.* TO repluser@'%';
    mysql> FLUSH PRIVILEGES;
    mysql> SET SQL_LOG_BIN=1;  

(4) Configure MGR
    mysql> set global group_replication_single_primary_mode=OFF;
    mysql> set global group_replication_enforce_update_everywhere_checks=ON;

    mysql> CHANGE MASTER TO MASTER_USER='repluser', MASTER_PASSWORD='replpass' FOR CHANNEL 'group_replication_recovery';

(5) Start Group Replication: since the group was already bootstrapped when mgr1 was configured, starting Group Replication is enough to add mgr4 to the group.
    mysql> START GROUP_REPLICATION;

(6) Check performance_schema.replication_group_members for the group members and their state
    mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
    | group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
    | group_replication_applier | 8e5d0881-2e2e-11eb-a6ac-005056895fb9 | mgr4        |        3306 | ONLINE       |
    | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    4 rows in set (0.01 sec)

(7) Notes:
    (a) Compared with joining a single-primary group, the only difference is the two extra settings: group_replication_single_primary_mode=OFF and group_replication_enforce_update_everywhere_checks=ON;
    (b) to persist the information about the newly joined member, group_replication_group_seeds must also be changed in the configuration files (see the sketch below).
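
    For example, note (b) amounts to updating group_replication_group_seeds in each member's my.cnf so that the expanded list survives a restart (a sketch using the addresses from this test):

    # my.cnf on every existing member (mgr1, mgr2, mgr3) and on mgr4
    group_replication_group_seeds="192.168.11.131:33061,192.168.11.132:33061,192.168.11.133:33061,192.168.11.134:33061"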

15. Failover in multi-primary mode

(1) In multi-primary mode, when one node fails, writers can simply connect to any other node, because every node is read-write.

(2) Manually kill the mysqld and mysqld_safe processes on mgr1; the node is evicted from the group.
    (a) Error log:      
        2020-11-24T17:12:10.632391+08:00 0 [Warning] Plugin group_replication reported: 'Member with address mgr1:3306 has become unreachable.'
        2020-11-24T17:12:11.620997+08:00 0 [Warning] Plugin group_replication reported: 'Members removed from the group: mgr1:3306'
        2020-11-24T17:12:11.621301+08:00 0 [Note] Plugin group_replication reported: 'Group membership changed to mgr2:3306, mgr4:3306, mgr3:3306 on view 16062011996623015:13.'
    (b) Check performance_schema.replication_group_members: mgr1 is no longer a group member
        mysql>  select * from performance_schema.replication_group_members;
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
        | group_replication_applier | 8e5d0881-2e2e-11eb-a6ac-005056895fb9 | mgr4        |        3306 | ONLINE       |
        | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        3 rows in set (0.01 sec)

(3) Manually insert some data, then restart mgr1 and add it back to the group:
    mysql> set global group_replication_single_primary_mode=OFF;
    mysql> set global group_replication_enforce_update_everywhere_checks=ON;
    mysql> START GROUP_REPLICATION;

    mgr1 has rejoined the group:
        mysql>  select * from performance_schema.replication_group_members;
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
        | group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
        | group_replication_applier | 8e5d0881-2e2e-11eb-a6ac-005056895fb9 | mgr4        |        3306 | ONLINE       |
        | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        4 rows in set (0.00 sec)
        
        Error log:
        2020-11-24T17:16:53.584056+08:00 0 [Note] Plugin group_replication reported: 'Members joined the group: mgr1:3306'
        2020-11-24T17:16:53.587879+08:00 0 [Note] Plugin group_replication reported: 'Group membership changed to mgr2:3306, mgr1:3306, mgr4:3306, mgr3:3306 on view 16062011996623015:14.'
        2020-11-24T17:16:55.266815+08:00 0 [Note] Plugin group_replication reported: 'The member with address mgr1:3306 was declared online within the replication group'

    The newly inserted data is also present on mgr1.

16. Failover in single-primary mode

(1) When the primary member crashes or becomes unreachable for some other reason, the automatic election mechanism selects a new primary.

(2) What are the rules of the automatic election?
    First, the group_replication_member_weight values of the remaining nodes are compared; if all nodes run the same MySQL version, the node with the highest weight becomes the new primary.
    If the weights are equal (the default is 50 on every node), the nodes' server_uuid values are sorted lexicographically and the first one becomes the new primary.
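
    If one particular secondary should be preferred as the new primary, its weight can be raised ahead of time (a sketch; 80 is an arbitrary value above the default of 50, and the variable is available from MySQL 5.7.20):
        mysql> SET GLOBAL group_replication_member_weight=80;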

(3) Manually kill the mysqld and mysqld_safe processes of the primary (mgr1) and check the error log: the node is evicted from the group and a new primary (mgr2) has already been elected.
    2020-11-25T09:20:36.787529+08:00 0 [Warning] Plugin group_replication reported: 'Member with address mgr1:3306 has become unreachable.'
    2020-11-25T09:20:37.795746+08:00 0 [Warning] Plugin group_replication reported: 'Members removed from the group: mgr1:3306'
    2020-11-25T09:20:37.796055+08:00 0 [Note] Plugin group_replication reported: 'Primary server with address mgr1:3306 left the group. Electing new Primary.'
    2020-11-25T09:20:37.796412+08:00 0 [Note] Plugin group_replication reported: 'A new primary with address mgr2:3306 was elected, enabling conflict detection until the new primary applies all relay logs.'
    2020-11-25T09:20:37.796604+08:00 102 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address mgr2:3306.'
    2020-11-25T09:20:37.796696+08:00 0 [Note] Plugin group_replication reported: 'Group membership changed to mgr2:3306, mgr4:3306, mgr3:3306 on view 16062105035172940:5.'

(4) Check performance_schema.replication_group_members for the current group members and their state
    mysql> select * from performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
    | group_replication_applier | 8e5d0881-2e2e-11eb-a6ac-005056895fb9 | mgr4        |        3306 | ONLINE       |
    | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    3 rows in set (0.01 sec)

(5) Check the value of group_replication_primary_member to confirm again that the primary is mgr2.
    mysql> show global status like '%primary%';
    +----------------------------------+--------------------------------------+
    | Variable_name                    | Value                                |
    +----------------------------------+--------------------------------------+
    | group_replication_primary_member | 10a9692d-2d4c-11eb-b5f8-005056891ca6 |
    +----------------------------------+--------------------------------------+
    1 row in set (0.01 sec)

(6) Manually insert some data, then start mgr1 again and start Group Replication
    (a) Start MySQL (omitted)

    (b) Start Group Replication
        mysql> START GROUP_REPLICATION;

    (c) Check the group members and their state (mgr1 has rejoined the group as a read-only node)
        mysql> select * from performance_schema.replication_group_members;
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        | group_replication_applier | 10a9692d-2d4c-11eb-b5f8-005056891ca6 | mgr2        |        3306 | ONLINE       |
        | group_replication_applier | 8b4f47fe-2d32-11eb-a8cf-0050568907ba | mgr1        |        3306 | ONLINE       |
        | group_replication_applier | 8e5d0881-2e2e-11eb-a6ac-005056895fb9 | mgr4        |        3306 | ONLINE       |
        | group_replication_applier | a67ba921-2d4e-11eb-8c11-005056892789 | mgr3        |        3306 | ONLINE       |
        +---------------------------+--------------------------------------+-------------+-------------+--------------+
        4 rows in set (0.01 sec)

        mysql> show global status like '%primary%';
        +----------------------------------+--------------------------------------+
        | Variable_name                    | Value                                |
        +----------------------------------+--------------------------------------+
        | group_replication_primary_member | 10a9692d-2d4c-11eb-b5f8-005056891ca6 |
        +----------------------------------+--------------------------------------+
        1 row in set (0.01 sec)

        mysql> show global variables like '%read_only';
        +-----------------------+-------+
        | Variable_name         | Value |
        +-----------------------+-------+
        | innodb_read_only      | OFF   |
        | read_only             | ON    |
        | super_read_only       | ON    |
        | transaction_read_only | OFF   |
        | tx_read_only          | OFF   |
        +-----------------------+-------+
        5 rows in set (0.01 sec)

17. Group Replication: specifying an IP whitelist

(1) If no whitelist is specified explicitly, the automatically allowed list contains the following addresses (all private network ranges, which is fairly broad):
    10/8 prefix       (10.0.0.0 - 10.255.255.255) - Class A
    172.16/12 prefix  (172.16.0.0 - 172.31.255.255) - Class B
    192.168/16 prefix (192.168.0.0 - 192.168.255.255) - Class C
    127.0.0.1 - localhost for IPv4

(2) To specify the whitelist explicitly instead of using the default allowed list:
    mysql> STOP GROUP_REPLICATION;
    mysql> SET GLOBAL group_replication_ip_whitelist="192.0.2.21/24,198.51.100.44,203.0.113.0/24,example.org,www.example.com/24";
    mysql> START GROUP_REPLICATION;

    Notes: 
        (1) The list must contain the addresses of all MGR members (group_replication_local_address).
        (2) To avoid servers being unable to reconnect to the group after a member leaves, it is best (though not mandatory) to configure the same whitelist on all members.
        (3) Prefer IP addresses in the whitelist; if host names are needed, resolve them through the hosts file first.

18. Requirements of Group Replication:

(1) Infrastructure requirements:
    (a) InnoDB storage engine
        Data must be stored in the InnoDB storage engine, otherwise errors occur. The disabled_storage_engines parameter can be used to block other engines.
        disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
    (b) Primary key
        Every replicated table must have a primary key, or a NOT NULL UNIQUE key equivalent to one (see the sketch after this list).
    (c) IPv4
        Group Replication only supports IPv4 networks.
    (d) Network performance
        Group Replication instances should be deployed on a network with good connectivity between them; network latency and bandwidth affect the performance and stability of the group.
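
    A small illustration of requirement (1)(b); the table names are made up for this example. A table without a primary key can still be created, but DML against it is rejected by Group Replication:

    mysql> CREATE TABLE t_pk   (id INT PRIMARY KEY, val TEXT) ENGINE=InnoDB;          -- OK: primary key
    mysql> CREATE TABLE t_uniq (code INT NOT NULL UNIQUE, val TEXT) ENGINE=InnoDB;    -- OK: NOT NULL UNIQUE key
    mysql> CREATE TABLE t_none (val TEXT) ENGINE=InnoDB;                              -- CREATE itself succeeds...
    mysql> INSERT INTO t_none VALUES ('x');                                           -- ...but the write fails: no primary key or equivalent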

(2) Server instance configuration requirements:
    (a) A unique server_id;
        server_id=1
    (b) Binary logging enabled;
        log_bin=/data/mysql/log/binlog/mysql-bin
        log_bin_index=/data/mysql/log/binlog/binlog.index
    (c) Replica updates written to the binary log;
        log_slave_updates=1   
    (d) Binary log format ROW;
        binlog_format=row
    (e) Binary log checksums disabled
        binlog_checksum=NONE
    (f) Global transaction identifiers (GTIDs) enabled
        gtid_mode=on  
        enforce_gtid_consistency=1
    (g) Replication metadata stored in tables
        master_info_repository=TABLE  
        relay_log_info_repository=TABLE
    (h) Transaction write-set extraction algorithm
        transaction_write_set_extraction=XXHASH64
    (i) lower_case_table_names set to the same value on all nodes (table names stored in lowercase)
        lower_case_table_names=1
    (j*) Optionally, parallel replication
        slave-parallel-type=LOGICAL_CLOCK
        slave-parallel-workers=N
        slave_preserve_commit_order=1
        slave_pending_jobs_size_max=134217728   # maximum memory for queued, not-yet-applied events in multi-threaded replication; can be sized with reference to max_allowed_packet (128M in this setup).

19. Limitations of Group Replication

(1) Gap locks
    The certification process for concurrent transactions does not take gap locks into account.
    In multi-primary mode the recommended transaction isolation level is READ COMMITTED.
(2) Table locks and named locks
    The certification process does not take table locks or named locks into account.
(3) Binary log checksums (binlog_checksum=NONE)
    Because of design limitations around checksums, Group Replication cannot use them; binlog_checksum must be set to NONE.
(4) SERIALIZABLE isolation level
    The SERIALIZABLE isolation level is not supported in multi-primary mode.
(5) Concurrent DDL and DML
    In multi-primary mode, concurrent DDL and DML statements against the same object on different servers are not supported.
(6) Cascading foreign key constraints
    Multi-primary groups (all members configured with group_replication_single_primary_mode=OFF) do not support tables with multi-level foreign key dependencies, in particular tables with CASCADING foreign key constraints,
    because this can lead to undetected conflicts and inconsistent data between the group members.
    It is therefore recommended to set group_replication_enforce_update_everywhere_checks=ON in multi-primary mode to avoid undetected conflicts.
(7) MySQL Enterprise Audit and MySQL Enterprise Firewall (MySQL Enterprise Edition)
    Before 5.7.21, MySQL Enterprise Audit and MySQL Enterprise Firewall used MyISAM tables in the mysql schema, and Group Replication does not support MyISAM tables.
(8) Deadlocks in multi-primary mode
    In multi-primary mode, SELECT ... FOR UPDATE statements can lead to deadlocks.
(9) Replication filters
    Group Replication does not support replication filters, because filtering transactions could prevent the group from agreeing on a consistent state.
(10) Group size limit
    A group can contain at most 9 members.
(11) Transaction size limit
    If a single transaction is so large that its messages cannot be replicated between group members over the network within 5 seconds, the member may be suspected of having failed and be expelled from the group.
    Large transactions can also slow the system down due to memory allocation issues. To avoid these problems, use the following mitigations:
    (a) Keep transactions small;
    (b) use the group_replication_transaction_size_limit variable to cap the transaction size.
        In 5.7 its default value is 0, i.e. no limit; in 8.0 the default is 150000000 bytes (about 143 MB).
        Transactions exceeding the limit are rolled back and are not handed to the Group Communication System (GCS) for distribution.
    (c) Use the group_replication_compression_threshold variable to control message compression; the default is 1000000 bytes, and messages larger than this are compressed.
        Compression is applied when the GCS receives a message that is allowed by group_replication_transaction_size_limit but exceeds group_replication_compression_threshold.
        If group_replication_compression_threshold is set to zero, compression is disabled. (When network bandwidth is the bottleneck, message compression can increase throughput at the group communication level by as much as 30-40%.)

        If message compression is not used and no maximum transaction size is specified, the upper limit on the message size an applier thread on a group member can handle is that member's slave_max_allowed_packet system variable,
        whose maximum and default value is 1073741824 bytes (1 GB); when a receiving member tries to handle a message over this limit, it fails.
        
        (But max_allowed_packet also limits the size of a transaction; if a transaction exceeds max_allowed_packet but not slave_max_allowed_packet, will it succeed? (Presumably not, but this was not tested.))
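
    The two variables mentioned in (b) and (c) can be inspected and adjusted at runtime; a small sketch (the 2 MB threshold is only an example value):

    mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication_transaction_size_limit';
    mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication_compression_threshold';
    mysql> SET GLOBAL group_replication_compression_threshold=2097152;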
