Converting a MySQL master-slave replication architecture to MGR
Environment
IP | port | role | info |
---|---|---|---|
192.168.188.81 | 3316 | node1 | master |
192.168.188.82 | 3316 | node2 | slave1 |
192.168.188.83 | 3316 | node3 | slave2 |
- CentOS Linux release 7.6.1810 (Core)
- MySQL Ver 8.0.19 for linux-glibc2.12 on x86_64 (MySQL Community Server - GPL)
- MySQL Router Ver 8.0.20 for Linux on x86_64 (MySQL Community - GPL)
- MySQL Shell Ver 8.0.20 for Linux on x86_64 - for MySQL 8.0.20 (MySQL Community Server (GPL))
Software deployment
Deploy MySQL, MySQL Router, and MySQL Shell on all three nodes.
Build the replication environment and enable enhanced semi-synchronous replication
- Configuration on all nodes
root@localhost [(none)]>set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>create user rep@'%' identified by 'rep';
Query OK, 0 rows affected (0.02 sec)
root@localhost [(none)]>grant replication slave on *.* to rep@'%';
Query OK, 0 rows affected (0.02 sec)
root@localhost [(none)]>install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.02 sec)
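If you want to be sure both plugins actually registered before moving on, a quick check (my addition, not part of the original run) is:
select plugin_name, plugin_status from information_schema.plugins where plugin_name like 'rpl_semi%';
# expect rpl_semi_sync_master and rpl_semi_sync_slave, both ACTIVE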
- Configuration on the master node
root@localhost [(none)]>set global rpl_semi_sync_master_enabled=ON;
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>show global variables like 'rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | ON         |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
| rpl_semi_sync_slave_enabled               | OFF        |
| rpl_semi_sync_slave_trace_level           | 32         |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
root@localhost [(none)]>reset master;
Query OK, 0 rows affected (0.04 sec)
- Configuration on the slave nodes
root@localhost [(none)]>set global rpl_semi_sync_slave_enabled=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>change master to master_host='192.168.188.81',master_port=3316,master_user='rep',master_password='rep',master_auto_position=1,get_master_public_key=1;
Query OK, 0 rows affected, 2 warnings (0.04 sec)
root@localhost [(none)]>reset master;
Query OK, 0 rows affected (0.04 sec)
- Start replication on the slaves
root@localhost [(none)]>start slave;
Query OK, 0 rows affected (0.03 sec)
root@localhost [(none)]>show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.188.81
                  Master_User: rep
                  Master_Port: 3316
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 155
               Relay_Log_File: ms82-relay-bin.000002
                Relay_Log_Pos: 369
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
...
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
...
...
1 row in set (0.00 sec)
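On the slave side you can also confirm that the IO thread registered itself as a semi-sync slave (a quick sanity check, not from the original run):
show global status like 'Rpl_semi_sync_slave_status';
# ON means this slave connected to the master in semi-sync mode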
- Check the semi-sync status on the master
root@localhost [(none)]>show global status like 'Rpl_semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 2     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
| Rpl_semi_sync_slave_status                 | OFF   |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
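Rpl_semi_sync_master_yes_tx is still 0 only because nothing has been written yet; once the workload below starts, this counter of semi-sync-acknowledged transactions should climb steadily:
show global status like 'Rpl_semi_sync_master_yes_tx';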
Simulate a workload: generate transactions with a script
- Create a table
root@localhost [(none)]>create database kk;
Query OK, 1 row affected (0.01 sec)
root@localhost [(none)]>use kk
Database changed
root@localhost [kk]>create table k1 (id int primary key auto_increment, dtl varchar(50) default '');
Query OK, 0 rows affected (0.02 sec)
- In a separate session, run a script that generates transactions
[root@ms81 ~]# while :; do echo "insert into kk.k1(dtl) values('duangduangduang');" | mysql -S /data/mysql/mysql3316/tmp/mysql.sock; sleep 1;done
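To watch the table grow from yet another session, a companion loop over the same socket works (my addition, not in the original post):
[root@ms81 ~]# while :; do echo "select count(*) from kk.k1;" | mysql -N -S /data/mysql/mysql3316/tmp/mysql.sock; sleep 5; done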
Configuring MGR by hand
Configure the master and convert it to MGR
- Set the parameters
root@localhost [kk]>install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.03 sec)
root@localhost [kk]>set persist binlog_checksum=NONE;
Query OK, 0 rows affected (0.02 sec)
root@localhost [kk]>set persist transaction_write_set_extraction=XXHASH64;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>select uuid();
+--------------------------------------+
| uuid() |
+--------------------------------------+
| 3260d70c-966e-11ea-ba8b-0242c0a8bc51 |
+--------------------------------------+
1 row in set (0.00 sec)
root@localhost [kk]>set persist group_replication_group_name='3260d70c-966e-11ea-ba8b-0242c0a8bc51';
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set persist group_replication_local_address="192.168.188.81:13306";
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set persist group_replication_group_seeds="192.168.188.81:13306,192.168.188.82:13306,192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
# also set this one; see the tips at the end of this post
SET persist group_replication_recovery_get_public_key = 1;
root@localhost [kk]>set persist group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set persist group_replication_start_on_boot=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>set global group_replication_bootstrap_group=on;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>start group_replication;
Query OK, 0 rows affected (3.36 sec)
root@localhost [kk]>set global group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [kk]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 29ea8b7f-966d-11ea-937c-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
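Beyond the member list, per-member certification and queue counters are available too; a sketch of the query (output omitted here):
select member_id, count_transactions_in_queue, count_transactions_checked from performance_schema.replication_group_member_stats\G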
- At this point, the session generating transactions started throwing errors (START GROUP_REPLICATION switches the server to super_read_only, and writes only resume once the member is ONLINE as PRIMARY):
[root@ms81 ~]# while :; do echo "insert into kk.k1(dtl) values('duangduangduang');" | mysql -S /data/mysql/mysql3316/tmp/mysql.sock; sleep 1;done
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement
Now configure slave1 and convert it to MGR
root@localhost [(none)]>install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>set persist binlog_checksum=NONE;
Query OK, 0 rows affected (0.03 sec)
root@localhost [(none)]>set persist transaction_write_set_extraction=XXHASH64;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_name='3260d70c-966e-11ea-ba8b-0242c0a8bc51';
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>set persist group_replication_local_address="192.168.188.82:13306";
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_seeds="192.168.188.81:13306,192.168.188.82:13306,192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
# also set this one; see the tips at the end of this post
SET persist group_replication_recovery_get_public_key = 1;
root@localhost [(none)]>set persist group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_start_on_boot=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 2cbcfaa5-966d-11ea-8707-0242c0a8bc52 | ms82        | 3316        | OFFLINE      |             |                |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
root@localhost [(none)]>stop group_replication;
Query OK, 0 rows affected (4.78 sec)
root@localhost [(none)]>change master to master_user='rep',master_password='rep' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.03 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.88 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 29ea8b7f-966d-11ea-937c-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | 2cbcfaa5-966d-11ea-8707-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)
Apply the same recipe to slave2
Before converting it, it occurred to me that the topology had become: node1 (master) and node2 (slave1) form the MGR group, while node3 (slave2) is still an async slave of node1 (master). So let's check the current state of all three nodes:
node1:
root@localhost [kk]>select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
| 456 |
+----------+
1 row in set (0.00 sec)
root@localhost [kk]>show master status ;
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| mysql-bin.000002 | 154142 | | | 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111 |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
node2:
root@localhost [(none)]>select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
| 456 |
+----------+
1 row in set (0.00 sec)
root@localhost [(none)]>show master status ;
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| mysql-bin.000002 | 109956 | | | 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111 |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
## Note: node2's IO and SQL threads are not running, yet its Executed_Gtid_Set is keeping up
root@localhost [(none)]>show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.188.81
Master_User: rep
Master_Port: 3316
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 74606
Relay_Log_File: ms82-relay-bin.000004
Relay_Log_Pos: 74820
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: No
Slave_SQL_Running: No
...
...
Master_Server_Id: 813316
Master_UUID: f78a6902-9679-11ea-b136-0242c0a8bc51
Master_Info_File: mysql.slave_master_info
...
Retrieved_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-121,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Executed_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Auto_Position: 1
...
1 row in set (0.00 sec)
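This is expected: node2 now applies changes through the group_replication_applier channel rather than its old async channel. The per-channel view makes this explicit (a quick sketch, not from the original post):
select channel_name, service_state from performance_schema.replication_applier_status;
# group_replication_applier should be ON while the legacy channel's threads stay stopped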
node3:
root@localhost [(none)]>select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
| 456 |
+----------+
1 row in set (0.00 sec)
root@localhost [(none)]>show master status ;
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
| mysql-bin.000001 | 169340 | | | 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111 |
+------------------+----------+--------------+------------------+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
root@localhost [(none)]>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.188.81
Master_User: rep
Master_Port: 3316
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154142
Relay_Log_File: ms83-relay-bin.000004
Relay_Log_Pos: 154356
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
...
Master_UUID: f78a6902-9679-11ea-b136-0242c0a8bc51
Master_Info_File: mysql.slave_master_info
...
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
...
Retrieved_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Executed_Gtid_Set: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-350,
f78a6902-9679-11ea-b136-0242c0a8bc51:1-111
Auto_Position: 1
...
1 row in set (0.00 sec)
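All three nodes report the same row count and identical Executed_Gtid_Set values, so node3 can be converted safely. A quicker way to compare nodes than eyeballing show master status (my addition) is to read the executed set directly on each node:
select @@global.gtid_executed\G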
- Convert slave2
root@localhost [(none)]>install plugin group_replication soname 'group_replication.so';
Query OK, 0 rows affected (0.02 sec)
root@localhost [(none)]>set persist binlog_checksum=NONE;
Query OK, 0 rows affected (0.03 sec)
root@localhost [(none)]>set persist transaction_write_set_extraction=XXHASH64;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_name='3260d70c-966e-11ea-ba8b-0242c0a8bc51';
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_local_address="192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_group_seeds="192.168.188.81:13306,192.168.188.82:13306,192.168.188.83:13306";
Query OK, 0 rows affected (0.00 sec)
# also set this one; see the tips at the end of this post
SET persist group_replication_recovery_get_public_key = 1;
root@localhost [(none)]>set persist group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>set persist group_replication_start_on_boot=off;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>stop slave;
Query OK, 0 rows affected (0.01 sec)
root@localhost [(none)]>change master to master_user='rep',master_password='rep' for channel 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.05 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (4.64 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 29ea8b7f-966d-11ea-937c-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | 2cbcfaa5-966d-11ea-8707-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
| group_replication_applier | 2db7ddf1-966d-11ea-a7b3-0242c0a8bc53 | ms83 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
A few tips
The parameter file!
The biggest difference between converting to MGR by hand and doing it through MySQL Shell is that the latter automatically persists every change to mysqld-auto.cnf via set persist, while by hand you have to take care of this yourself. The whole experiment above never touched my.cnf; had I used set global throughout, the MGR parameters would be gone after a cold restart of the three nodes and MGR could not start. The fixes:
- During configuration, use set persist instead of set global so the settings are stored durably (see the sketch after this list).
- If you were too quick and already restarted all nodes, wiping the volatile settings, simply run the same set persist statements again; they take effect immediately, and group replication can then be started.
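To see exactly what set persist has recorded (the values land in mysqld-auto.cnf under the datadir), performance_schema keeps a live copy:
select * from performance_schema.persisted_variables where variable_name like 'group_replication%';
# RESET PERSIST <variable_name> drops a single persisted entry if you ever need to undo one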
The sha2_password curse
- After the set global run I restarted a node, reapplied the configuration with set persist, and started MGR. The master came ONLINE without trouble, but node2 got stuck in RECOVERING when joining the group.
- Checking the error log revealed:
2020-05-15T14:35:46.869802+08:00 21 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='ms81', master_port= 3316, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='ms81', master_port= 3316, master_log_file='', master_log_pos= 4, master_bind=''.
2020-05-15T14:35:46.906422+08:00 28 [Warning] [MY-010897] [Repl] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2020-05-15T14:35:46.907876+08:00 28 [ERROR] [MY-010584] [Repl] Slave I/O for channel 'group_replication_recovery': error connecting to master 'rep@ms81:3316' - retry-time: 60 retries: 1 message: Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection. Error_code: MY-002061
2020-05-15T14:35:46.923832+08:00 21 [ERROR] [MY-011582] [Repl] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'
2020-05-15T14:35:46.923887+08:00 21 [ERROR] [MY-011583] [Repl] Plugin group_replication reported: 'For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.'
- Check performance_schema.replication_connection_status
root@localhost [(none)]>select * from performance_schema.replication_connection_status\G
...
...
...
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_recovery
GROUP_NAME:
SOURCE_UUID:
THREAD_ID: NULL
SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET:
LAST_ERROR_NUMBER: 2061
LAST_ERROR_MESSAGE: error connecting to master 'rep@ms81:3316' - retry-time: 60 retries: 1 message: Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection.
...
...
3 rows in set (0.01 sec)
The member had fallen back to RECOVERING over a connection problem, so I tried adding the public-key option to the change master statement:
root@localhost [(none)]>change master to master_user='rep',master_password='rep',get_master_public_key=1 for channel 'group_replication_recovery';
ERROR 3139 (HY000): CHANGE MASTER with the given parameters cannot be performed on channel 'group_replication_recovery'.
Well, that's awkward.
- Temporary workaround: log in to the master once by hand. That first successful login primes caching_sha2_password's server-side credential cache for the rep account, after which the recovery channel authenticates via the fast path with no RSA key exchange needed:
[root@ms82 ~]# mysql -h 192.168.188.81 -P 3316 -urep -prep
rep@192.168.188.81 [(none)]>exit
[root@ms82 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>stop group_replication;
Query OK, 0 rows affected (4.75 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (5.75 sec)
root@localhost [(none)]>select * from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME:
GROUP_NAME:
SOURCE_UUID: 29ea8b7f-966d-11ea-937c-0242c0a8bc51
THREAD_ID: NULL
SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET: 29ea8b7f-966d-11ea-937c-0242c0a8bc51:1-530,
3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-343
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION:
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 3260d70c-966e-11ea-ba8b-0242c0a8bc51
SOURCE_UUID: 3260d70c-966e-11ea-ba8b-0242c0a8bc51
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET: 29ea8b7f-966d-11ea-937c-0242c0a8bc51:1-530,
3260d70c-966e-11ea-ba8b-0242c0a8bc51:1-781:787
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION: 3260d70c-966e-11ea-ba8b-0242c0a8bc51:787
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2020-05-15 14:38:54.721851
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2020-05-15 14:38:54.721874
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_recovery
GROUP_NAME:
SOURCE_UUID:
THREAD_ID: NULL
SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
RECEIVED_TRANSACTION_SET:
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION:
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
3 rows in set (0.00 sec)
- The proper fix
SET GLOBAL group_replication_recovery_use_ssl = ON;
SET GLOBAL group_replication_recovery_get_public_key = 1; # already folded into the steps above
SET GLOBAL group_replication_recovery_public_key_path = 'path to RSA public key file';
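If your security posture allows it, a fourth escape hatch is to move the replication account off caching_sha2_password entirely; I did not use this above, it is only a sketch:
-- assumption: downgrading the rep account's auth plugin is acceptable in this lab setup
alter user 'rep'@'%' identified with mysql_native_password by 'rep';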
Cold-starting MGR
- Shut down all three nodes
mysql> shutdown;
- Start node1
[root@ms81 ~]# mysqld --defaults-file=/data/mysql/mysql3316/my3316.cnf &
[root@ms81 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>set global group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.16 sec)
root@localhost [(none)]>set global group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | f78a6902-9679-11ea-b136-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
- Start node2
[root@ms82 ~]# mysqld --defaults-file=/data/mysql/mysql3316/my3316.cnf &
[root@ms82 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.45 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | f78a6902-9679-11ea-b136-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | faaab4c3-9679-11ea-896f-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)
- Likewise, start node3
[root@ms83 ~]# mysqld --defaults-file=/data/mysql/mysql3316/my3316.cnf &
[root@ms83 ~]# mysql -S /data/mysql/mysql3316/tmp/mysql.sock
root@localhost [(none)]>start group_replication;
Query OK, 0 rows affected (3.45 sec)
root@localhost [(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | f78a6902-9679-11ea-b136-0242c0a8bc51 | ms81 | 3316 | ONLINE | PRIMARY | 8.0.19 |
| group_replication_applier | faaab4c3-9679-11ea-896f-0242c0a8bc52 | ms82 | 3316 | ONLINE | SECONDARY | 8.0.19 |
| group_replication_applier | fb358b40-9679-11ea-94cb-0242c0a8bc53 | ms83 | 3316 | ONLINE | SECONDARY | 8.0.19 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.01 sec)
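Because group_replication_start_on_boot was deliberately persisted as off, every cold start requires this manual sequence. Once the setup is stable, the members can be allowed to rejoin on their own; only the bootstrap of the very first node must stay manual to avoid split-brain:
set persist group_replication_start_on_boot=ON;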
https://mp.weixin.qq.com/s/uDUepIGNDt0CPTclgEVWUQ
From "ITPUB Blog"; link: http://blog.itpub.net/31429259/viewspace-2723508/. Please credit the source when reposting.