ProxySQL Cluster: High-Availability Cluster Deployment Notes

Posted by 散盡浮華 on 2019-02-21

 

In earlier versions, making ProxySQL highly available meant running two redundant instances. The two instances did not share any data: after configuring the primary instance you still had to repeat the configuration on the standby, which was very inconvenient to manage. Starting with version 1.4.2, however, ProxySQL supports native Cluster mode, in which instances exchange configuration data with each other, greatly simplifying administration and maintenance.

ProxySQL is a decentralized proxy, and the usual recommendation is to deploy it close to the application servers. ProxySQL scales comfortably to hundreds of nodes because it supports runtime reconfiguration with immediate effect. That makes it possible to coordinate and reconfigure a fleet of ProxySQL instances with configuration-management tools such as Ansible/Chef/Puppet/Salt, or with service-discovery software such as Etcd/Consul/ZooKeeper, allowing highly customized cluster setups. Even so, this style of management has drawbacks:
-  it requires, and depends on, external software (configuration-management tooling);
-  as a direct consequence, it is not natively supported;
-  convergence time is unpredictable;
-  it offers no protection against network partitions.

For these reasons, ProxySQL 1.4.x introduced experimental native clustering. A cluster can be bootstrapped in several ways, e.g. 1+1+1, or (1+1)+1. The (1+1)+1 approach is the simplest: start two nodes as a cluster first, then let additional nodes join selectively.

I. ProxySQL Cluster Configuration Overview

Two main components implement ProxySQL clustering:
-  monitoring (cluster monitoring)
-  re-configuration (remote configuration)

Both components work with four tables:
-  mysql_query_rules
-  mysql_servers
-  mysql_users
-  proxysql_servers

1. Monitoring
To monitor the cluster, ProxySQL introduces several new tables, commands, and variables.

Admin variables
Several cluster-related variables are admin variables, meaning that changes to them take effect only after LOAD ADMIN VARIABLES TO RUNTIME.
1) Variables that control synchronization
admin-checksum_mysql_query_rules
admin-checksum_mysql_servers
admin-checksum_mysql_users: with millions of users it is recommended to disable this feature and not rely on it, because it can be very slow.
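
As a quick illustration (a minimal sketch against the ProxySQL admin interface; the variable names are real, the value change is only an example), these can be inspected and toggled like any other admin variable:

SELECT variable_name, variable_value FROM global_variables
WHERE variable_name LIKE 'admin-checksum_%';
-- e.g. disable user checksumming on an instance with millions of users:
UPDATE global_variables SET variable_value='false'
WHERE variable_name='admin-checksum_mysql_users';
LOAD ADMIN VARIABLES TO RUNTIME;    -- required for the change to take effect
SAVE ADMIN VARIABLES TO DISK;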

All three variables are booleans. When one of them is true, ProxySQL generates a new configuration checksum every time the corresponding command runs: "LOAD MYSQL QUERY RULES TO RUNTIME" for admin-checksum_mysql_query_rules, "LOAD MYSQL SERVERS TO RUNTIME" for admin-checksum_mysql_servers, and "LOAD MYSQL USERS TO RUNTIME" for admin-checksum_mysql_users. When a variable is false, the corresponding configuration is neither broadcast to other nodes nor synced from them.

2) Cluster credential variables

admin-cluster_username and admin-cluster_password: these credentials are used to monitor the other ProxySQL instances. Note that the user/password pair must also be listed in admin-admin_credentials, otherwise connections to the peers will fail. If no cluster credentials are defined, ProxySQL Cluster performs no checks at all. For example:

admin_credentials="admin:admin;cluster1:secret1pass"
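
The same credentials can also be set from the admin interface instead of the config file. A sketch, reusing the cluster1/secret1pass pair from the example above:

UPDATE global_variables SET variable_value='admin:admin;cluster1:secret1pass'
WHERE variable_name='admin-admin_credentials';
UPDATE global_variables SET variable_value='cluster1'
WHERE variable_name='admin-cluster_username';
UPDATE global_variables SET variable_value='secret1pass'
WHERE variable_name='admin-cluster_password';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;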

3) Check interval/frequency variables
-  admin-cluster_check_interval_ms: interval between checksum checks. Default 1000 (i.e. 1 second), minimum 10, maximum 300000.
-  admin-cluster_check_status_frequency: how many checksum checks to run before performing one status check. Default 10, minimum 0, maximum 10000.
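
For example (a sketch; the 200/100 values are the same ones used in the proxysql.cnf later in this post), to check peers every 200 ms and run a status check after every 100 checksum checks:

UPDATE global_variables SET variable_value='200'
WHERE variable_name='admin-cluster_check_interval_ms';
UPDATE global_variables SET variable_value='100'
WHERE variable_name='admin-cluster_check_status_frequency';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;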

4) Save-to-disk variables
After syncing configuration from a remote node, it is usually best to persist the change to disk immediately, so that it survives a restart.
-  admin-cluster_mysql_query_rules_save_to_disk
-  admin-cluster_mysql_servers_save_to_disk
-  admin-cluster_mysql_users_save_to_disk
-  admin-cluster_proxysql_servers_save_to_disk

All four are booleans. When true (the default), after a remote sync and load to runtime, the new mysql_query_rules, mysql_servers, mysql_users, or proxysql_servers configuration is also persisted to disk.
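
To confirm they are still at their default of true on a given node, a simple query works (sketch):

SELECT variable_name, variable_value FROM global_variables
WHERE variable_name LIKE 'admin-cluster_%_save_to_disk';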

5) Variables that gate remote syncing
For various reasons, several ProxySQL instances may reconfigure themselves at the same moment.
For example, every instance may be monitoring MySQL replication and detect the same failover; within a very short window (possibly under a second) each instance converges on the new configuration on its own, without needing to pull it from a peer.

Similarly, when all instances detect a transient network problem with some backend, or a MySQL node that is lagging behind (replication lag), they all shun that node automatically. Again, each instance reconfigures itself at the same time, with no remote sync needed.

Based on this, the cluster can be configured so that instances do not immediately sync a given piece of configuration from peers, but instead wait a certain number of checks before triggering a remote sync. However, if these threshold variables are set differently on the local and remote nodes, a remote sync may still be triggered.

-  admin-cluster_mysql_query_rules_diffs_before_sync
-  admin-cluster_mysql_servers_diffs_before_sync
-  admin-cluster_mysql_users_diffs_before_sync
-  admin-cluster_proxysql_servers_diffs_before_sync

These define how many mismatching checks must occur before a remote sync of mysql_query_rules, mysql_servers, mysql_users or proxysql_servers is triggered. Default 3; minimum 0, meaning never sync remotely; maximum 1000.

For example, every instance monitors the mysql_servers checksums of its peers and compares them against its own. If a peer's checksum keeps differing from the local one across several checks, the load-to-runtime timestamps decide whether mysql_servers is pulled from that remote node.

6) Delayed synchronization
ProxySQL Cluster lets you define how many checksum mismatches must accumulate before configuration is synced within the cluster.
query rules, servers, users and proxysql servers each have a corresponding admin-cluster_XXX_diffs_before_sync parameter, range 0 to 1000, where 0 means never sync. Default 3.
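
For instance (a sketch; the value 5 is purely illustrative), to make a node tolerate five mismatching mysql_servers checks before pulling from a peer:

UPDATE global_variables SET variable_value='5'
WHERE variable_name='admin-cluster_mysql_servers_diffs_before_sync';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;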

Configuration tables

1) The proxysql_servers table
proxysql_servers lists the ProxySQL instances that belong to the cluster. To see which instances are in the cluster, query this table. When adding a ProxySQL instance you either INSERT into this table or edit the proxysql_servers section of the .cnf file. The table is defined as:

CREATE TABLE proxysql_servers (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 0,
    comment VARCHAR NOT NULL DEFAULT '',
    PRIMARY KEY (hostname, port) )

The columns mean:
-  hostname: hostname or IP address of the ProxySQL instance;
-  port: port of the ProxySQL instance (note: this is the instance's admin port);
-  weight: currently unused; intended to define the relative weight of each ProxySQL node;
-  comment: free-form comment field.
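
Adding a member at runtime therefore looks like this (a sketch; the 172.16.0.103 host simply continues the example addresses below and is purely illustrative):

INSERT INTO proxysql_servers (hostname, port, weight, comment)
VALUES ('172.16.0.103', 6032, 0, 'proxysql3');
LOAD PROXYSQL SERVERS TO RUNTIME;   -- activate immediately
SAVE PROXYSQL SERVERS TO DISK;      -- survive a restart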

proxysql_servers entries can also be loaded from the legacy configuration file. An example proxysql_servers definition in the config file:

proxysql_servers =
(
    {
        hostname="172.16.0.101"
        port=6032
        weight=0
        comment="proxysql1"
    },
    {
        hostname="172.16.0.102"
        port=6032
        weight=0
        comment="proxysql2"
    }
)

Important notes:
-  ProxySQL reads the legacy configuration file only when the on-disk database file does not exist, or when started with the --initial option.
-  At the time of writing, while the Cluster feature was still experimental, this table was not automatically read from the config file. In other words, on those versions the contents of the proxysql_servers table could not be configured via the config file!

2) The runtime_proxysql_servers table
Like the other runtime_ tables, runtime_proxysql_servers has exactly the same structure as proxysql_servers, but it reflects the runtime data structures, i.e. the configuration currently in effect. The table is defined as:

CREATE TABLE runtime_proxysql_servers (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 0,
    comment VARCHAR NOT NULL DEFAULT '',
    PRIMARY KEY (hostname, port) )

3) The runtime_checksums_values table
runtime_checksums_values is the first runtime_ table that is not backed by a table in the in-memory database (in other words, there is no checksums_values table). It is defined as:

CREATE TABLE runtime_checksums_values (
    name VARCHAR NOT NULL,
    version INT NOT NULL,
    epoch INT NOT NULL,
    checksum VARCHAR NOT NULL,
    PRIMARY KEY (name))

The table shows information recorded when LOAD ... TO RUNTIME commands are executed:
-  name: module name
-  version: how many times LOAD ... TO RUNTIME has been executed for that module, counting both explicit invocations and implicit ones (some events cause ProxySQL to run LOAD ... TO RUNTIME internally)
-  epoch: timestamp of the most recent LOAD ... TO RUNTIME
-  checksum: the configuration checksum generated by that LOAD ... TO RUNTIME

An example of this table:

Admin> SELECT * FROM runtime_checksums_values;
+-------------------+---------+------------+--------------------+
| name              | version | epoch      | checksum           |
+-------------------+---------+------------+--------------------+
| admin_variables   | 0       | 0          |                    |
| mysql_query_rules | 5       | 1503442167 | 0xD3BD702F8E759B1E |
| mysql_servers     | 1       | 1503440533 | 0x6F8CEF0F4BD6456E |
| mysql_users       | 1       | 1503440533 | 0xF8BDF26C65A70AC5 |
| mysql_variables   | 0       | 0          |                    |
| proxysql_servers  | 2       | 1503442214 | 0x89768E27E4931C87 |
+-------------------+---------+------------+--------------------+
6 rows in set (0,00 sec)

Important note: of the six modules, only four currently generate a checksum; admin_variables and mysql_variables do not. A checksum is generated only when LOAD ... TO RUNTIME is executed and the matching admin-checksum_XXX variable is true. Specifically:
LOAD MYSQL QUERY RULES TO RUNTIME: generates a new mysql_query_rules checksum if admin-checksum_mysql_query_rules=true
LOAD MYSQL SERVERS TO RUNTIME: generates a new mysql_servers checksum if admin-checksum_mysql_servers=true
LOAD MYSQL USERS TO RUNTIME: generates a new mysql_users checksum if admin-checksum_mysql_users=true
LOAD PROXYSQL SERVERS TO RUNTIME: always generates a new proxysql_servers checksum
LOAD ADMIN VARIABLES TO RUNTIME: no checksum generated
LOAD MYSQL VARIABLES TO RUNTIME: no checksum generated

New commands:
LOAD PROXYSQL SERVERS FROM MEMORY / LOAD PROXYSQL SERVERS TO RUNTIME
loads the proxysql servers configuration from the in-memory database into the runtime data structures
SAVE PROXYSQL SERVERS TO MEMORY / SAVE PROXYSQL SERVERS FROM RUNTIME
persists the proxysql servers configuration from the runtime data structures into the in-memory database
LOAD PROXYSQL SERVERS TO MEMORY / LOAD PROXYSQL SERVERS FROM DISK
loads the proxysql servers configuration from the on-disk database into the in-memory database
LOAD PROXYSQL SERVERS FROM CONFIG
loads the proxysql servers configuration from the legacy configuration file into the in-memory database
SAVE PROXYSQL SERVERS FROM MEMORY / SAVE PROXYSQL SERVERS TO DISK
persists the proxysql servers configuration from the in-memory database into the on-disk database
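
Putting these together, a typical round trip after editing proxysql_servers in the in-memory database looks like this (sketch):

LOAD PROXYSQL SERVERS TO RUNTIME;        -- activate the new member list
SAVE PROXYSQL SERVERS TO DISK;           -- persist it across restarts
SELECT * FROM runtime_proxysql_servers;  -- verify what is actually in effect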

Stats tables
Three tables were added to the stats schema.

1) The stats_proxysql_servers_checksums table
This table records the per-module checksum information of every instance in the cluster.

Admin> SHOW CREATE TABLE stats.stats_proxysql_servers_checksums\G
*************************** 1. row ***************************
       table: stats_proxysql_servers_checksums
Create Table: CREATE TABLE stats_proxysql_servers_checksums (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    name VARCHAR NOT NULL,
    version INT NOT NULL,
    epoch INT NOT NULL,
    checksum VARCHAR NOT NULL,
    changed_at INT NOT NULL,
    updated_at INT NOT NULL,
    diff_check INT NOT NULL,
    PRIMARY KEY (hostname, port, name) )

The columns mean:
-  hostname: hostname of the ProxySQL instance
-  port: port of the ProxySQL instance
-  name: module name as reported in the peer's runtime_checksums_values
-  version: version of the checksum as reported in the peer's runtime_checksums_values
   Note: a freshly started ProxySQL instance has version=1. An instance will never sync configuration from a peer with version=1, because an instance that has just started is unlikely to be the source of truth; this prevents a newly joined node from wrecking the current cluster configuration.
-  epoch: epoch timestamp of the checksum as reported in the peer's runtime_checksums_values
-  checksum: the checksum value as reported in the peer's runtime_checksums_values
-  changed_at: timestamp at which a checksum change was detected
-  updated_at: timestamp at which this row was last refreshed
-  diff_check: a counter of how many consecutive checks found the peer's checksum different from the local one

A remote reconfiguration is triggered only once diff_check reaches the threshold. As explained above, when several instances change configuration at (almost) the same time, ProxySQL can wait through several checks before deciding whether to sync from a peer; diff_check is the counter behind that. If diff_check keeps growing but no sync is ever triggered, the peer is not a trustworthy source, for example because it has version=1. Conversely, if some peer never converges with the rest of the cluster, the cluster has no trustworthy source at all; this can happen when all instances started with different configurations and cannot decide on their own which one is correct. Running LOAD ... TO RUNTIME on one node promotes it to the authoritative source for that piece of configuration.
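
A convenient way to watch this mechanism at work is to poll the table and keep an eye on version, epoch and diff_check (sketch):

SELECT hostname, port, name, version, epoch, diff_check
FROM stats_proxysql_servers_checksums
ORDER BY hostname, name;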

2) The stats_proxysql_servers_metrics table
This table shows a few system metrics retrieved by running SHOW MYSQL STATUS against each instance in the cluster. For now the table exists only for debugging; in the future these metrics may be used to reflect the health of each instance.

Admin> SHOW CREATE TABLE stats.stats_proxysql_servers_metrics\G
*************************** 1. row ***************************
       table: stats_proxysql_servers_metrics
Create Table: CREATE TABLE stats_proxysql_servers_metrics (
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 6032,
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 0,
    comment VARCHAR NOT NULL DEFAULT '',
    response_time_ms INT NOT NULL,
    Uptime_s INT NOT NULL,
    last_check_ms INT NOT NULL,
    Queries INT NOT NULL,
    Client_Connections_connected INT NOT NULL,
    Client_Connections_created INT NOT NULL,
    PRIMARY KEY (hostname, port) )

The columns, populated from SHOW MYSQL STATUS, mean:
-  hostname: hostname of the ProxySQL instance
-  port: port of the ProxySQL instance
-  weight: same as proxysql_servers.weight
-  comment: same as proxysql_servers.comment
-  response_time_ms: response time of SHOW MYSQL STATUS, in milliseconds
-  Uptime_s: uptime of the ProxySQL instance, in seconds
-  last_check_ms: how long ago the last check ran, in milliseconds
-  Queries: number of queries the instance has executed
-  Client_Connections_connected: number of client connections currently connected
-  Client_Connections_created: number of client connections created so far
Note: these values are currently for debugging only, but may serve as health metrics for remote instances in the future.

3) stats_proxysql_servers_status
Not yet enabled as of version 1.4.6.

2. Re-configuration
All cluster nodes monitor one another, so each ProxySQL node quickly detects when another instance's configuration changes. When that happens, the node first compares the remote configuration against its own, because the local configuration may have changed at the same moment (or within a very small time window).

If they differ:
-  If the local version = 1, the node finds the peer with version > 1 and the highest epoch, pulls that peer's configuration, and syncs immediately.
-  If the local version > 1, the node starts counting how many checks show a differing configuration.
   -  Once the counter exceeds cluster__name__diffs_before_sync, and cluster__name__diffs_before_sync > 0, the node finds the peer with version > 1 and the highest epoch, pulls that peer's configuration, and applies it immediately.

The synchronization itself works as follows:
The same connections used for health checks also execute a series of SELECT statements of the form select _list_of_columns_ from runtime_module. For example:

SELECT hostgroup_id, hostname, port, status, weight, compression, max_connections, max_replication_lag, use_ssl, max_latency_ms, comment FROM runtime_mysql_servers;
SELECT writer_hostgroup, reader_hostgroup, comment FROM runtime_mysql_replication_hostgroups;

The local configuration is then deleted. For example:

DELETE FROM mysql_servers;
DELETE FROM mysql_replication_hostgroups;

The new configuration retrieved from the remote node is inserted into the local configuration tables. Then:
-  LOAD module_name TO RUNTIME is executed internally: this bumps the version and generates a new checksum.
-  If cluster_name_save_to_disk=true, SAVE module_name TO DISK is also executed internally.

3. Network overhead
In the architecture described above, every node monitors every other node: a fully meshed, classic peer-to-peer network.
To reduce network overhead, nodes do not constantly exchange the full checksum list. Instead, each node folds all of its versions and all of its module checksums into one single combined checksum (note: one global checksum per node, not one shared by the whole cluster), and only that value is exchanged. When a peer's global checksum changes, the detailed per-module checksum list behind it is retrieved. With this technique, a 200-node cluster where each node checks its peers every 1000 ms needs only about 50KB of inbound/outbound bandwidth per node.

1) What ProxySQL clustering currently supports
-  support for MySQL Group Replication
-  support for Scheduler

2) What ProxySQL clustering may support in the future
A tentative list of Cluster features not yet implemented; once implemented they may differ from what is described here.
-  support for master election: ProxySQL will use the term master rather than leader
-  only the master node will be writable/configurable
-  MySQL-replication-like behavior from master to slaves: configuration could be pushed in near real time instead of pulled as it is today
-  MySQL-replication-like behavior from master to candidate masters
-  MySQL-replication-like behavior from candidate masters to slaves
-  candidate masters defined as quorum-voting nodes; slaves do not vote

3) Q: What happens if different nodes load different configurations at the same moment? Does the last one win?
There is no master and no master election yet, which means multiple nodes can potentially run load commands concurrently (each acting as if it were a master). Each instance detects configuration conflicts based on timestamps and then triggers automatic reconfiguration. If all nodes load the same configuration at the same moment, nothing happens. If nodes load different configurations at different times, the last one wins. If nodes load different configurations at the same moment, those configurations propagate normally until a conflict is detected, and are then rolled back. Fortunately, every node knows every other node's checksum, so divergence is easy to monitor and detect.

4) Q: Who is responsible for writing the configuration to all nodes?
Currently ProxySQL Cluster uses a pull mechanism: when a node detects that it needs to reconfigure itself, it pulls the configuration from the node with the most recent version and applies it locally.

5) Q: How will election be implemented? Raft?
Election is on the roadmap, but it will probably not use the Raft consensus protocol. ProxySQL already uses tables to store configuration and the MySQL protocol to run peer health checks, query configuration, and implement heartbeats. For ProxySQL's purposes the MySQL protocol is simply more versatile than Raft.

6) Q: What happens if, for some reason, a node cannot fetch the new configuration from a remote node?
Configuration changes propagate asynchronously, so a node may temporarily miss a new configuration, e.g. because of a network problem. As soon as the instance detects the newer configuration again, it fetches it automatically.

7) Q: Does a ProxySQL cluster work across DCs? What is the best practice: one cluster per DC?
A ProxySQL cluster has no boundaries: one cluster can span multiple DCs, and one DC can host multiple clusters; it depends on the actual use case. The only restriction is that each ProxySQL instance can belong to exactly one cluster. Clusters have no names, so to make sure an instance does not join the wrong cluster, give each cluster its own cluster credentials.

8) Q: How do you bootstrap a ProxySQL cluster?
Simple: just make sure the proxysql_servers table contains more than one node.

9) Q: How do the other nodes in the cluster learn about a new node?
They cannot learn about it automatically; this is deliberate, to prevent a new node from wrecking the cluster. A new node pulls the cluster configuration as soon as it joins, but it does not advertise itself as a trustworthy configuration source. To make the existing nodes aware of the new one, add it to their proxysql_servers table and run load proxysql servers to runtime on each of them, as in the sketch below.
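
A sketch of that procedure, to be run on every existing member (the new node's address 172.16.60.230 is purely illustrative):

INSERT INTO proxysql_servers (hostname, port, weight, comment)
VALUES ('172.16.60.230', 6032, 0, 'new node');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;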

II. ProxySQL Cluster + MGR High-Availability Deployment Walkthrough

The following deploys a two-node ProxySQL Cluster in front of an MGR setup (a GTID-based replication setup works the same way). Combined with keepalived and a floating VIP, this yields an HA stack in which a ProxySQL node failure is transparent to clients.

1. Environment

172.16.60.211       MGR-node1 (master1)    
172.16.60.212       MGR-node2 (master2)     
172.16.60.213       MGR-node3 (master3)     
172.16.60.214       ProxySQL-node1          
172.16.60.220       ProxySQL-node2                   

[root@MGR-node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
    
For convenience, disable the firewall on every node
[root@MGR-node1 ~]# systemctl stop firewalld
[root@MGR-node1 ~]# firewall-cmd --state
not running
    
[root@MGR-node1 ~]# cat /etc/sysconfig/selinux |grep "SELINUX=disabled"
SELINUX=disabled
[root@MGR-node1 ~]# setenforce 0           
setenforce: SELinux is disabled
[root@MGR-node1 ~]# getenforce             
Disabled
  
One crucial point: every MySQL node's hostname must be set correctly, and every member must be reachable by hostname!

You must therefore bind the hostnames in /etc/hosts on every node; otherwise joining nodes to the group will fail later, stuck in RECOVERING!!
[root@MGR-node1 ~]# cat /etc/hosts
........
172.16.60.211    MGR-node1
172.16.60.212    MGR-node2
172.16.60.213    MGR-node3

2. Install MySQL 5.7 on the three MGR nodes

Install MySQL 5.7 on the three MySQL nodes via yum; reference: https://www.cnblogs.com/kevingrace/p/8340690.html

Install the MySQL yum repository
[root@MGR-node1 ~]# yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm
    
Install MySQL 5.7
[root@MGR-node1 ~]# yum install -y mysql-community-server
    
Start the MySQL server and enable it at boot
[root@MGR-node1 ~]# systemctl start mysqld.service
[root@MGR-node1 ~]# systemctl enable mysqld.service
    
Set a login password
Since 5.7, MySQL no longer allows logging in with an empty password after a fresh install. For better security it generates a random password for the administrator's first login;
the password is recorded in /var/log/mysqld.log and can be retrieved with:
[root@MGR-node1 ~]# cat /var/log/mysqld.log|grep 'A temporary password'
2019-01-11T05:53:17.824073Z 1 [Note] A temporary password is generated for root@localhost: TaN.k:*Qw2xs
    
Log in to mysql with the password found above (TaN.k:*Qw2xs) and reset it to 123456
[root@MGR-node1 ~]# mysql -p                 # enter the default password: TaN.k:*Qw2xs
.............
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
    
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
    
mysql> set password=password("123456");
Query OK, 0 rows affected, 1 warning (0.00 sec)
    
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
    
Check the MySQL version
[root@MGR-node1 ~]# mysql -p123456
........
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.24    |
+-----------+
1 row in set (0.00 sec)
   
=====================================================================
Tip
After the default installation above, MySQL 5.7 may reject statements with:
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

This error is governed by the password-policy variable validate_password_policy, which can take the values 0, 1 and 2.
Workaround:
set global validate_password_policy=0;
set global validate_password_length=1;

3. Deploy MGR group replication (multi-primary mode)

Reference: https://www.cnblogs.com/kevingrace/p/10260685.html

Earlier tests left state behind, so wipe the MySQL environment on all three nodes first:
# systemctl  stop  mysqld
# rm -rf /var/lib/mysql
# systemctl start mysqld
 
Then reset the password
# cat /var/log/mysqld.log|grep 'A temporary password'
# mysql -p123456
mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=1;
mysql> set password=password("123456");
mysql> flush privileges;
 
=======================================================
1) Operations on MGR-node1
[root@MGR-node1 ~]# mysql -p123456
.........
mysql> select uuid();
+--------------------------------------+
| uuid()                               |
+--------------------------------------+
| ae09faae-34bb-11e9-9f91-005056ac6820 |
+--------------------------------------+
1 row in set (0.00 sec)
 
[root@MGR-node1 ~]# cp /etc/my.cnf /etc/my.cnf.bak
[root@MGR-node1 ~]# >/etc/my.cnf
[root@MGR-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
        
symbolic-links = 0
        
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
    
#GTID:
server_id = 1
gtid_mode = on
enforce_gtid_consistency = on
    
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
        
#binlog
log_bin = mysql-bin
log-slave-updates = 1
binlog_format = row
sync-master-info = 1
sync_binlog = 1
       
#relay log
skip_slave_start = 1
    
transaction_write_set_extraction=XXHASH64     
loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"    
loose-group_replication_start_on_boot=off   
loose-group_replication_local_address= "172.16.60.211:24901"
loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=off     
loose-group_replication_enforce_update_everywhere_checks=on   
loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"  
 
Restart the MySQL service
[root@MGR-node1 ~]# systemctl restart mysqld
 
Log in to mysql and perform the setup
[root@MGR-node1 ~]# mysql -p123456
............
mysql> SET SQL_LOG_BIN=0; 
Query OK, 0 rows affected (0.00 sec)
 
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
 
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
 
mysql> reset master;
Query OK, 0 rows affected (0.19 sec)
 
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
 
mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.33 sec)
 
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.03 sec)
 
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
...............
...............
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.00 sec)
 
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
 
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.34 sec)
 
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
 
The group_replication_applier member above must show MEMBER_STATE "ONLINE"!
 
Create a test database
mysql> CREATE DATABASE kevin CHARACTER SET utf8 COLLATE utf8_general_ci;     
Query OK, 1 row affected (0.03 sec)
  
mysql> use kevin;
Database changed
mysql> create table if not exists haha (id int(10) PRIMARY KEY AUTO_INCREMENT,name varchar(50) NOT NULL);
Query OK, 0 rows affected (0.24 sec)
  
mysql> insert into kevin.haha values(1,"wangshibo"),(2,"guohuihui"),(3,"yangyang"),(4,"shikui");     
Query OK, 4 rows affected (0.07 sec)
Records: 4  Duplicates: 0  Warnings: 0
  
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)
 
=====================================================================
2) Operations on MGR-node2
[root@MGR-node2 ~]# cp /etc/my.cnf /etc/my.cnf.bak
[root@MGR-node2 ~]# >/etc/my.cnf
[root@MGR-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
      
symbolic-links = 0
      
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
  
#GTID:
server_id = 2
gtid_mode = on
enforce_gtid_consistency = on
  
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
      
#binlog
log_bin = mysql-bin
log-slave-updates = 1
binlog_format = row
sync-master-info = 1
sync_binlog = 1
     
#relay log
skip_slave_start = 1
  
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "172.16.60.212:24901"
loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=off
loose-group_replication_enforce_update_everywhere_checks=on
loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"
 
Restart the MySQL service
[root@MGR-node2 ~]# systemctl restart mysqld
Log in to mysql and perform the setup
[root@MGR-node2 ~]# mysql -p123456
.........
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
 
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
 
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
 
mysql> reset master;
Query OK, 0 rows affected (0.17 sec)
 
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
 
mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.21 sec)
 
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.20 sec)
 
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
.............
.............
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.00 sec)
 
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (6.25 sec)
 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 4281f7b7-34bb-11e9-8949-00505688047c | MGR-node2   |        3306 | ONLINE       |
| group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)
 
Checking here shows that the data created on MGR-node1 has already been replicated over
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kevin              |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
 
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)
 
=====================================================================
3) Operations on MGR-node3
[root@MGR-node3 ~]# cp /etc/my.cnf /etc/my.cnf.bak
[root@MGR-node3 ~]# >/etc/my.cnf
[root@MGR-node3 ~]# vim /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
      
symbolic-links = 0
      
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
  
#GTID:
server_id = 3
gtid_mode = on
enforce_gtid_consistency = on
  
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
      
#binlog
log_bin = mysql-bin
log-slave-updates = 1
binlog_format = row
sync-master-info = 1
sync_binlog = 1
     
#relay log
skip_slave_start = 1
  
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "172.16.60.213:24901"
loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=off
loose-group_replication_enforce_update_everywhere_checks=on
loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"
 
Restart the MySQL service
[root@MGR-node3 ~]# systemctl restart mysqld
 
Log in to mysql and perform the setup
[root@MGR-node3 ~]# mysql -p123456
..........
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
 
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
 
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
 
mysql> reset master;
Query OK, 0 rows affected (0.10 sec)
 
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
 
mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.27 sec)
 
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.04 sec)
 
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
.............
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
46 rows in set (0.00 sec)
 
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (4.54 sec)
 
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 4281f7b7-34bb-11e9-8949-00505688047c | MGR-node2   |        3306 | ONLINE       |
| group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
| group_replication_applier | 456216bd-34bb-11e9-bbd1-005056880888 | MGR-node3   |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
 
Checking here shows that the data created on the other nodes has already been replicated over
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kevin              |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
 
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)
 
=====================================================================
4) Group replication sync test
Run on any node:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2658b203-1565-11e9-9f8b-005056880888 | MGR-node3   |        3306 | ONLINE       |
| group_replication_applier | 2c1efc46-1565-11e9-ab8e-00505688047c | MGR-node2   |        3306 | ONLINE       |
| group_replication_applier | 317e2aad-1565-11e9-9c2e-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
  
As shown above, the GTID-based group replication environment is up on MGR-node1, MGR-node2 and MGR-node3.
Updating data on any one of the three nodes now replicates the change to the other two!
  
1) Update data on MGR-node1
mysql> delete from kevin.haha where id>2;
Query OK, 2 rows affected (0.14 sec)
  
Then check on MGR-node2 and MGR-node3: the change has been replicated!
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
+----+-----------+
2 rows in set (0.00 sec)
  
2) Update data on MGR-node2
mysql> insert into kevin.haha values(11,"beijing"),(12,"shanghai"),(13,"anhui");
Query OK, 3 rows affected (0.06 sec)
Records: 3  Duplicates: 0  Warnings: 0
  
Then check on MGR-node1 and MGR-node3: the change has been replicated!
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
| 11 | beijing   |
| 12 | shanghai  |
| 13 | anhui     |
+----+-----------+
5 rows in set (0.00 sec)
  
3) Update data on MGR-node3
mysql> update kevin.haha set id=100 where name="anhui";
Query OK, 1 row affected (0.16 sec)
Rows matched: 1  Changed: 1  Warnings: 0
  
mysql> delete from kevin.haha where id=12;
Query OK, 1 row affected (0.22 sec)
  
Then check on MGR-node1 and MGR-node2: the change has been replicated!
mysql> select * from kevin.haha;
+-----+-----------+
| id  | name      |
+-----+-----------+
|   1 | wangshibo |
|   2 | guohuihui |
|  11 | beijing   |
| 100 | anhui     |
+-----+-----------+
4 rows in set (0.00 sec)

4. Install ProxySQL, configure read/write splitting, and deploy the cluster

1) Install a mysql client on both ProxySQL nodes, used locally to connect to ProxySQL's admin interface

[root@ProxySQL-node1 ~]# vim /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.3.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
    
Install the MariaDB client
[root@ProxySQL-node1 ~]# yum install -y MariaDB-client
   
============================================================================
If you hit this error:
Error: MariaDB-compat conflicts with 1:mariadb-libs-5.5.60-1.el7_5.x86_64
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
    
Fix:
[root@ProxySQL-node1 ~]# rpm -qa|grep mariadb
mariadb-libs-5.5.60-1.el7_5.x86_64
  
[root@ProxySQL-node1 ~]# rpm -e mariadb-libs-5.5.60-1.el7_5.x86_64 --nodeps
[root@ProxySQL-node1 ~]# yum install -y MariaDB-client

2) Install ProxySQL on both ProxySQL instance nodes

ProxySQL rpm download: https://pan.baidu.com/s/1S1_b5DKVCpZSOUNmtCXrrg
extraction code: 5t1c
  
All ProxySQL releases: https://github.com/sysown/proxysql/releases
      
[root@ProxySQL-node ~]# yum install -y perl-DBI perl-DBD-MySQL
[root@ProxySQL-node ~]# rpm -ivh proxysql-1.4.8-1-centos7.x86_64.rpm --force
      
Start proxysql (or use "/etc/init.d/proxysql start")
[root@ProxySQL-node ~]# systemctl start proxysql
[root@ProxySQL-node ~]# systemctl restart proxysql

[root@ProxySQL-node ~]# ss -lntup|grep proxy
tcp    LISTEN     0      128       *:6080                  *:*                   users:(("proxysql",pid=29931,fd=11))
tcp    LISTEN     0      128       *:6032                  *:*                   users:(("proxysql",pid=29931,fd=28))
tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=27))
tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=26))
tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=25))
tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=24))
      
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
............
............
MySQL [(none)]> show databases;
+-----+---------------+-------------------------------------+
| seq | name          | file                                |
+-----+---------------+-------------------------------------+
| 0   | main          |                                     |
| 2   | disk          | /var/lib/proxysql/proxysql.db       |
| 3   | stats         |                                     |
| 4   | monitor       |                                     |
| 5   | stats_history | /var/lib/proxysql/proxysql_stats.db |
+-----+---------------+-------------------------------------+
5 rows in set (0.000 sec)
    
Next, initialize ProxySQL by deleting all previous proxysql data
    
MySQL [(none)]> delete from scheduler ;
Query OK, 0 rows affected (0.000 sec)
     
MySQL [(none)]> delete from mysql_servers;
Query OK, 3 rows affected (0.000 sec)
     
MySQL [(none)]> delete from mysql_users;
Query OK, 1 row affected (0.000 sec)
     
MySQL [(none)]> delete from mysql_query_rules;
Query OK, 0 rows affected (0.000 sec)
     
MySQL [(none)]> delete from mysql_group_replication_hostgroups ;
Query OK, 1 row affected (0.000 sec)
     
MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
     
MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.175 sec)
     
MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.003 sec)
     
MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.140 sec)
     
MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
     
MySQL [(none)]> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.050 sec)
     
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
     
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.096 sec)
     
MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
     
MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.156 sec)
     
MySQL [(none)]>

3) Configure ProxySQL Cluster

1) First, configure the proxysql.cnf file on both instances, 172.16.60.214 and 172.16.60.220
[root@ProxySQL-node1 ~]# cp /etc/proxysql.cnf /etc/proxysql.cnf.bak
[root@ProxySQL-node1 ~]# vim /etc/proxysql.cnf
...............
...............
# the part that needs changing
admin_variables=
{
        admin_credentials="admin:admin;cluster_kevin:123456"                 # account used for communication between proxysql cluster nodes
#       mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
        mysql_ifaces="0.0.0.0:6032"                                          # allow admin logins from any address
#       refresh_interval=2000
#       debug=true
        cluster_username="cluster_kevin"                                     # cluster username, must match admin_credentials above
        cluster_password="123456"                                            # cluster password, must match admin_credentials above
        cluster_check_interval_ms=200                      
        cluster_check_status_frequency=100
        cluster_mysql_query_rules_save_to_disk=true
        cluster_mysql_servers_save_to_disk=true
        cluster_mysql_users_save_to_disk=true
        cluster_proxysql_servers_save_to_disk=true
        cluster_mysql_query_rules_diffs_before_sync=3
        cluster_mysql_servers_diffs_before_sync=3
        cluster_mysql_users_diffs_before_sync=3
        cluster_proxysql_servers_diffs_before_sync=3
}
 
proxysql_servers =                                                           # define the cluster members up front in this section
(
        {
                hostname="172.16.60.214"
                port=6032
                weight=1
                comment="ProxySQL-node1"                                                                 
        },
        {
                hostname="172.16.60.220"
                port=6032
                weight=1
                comment="ProxySQL-node2"
        }
)
...............
...............

Copy the proxysql.cnf file over to the other instance
[root@ProxySQL-node1 ~]# rsync -e "ssh -p22" -avpgolr /etc/proxysql.cnf root@172.16.60.220:/etc/

2) Restart the proxysql service on both instances, 172.16.60.214 and 172.16.60.220
 
Pay special attention here:
If a "proxysql.db" file exists (under /var/lib/proxysql), ProxySQL reads and parses proxysql.cnf only on its very first start;
subsequent starts ignore proxysql.cnf! To have the settings in proxysql.cnf take effect on a restart (i.e. to make proxysql
re-read and re-parse proxysql.cnf), first delete /var/lib/proxysql/proxysql.db and then restart the proxysql service. This is
effectively an initial start again and produces a pristine proxysql.db (any previously configured routing rules etc. will be wiped).
 
Restart proxysql on the first instance, 172.16.60.214, making it re-read and parse proxysql.cnf
[root@ProxySQL-node1 ~]# rm -rf /var/lib/proxysql/proxysql.db 
[root@ProxySQL-node1 ~]# ll /var/lib/proxysql/proxysql.db
ls: cannot access /var/lib/proxysql/proxysql.db: No such file or directory
[root@ProxySQL-node1 ~]# systemctl restart proxysql             
[root@ProxySQL-node1 ~]# ll /var/lib/proxysql/proxysql.db
-rw------- 1 root root 122880 Feb 25 14:42 /var/lib/proxysql/proxysql.db
 
Restart proxysql on the second instance, 172.16.60.220, making it re-read and parse proxysql.cnf
[root@ProxySQL-node2 ~]# rm -rf /var/lib/proxysql/proxysql.db
[root@ProxySQL-node2 ~]# ll /var/lib/proxysql/proxysql.db
ls: cannot access /var/lib/proxysql/proxysql.db: No such file or directory
[root@ProxySQL-node2 ~]# systemctl restart proxysql                   
[root@ProxySQL-node2 ~]# ll /var/lib/proxysql/proxysql.db
-rw------- 1 root root 122880 Feb 25 14:43 /var/lib/proxysql/proxysql.db

3) Observe the cluster state (can be checked on both 172.16.60.214 and 172.16.60.220)
[root@ProxySQL-node1 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
.............
MySQL [(none)]> select * from proxysql_servers;
+---------------+------+--------+----------------+
| hostname      | port | weight | comment        |
+---------------+------+--------+----------------+
| 172.16.60.214 | 6032 | 1      | ProxySQL-node1 |
| 172.16.60.220 | 6032 | 1      | ProxySQL-node2 |
+---------------+------+--------+----------------+
2 rows in set (0.000 sec)

MySQL [(none)]> select * from stats_proxysql_servers_metrics; 
+---------------+------+--------+----------------+------------------+----------+---------------+---------+------------------------------+----------------------------+
| hostname      | port | weight | comment        | response_time_ms | Uptime_s | last_check_ms | Queries | Client_Connections_connected | Client_Connections_created |
+---------------+------+--------+----------------+------------------+----------+---------------+---------+------------------------------+----------------------------+
| 172.16.60.220 | 6032 | 1      | ProxySQL-node2 | 1                | 82       | 1226          | 0       | 0                            | 0                          |
| 172.16.60.214 | 6032 | 1      | ProxySQL-node1 | 1                | 80       | 675           | 0       | 0                            | 0                          |
+---------------+------+--------+----------------+------------------+----------+---------------+---------+------------------------------+----------------------------+
2 rows in set (0.001 sec)

MySQL [(none)]> select hostname,port,comment,Uptime_s,last_check_ms from stats_proxysql_servers_metrics;  
+---------------+------+----------------+----------+---------------+
| hostname      | port | comment        | Uptime_s | last_check_ms |
+---------------+------+----------------+----------+---------------+
| 172.16.60.220 | 6032 | ProxySQL-node2 | 102      | 9064          |
| 172.16.60.214 | 6032 | ProxySQL-node1 | 100      | 8526          |
+---------------+------+----------------+----------+---------------+
2 rows in set (0.000 sec)

MySQL [(none)]> select hostname,name,checksum,updated_at from stats_proxysql_servers_checksums;
+---------------+-------------------+--------------------+------------+
| hostname      | name              | checksum           | updated_at |
+---------------+-------------------+--------------------+------------+
| 172.16.60.220 | admin_variables   |                    | 1551109910 |
| 172.16.60.220 | mysql_query_rules | 0x0000000000000000 | 1551109910 |
| 172.16.60.220 | mysql_servers     | 0x0000000000000000 | 1551109910 |
| 172.16.60.220 | mysql_users       | 0x0000000000000000 | 1551109910 |
| 172.16.60.220 | mysql_variables   |                    | 1551109910 |
| 172.16.60.220 | proxysql_servers  | 0x7D769422A4719C2F | 1551109910 |
| 172.16.60.214 | admin_variables   |                    | 1551109910 |
| 172.16.60.214 | mysql_query_rules | 0x0000000000000000 | 1551109910 |
| 172.16.60.214 | mysql_servers     | 0x0000000000000000 | 1551109910 |
| 172.16.60.214 | mysql_users       | 0x0000000000000000 | 1551109910 |
| 172.16.60.214 | mysql_variables   |                    | 1551109910 |
| 172.16.60.214 | proxysql_servers  | 0x7D769422A4719C2F | 1551109910 |
+---------------+-------------------+--------------------+------------+
12 rows in set (0.001 sec)

4) On the first instance, 172.16.60.214, configure MGR read/write splitting and transparent primary failover

Reference: https://www.cnblogs.com/kevingrace/p/10384691.html
 
1) Create the accounts ProxySQL needs on the database side (run on any one of the three MGR nodes; it replicates to the others)
[root@MGR-node1 ~]# mysql -p123456
.........
mysql> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'proxysql';  
Query OK, 0 rows affected (0.07 sec)
  
mysql> GRANT ALL ON * . * TO  'proxysql'@'%';
Query OK, 0 rows affected (0.06 sec)
  
mysql> create user 'sbuser'@'%' IDENTIFIED BY 'sbpass';  
Query OK, 0 rows affected (0.05 sec)
  
mysql> GRANT ALL ON * . * TO 'sbuser'@'%';
Query OK, 0 rows affected (0.08 sec)
  
mysql> FLUSH PRIVILEGES;  
Query OK, 0 rows affected (0.07 sec)
 
2) Create the function and view used to check MGR node state (run on any one of the three MGR nodes; it replicates to the others)
On MGR-node1, create the system view sys.gr_member_routing_candidate_status, which exposes the group-replication monitoring metrics ProxySQL needs.
Download the addition_to_sys.sql script and import it into MySQL on MGR-node1 with the statement below (after running it on MGR-node1, it replicates to the other two nodes).
   
Download: https://pan.baidu.com/s/1bNYHtExy2fmqwvEyQS3sWg
extraction code: wst7
 
Import addition_to_sys.sql
[root@MGR-node1 ~]# mysql -p123456 < /root/addition_to_sys.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
  
The view can now be queried on all three MySQL nodes:
[root@MGR-node1 ~]# mysql -p123456
............
mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.01 sec)
 
3) Add the accounts in proxysql
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...........
MySQL [(none)]> INSERT INTO MySQL_users(username,password,default_hostgroup) VALUES ('proxysql','proxysql',1);
Query OK, 1 row affected (0.000 sec)
   
MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_username'; 
Query OK, 1 row affected (0.001 sec)
   
MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_password';
Query OK, 1 row affected (0.002 sec)
   
MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.006 sec)
   
MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.387 sec)
 
4) Configure proxysql
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
.............
MySQL [(none)]> delete from mysql_servers;
Query OK, 3 rows affected (0.000 sec)
   
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.211',3306);
Query OK, 1 row affected (0.001 sec)
   
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.212',3306);
Query OK, 1 row affected (0.000 sec)
   
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.213',3306);
Query OK, 1 row affected (0.000 sec)
   
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.211',3306);
Query OK, 1 row affected (0.000 sec)
   
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.212',3306);
Query OK, 1 row affected (0.000 sec)
   
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.213',3306);
Query OK, 1 row affected (0.000 sec)
   
MySQL [(none)]> select * from  mysql_servers ;
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.001 sec)
 
Make sure no proxysql read/write-splitting rules are in place (earlier tests configured some here, so delete them to avoid interfering with the tests below).
MySQL [(none)]> delete from mysql_query_rules;
Query OK, 2 rows affected (0.000 sec)
   
MySQL [(none)]> commit;
Query OK, 0 rows affected (0.000 sec)
   
Finally, load the global_variables, mysql_servers and mysql_users tables to RUNTIME, and further persist them to DISK:
MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
   
MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.080 sec)
   
MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.003 sec)
   
MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.463 sec)
   
MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
   
MySQL [(none)]> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.134 sec)
 
Now test that logins through the proxy work (running the command repeatedly lands on different MySQL nodes)
[root@ProxySQL-node1 ~]# mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+
 
5) Configure the scheduler
First, download the check script from https://github.com/ZzzCrazyPig/proxysql_groupreplication_checker
Three scripts are available there:
proxysql_groupreplication_checker.sh: for multi-primary mode; read/write splitting and failover, with multiple writable nodes at the same time;
gr_mw_mode_cheker.sh: for multi-primary mode; read/write splitting and failover, but with only one writable node at a time;
gr_sw_mode_checker.sh: for single-primary mode; read/write splitting and failover;
Since this environment runs in multi-primary mode, proxysql_groupreplication_checker.sh is the right choice.
 
The three scripts are also bundled on Baidu pan: https://pan.baidu.com/s/1lUzr58BSA_U7wmYwsRcvzQ
extraction code: 9rm7
  
Place the downloaded proxysql_groupreplication_checker.sh under /var/lib/proxysql/ and make it executable:
[root@ProxySQL-node ~]# chmod a+x /var/lib/proxysql/proxysql_groupreplication_checker.sh
[root@ProxySQL-node ~]# ll /var/lib/proxysql/proxysql_groupreplication_checker.sh     
-rwxr-xr-x 1 root root 6081 Feb 20 14:25 /var/lib/proxysql/proxysql_groupreplication_checker.sh
  
====================================================================
Note:
the proxysql_groupreplication_checker.sh monitoring script must be downloaded on every ProxySQL node and placed under /var/lib/proxysql on each
 
 
MySQL [(none)]> INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4, arg5) VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');
Query OK, 1 row affected (0.000 sec)
  
MySQL [(none)]> select * from scheduler;
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 0    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
1 row in set (0.000 sec)
  
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
  
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.118 sec)
  
==============================================================================
Meaning of the scheduler columns:
active: 1 enables the scheduler to run the script we provide
interval_ms: invocation period (e.g. 5000 ms = the script runs every 5 s)
filename: path of the script file
arg1~arg5: input parameters passed to the script
  
The parameters of proxysql_groupreplication_checker.sh are:
arg1 is the hostgroup_id for writes
arg2 is the hostgroup_id for reads
arg3 is the number of writers we want active at the same time
arg4 indicates whether the member acting as writer is also a candidate for reads
arg5 is the log file
  
Once the scheduler entry is loaded, the script analyzes the current topology; mysql_servers then shows that only 172.16.60.211 is writable,
while 172.16.60.212 and 172.16.60.213 serve reads.
  
MySQL [(none)]> select * from  mysql_servers ;              // run this a little while after the step above; only then does it show the result below
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
  
Because the scheduler's arg4 is set to 0 here, the writable node is excluded from reads. Let's try arg4=1:
MySQL [(none)]> update scheduler set arg4=1;
Query OK, 1 row affected (0.000 sec)
  
MySQL [(none)]> select * from scheduler;
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 1    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
1 row in set (0.000 sec)
  
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.286 sec)
  
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
  
MySQL [(none)]> select * from  mysql_servers;          // run this a little while after the step above; only then does it show the result below
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
  
With arg4=1, node 172.16.60.211 serves reads as well as writes.
  
For the tests below, set arg4 back to 0:
MySQL [(none)]> update scheduler set arg4=0;
Query OK, 1 row affected (0.000 sec)
  
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.197 sec)
  
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.000 sec)
  
MySQL [(none)]> select * from  mysql_servers;             // run this a little while later; only then does it show the result below
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
  
The gr_member_routing_candidate_status view on each node also shows whether that node is currently healthy;
proxysql reads this view to decide whether the node is usable.
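 
Presumably the checker script issues a query much like the following over its monitoring connection on every check interval (an assumption based on the view's purpose; the actual statement lives inside proxysql_groupreplication_checker.sh):
 
select viable_candidate, read_only, transactions_behind from sys.gr_member_routing_candidate_status;
 
You can also run it manually on a backend node: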
  
  
[root@MGR-node1 ~]# mysql -p123456            
...........
mysql> select * from sys.gr_member_routing_candidate_status\G
*************************** 1. row ***************************
    viable_candidate: YES
           read_only: NO
 transactions_behind: 0
transactions_to_cert: 0
1 row in set (0.00 sec)
  
 
6) Configure read/write splitting
MySQL [(none)]> insert into mysql_query_rules (active, match_pattern, destination_hostgroup, apply) values (1,"^SELECT",2,1);
Query OK, 1 row affected (0.001 sec)
  
MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
  
MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.264 sec)
 
SELECT ... FOR UPDATE statements need to run on group1, so add a rule for them:
MySQL [(none)]> insert into mysql_query_rules(active,match_pattern,destination_hostgroup,apply) values(1,'^SELECT.*FOR UPDATE$',1,1);
Query OK, 1 row affected (0.001 sec)
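 
As with the first rule, the new rule only takes effect after loading it to runtime (and saving it to disk so it survives a restart):
MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;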
  
Now check from the proxysql host or another client machine: SELECT statements consistently go to 172.16.60.212 and 172.16.60.213.
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node2  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node2  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
 
7) Verify read/write splitting on real data
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname" 
+------------+
| @@hostname |
+------------+
| MGR-node2  |
+------------+
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"
+-----+-----------+
| id  | name      |
+-----+-----------+
|   1 | wangshibo |
|   2 | guohuihui |
|  11 | beijing   |
| 100 | anhui     |
+-----+-----------+
  
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "delete from kevin.haha where id=1;"                    
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "delete from kevin.haha where id=2;"
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"              
+-----+---------+
| id  | name    |
+-----+---------+
|  11 | beijing |
| 100 | anhui   |
+-----+---------+
  
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e 'insert into kevin.haha values(21,"zhongguo"),(22,"xianggang"),(23,"taiwan");'
  
[root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"
+-----+-----------+
| id  | name      |
+-----+-----------+
|  11 | beijing   |
|  21 | zhongguo  |
|  22 | xianggang |
|  23 | taiwan    |
| 100 | anhui     |
+-----+-----------+
  
Finally, check the read/write split statistics from the proxysql admin interface:
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
..........
MySQL [(none)]> select hostgroup,username,digest_text,count_star from stats_mysql_query_digest;
+-----------+----------+------------------------------------------------------+------------+
| hostgroup | username | digest_text                                          | count_star |
+-----------+----------+------------------------------------------------------+------------+
| 1         | proxysql | insert into kevin.haha values(?,?),(?,?),(?,?)       | 1          |
| 1         | proxysql | insert into kevin.haha values(?,yangyang)            | 1          |
| 1         | proxysql | delete from kevin.haha where id=?                    | 2          |
| 1         | proxysql | select @@version_comment limit ?                     | 120        |
| 1         | proxysql | KILL ?                                               | 8          |
| 1         | proxysql | select @@hostname                                    | 11         |
| 1         | proxysql | KILL QUERY ?                                         | 10         |
| 2         | proxysql | select @@hostname, sleep(?)                          | 53         |
| 1         | proxysql | insert into kevin.haha values(?,yangyang),(?,shikui) | 2          |
| 1         | proxysql | show databases                                       | 1          |
| 2         | proxysql | select @@hostname                                    | 31         |
| 2         | proxysql | select * from kevin.haha                             | 4          |
| 1         | proxysql | insert into kevin.haha values(?,wawa)                | 3          |
+-----------+----------+------------------------------------------------------+------------+
13 rows in set (0.002 sec)
  
MySQL [(none)]> select * from  mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
  
From the above we can see:
Write operations all went to group1, i.e. to the 172.16.60.211 node.
Read operations all went to group2, i.e. to the 172.16.60.212 and 172.16.60.213 nodes.
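 
To spot-check how a particular statement pattern was routed, the same digest table can be filtered:
MySQL [(none)]> select hostgroup, digest_text, count_star from stats_mysql_query_digest where digest_text like 'select%' order by count_star desc;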
 
8) Make failover transparent to applications
In the read/write splitting rules above, I set 172.16.60.211 as the writable node and 172.16.60.212/172.16.60.213 as read-only nodes.
If 172.16.60.211 now switches to read-only mode, can applications transparently write through another node?
   
Now manually switch 172.16.60.211 to read-only mode:
[root@MGR-node1 ~]# mysql -p123456
........
mysql> set global read_only=1;
Query OK, 0 rows affected (0.00 sec)
   
Then observe the state of mysql_servers: 172.16.60.212 in group1 was automatically set to ONLINE, and 172.16.60.211 and
172.16.60.213 in group2 became ONLINE, i.e. 172.16.60.212 is now the writable node and the other two nodes are read-only.
   
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
........
MySQL [(none)]> select * from  mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.001 sec)
   
The simulated connections also show SELECT statements now going to 172.16.60.211 and 172.16.60.213. (Leave a short interval between attempts; firing queries too quickly may hit the same read node.)
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node3  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+
[root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| MGR-node1  |
+------------+
   
Then, after switching 172.16.60.211 back to writable mode, mysql_servers recovers as well.
[root@MGR-node1 ~]# mysql -p123456
........
mysql> set global read_only=0;
Query OK, 0 rows affected (0.00 sec)
   
Observe the state of mysql_servers again:
[root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
.........
MySQL [(none)]> select * from  mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
 
Further testing shows that when the 172.16.60.211 node stops group replication (stop group_replication) or goes down entirely (the mysql service dies), the mysql_servers table switches over to a new node correctly.
Once 172.16.60.211 recovers and rejoins the group, mysql_servers correctly sets 172.16.60.211 back to ONLINE.
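 
For reference, this failover test can be reproduced directly on the MGR node (a sketch; it assumes the node can rejoin the group without a full recovery):
[root@MGR-node1 ~]# mysql -p123456
mysql> stop group_replication;       -- proxysql switches mysql_servers to a new writer within the 10s scheduler interval
mysql> start group_replication;      -- once the node rejoins, it returns to ONLINE in mysql_servers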

5) Observe data synchronization between the two instances of the ProxySQL cluster configured above (172.16.60.214 and 172.16.60.220)

1) Log in to the proxysql admin port on the 172.16.60.220 instance node.
  
You will find that all the mysql rules configured on the 172.16.60.214 instance have been synchronized here:
[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
..........
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
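 
Each instance also tracks a per-module checksum for every peer, which is a quick way to confirm that both nodes have converged. This reads the stats_proxysql_servers_checksums table (output omitted here, as it was not captured in this session):
MySQL [(none)]> select hostname, port, name, version, epoch, checksum from stats_proxysql_servers_checksums;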
  
2) Since the cluster configuration does not cover the scheduler table, the scheduler has to be configured manually on the 172.16.60.220 node.
The proxysql_groupreplication_checker.sh script was downloaded to /var/lib/proxysql in advance:
[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
..........
MySQL [(none)]> INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4, arg5) VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');
Query OK, 1 row affected (0.000 sec)
  
MySQL [(none)]> select * from scheduler;
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
| 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 0    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
+----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
1 row in set (0.000 sec)
  
MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
Query OK, 0 rows affected (0.001 sec)
  
MySQL [(none)]> SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.099 sec)
  
With this, the second instance node 172.16.60.220 can also fail over mysql backend nodes transparently.
  
This completes a simple "1+1" proxysql cluster. Read/write splitting rules added on either of the two instance nodes
(172.16.60.214 or 172.16.60.220) are synchronized to the other instance node in the cluster!
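 
If a change ever fails to propagate, first verify that each node lists both cluster members in its proxysql_servers table:
MySQL [(none)]> select * from proxysql_servers;
Both 172.16.60.214:6032 and 172.16.60.220:6032 should appear on each node.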
  
3) For example, test adding data on the cluster's second instance node, 172.16.60.220:
[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
............
............
# existing data
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.001 sec)
  
# insert new test data
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.202',3306);
Query OK, 1 row affected (0.000 sec)
  
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.202',3306);
Query OK, 1 row affected (0.000 sec)
  
# persist to disk and load into the runtime
MySQL [(none)]> save mysql servers to disk;
Query OK, 0 rows affected (0.197 sec)
  
MySQL [(none)]> load mysql servers to runtime;
Query OK, 0 rows affected (0.006 sec)
  
4) Then observe the data on the other node, 172.16.60.214:
[root@ProxySQL-node1 ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
............
............
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
8 rows in set (0.001 sec)
  
MySQL [(none)]> select * from runtime_mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
8 rows in set (0.003 sec)
  
# The test data inserted on the 172.16.60.220 node has been propagated to both the memory and runtime layers of the 172.16.60.214 instance.
# Note: the diff check is performed against runtime; changes that only touch memory and disk do not trigger synchronization.
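 
In other words, of the three steps used above, only the last one changes the runtime checksum that the peers watch:
MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.202',3306);   -- changes memory only, not propagated
MySQL [(none)]> save mysql servers to disk;        -- persists to disk, still not propagated
MySQL [(none)]> load mysql servers to runtime;     -- bumps the runtime checksum, peers pick it up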
  
5) Check the ProxySQL log on the 172.16.60.214 instance node:
[root@ProxySQL-node1 ~]# tail -10000 /var/lib/proxysql/proxysql.log
...........
...........
# a new configuration checksum for mysql_servers is detected from a peer (a node also monitors itself when it is listed in proxysql_servers, which is why the peer shown here is 172.16.60.214)
2019-02-25 15:31:24 [INFO] Cluster: detected a new checksum for mysql_servers from peer 172.16.60.214:6032, version 841, epoch 1551079884, checksum 0x8C28F2C5130ACBAE . Not syncing yet ...
# the received checksum, version, and epoch are compared with the local version to decide whether to sync
2019-02-25 15:31:24 [INFO] Cluster: checksum for mysql_servers from peer 172.16.60.214:6032 matches with local checksum 0x8C28F2C5130ACBAE , we won't sync.
..........
..........
# fetch the changed configuration from the remote peer
2019-02-25 15:32:26 [INFO] Cluster: Fetching MySQL Servers from peer 172.16.60.220:6032 completed
2019-02-25 15:32:26 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.16.60.220:6032 before proceessing
# after fetching, compute the checksum locally and compare it with the checksum requested from the peer
2019-02-25 15:32:26 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.16.60.220:6032 successful. Checksum: 0xC18FFC0511F726C9
...........
...........
# start writing the mysql_servers table
2019-02-25 15:32:26 [INFO] Cluster: Writing mysql_servers table
2019-02-25 15:32:26 [INFO] Cluster: Writing mysql_replication_hostgroups table
# load the configuration just received (stored in memory) into the runtime, then save it to disk
2019-02-25 15:32:47 [INFO] Cluster: Loading to runtime MySQL Servers from peer 172.16.60.220:6032
2019-02-25 15:32:47 [INFO] Cluster: Saving to disk MySQL Servers from peer 172.16.60.220:6032
...........
...........
# first dump the previous local configuration
2019-02-25 15:36:13 [INFO] Dumping mysql_servers
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| hostgroup_id | hostname      | port | weight | status | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment | mem_pointer     |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| 1            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134116992 |
| 2            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117760 |
| 2            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117632 |
| 2            | 172.16.60.211 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117504 |
| 1            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117376 |
| 1            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117248 |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
...................
...................
# then dump the incoming configuration from the peer
2019-02-25 15:36:13 [INFO] Dumping mysql_servers_incoming
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | weight | status | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2019-02-25 15:36:13 [INFO] New mysql_replication_hostgroups table
2019-02-25 15:36:13 [INFO] New mysql_group_replication_hostgroups table
2019-02-25 15:36:13 [INFO] Dumping current MySQL Servers structures for hostgroup ALL
HID: 1 , address: 172.16.60.211 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 1 , address: 172.16.60.212 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 1 , address: 172.16.60.213 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 1 , address: 172.16.60.202 , port: 3306 , weight: 1 , status: OFFLINE_SOFT , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.211 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.212 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.213 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 2 , address: 172.16.60.202 , port: 3306 , weight: 1 , status: OFFLINE_SOFT , max_connections: 1000 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
# finally dump the local configuration after the update
2019-02-25 15:36:13 [INFO] Dumping mysql_servers
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| hostgroup_id | hostname      | port | weight | status | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment | mem_pointer     |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
| 1            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134116992 |
| 2            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         | 139810132824192 |
| 2            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117760 |
| 2            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117632 |
| 2            | 172.16.60.211 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117504 |
| 1            | 172.16.60.202 | 3306 | 1      | 2      | 0           | 1000            | 0                   | 0       | 0              |         | 139810132824320 |
| 1            | 172.16.60.213 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117376 |
| 1            | 172.16.60.212 | 3306 | 1      | 0      | 0           | 1000            | 0                   | 0       | 0              |         | 139810134117248 |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+-----------------+
2019-02-25 15:36:13 [INFO] Cluster: detected a new checksum for mysql_servers from peer 172.16.60.214:6032, version 1031, epoch 1551080173, checksum 0x9715B5645359B3BD . Not syncing yet ...
2019-02-25 15:36:13 [INFO] Cluster: checksum for mysql_servers from peer 172.16.60.214:6032 matches with local checksum 0x9715B5645359B3BD , we won't sync.
  
6) Since the data inserted on the 172.16.60.220 instance above was only for testing,
delete it on the 172.16.60.214 instance; the deletion is synchronized to 172.16.60.220 in the same way (deleting it on 172.16.60.220 would equally sync to the other node).
[root@ProxySQL-node1 ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
..................
..................
MySQL [(none)]> select * from mysql_servers;      
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.202 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
8 rows in set (0.000 sec)
  
MySQL [(none)]> delete from mysql_servers where hostname="172.16.60.202";
Query OK, 2 rows affected (0.000 sec)
  
MySQL [(none)]> save mysql servers to disk;                            
Query OK, 0 rows affected (0.233 sec)
  
MySQL [(none)]> load mysql servers to runtime;
Query OK, 0 rows affected (0.004 sec)
  
  
7) Check on the 172.16.60.220 instance: the deletion has been synchronized over as well
[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)
  
The proxysql log on the 172.16.60.220 instance can be checked in the same way:
[root@ProxySQL-node2 ~]# tail -f /var/lib/proxysql/proxysql.log
.............
.............

==========================================================================================
Note one problem above:
on the second instance node 172.16.60.220, every server's "status" shows "OFFLINE_SOFT". Check its proxysql.log:
[root@ProxySQL-node2 ~]# tail -f /var/lib/proxysql/proxysql.log|grep -i error|grep -v 172.16.60.214
2019-02-26 00:27:44 MySQL_Monitor.cpp:408:monitor_connect_thread(): [ERROR] Server 172.16.60.211:3306 is returning "Access denied" for monitoring user
2019-02-26 00:27:44 MySQL_Monitor.cpp:408:monitor_connect_thread(): [ERROR] Server 172.16.60.212:3306 is returning "Access denied" for monitoring user
2019-02-26 00:27:44 MySQL_Monitor.cpp:408:monitor_connect_thread(): [ERROR] Server 172.16.60.213:3306 is returning "Access denied" for monitoring user

[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 10;
+---------------+------+------------------+-------------------------+------------------------------------------------------------------------+
| hostname      | port | time_start_us    | connect_success_time_us | connect_error                                                          |
+---------------+------+------------------+-------------------------+------------------------------------------------------------------------+
| 172.16.60.213 | 3306 | 1551112664169293 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.212 | 3306 | 1551112664161534 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.211 | 3306 | 1551112664153844 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.213 | 3306 | 1551112604169034 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.212 | 3306 | 1551112604161305 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.211 | 3306 | 1551112604153591 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.213 | 3306 | 1551112544169298 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
| 172.16.60.212 | 3306 | 1551112544161558 | 0                       | Access denied for user 'monitor'@'172.16.60.220' (using password: YES) |
+---------------+------+------------------+-------------------------+------------------------------------------------------------------------+
10 rows in set (0.000 sec)

The error log shows a privilege problem: proxysql's monitoring user does not have sufficient privileges to read from the backends. The fix is as follows:

MySQL [(none)]> select * from global_variables;
.........
.........
| mysql-monitor_username                              | monitor                          |
| mysql-monitor_password                              | monitor                          |
..........
..........

MySQL [(none)]> select * from MySQL_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| proxysql | *BF27B4C7AAD278126E228AA8427806E870F64F39 | 1      | 0       | 1                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
| proxysql | *BF27B4C7AAD278126E228AA8427806E870F64F39 | 1      | 0       | 1                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
2 rows in set (0.000 sec)

It turns out the proxysql account synchronized to the second instance node 172.16.60.220 had never been applied to the monitor credentials, which were still the defaults!
[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_username';
Query OK, 1 row affected (0.002 sec)

MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_password';
Query OK, 1 row affected (0.002 sec)

MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.006 sec)

MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
Query OK, 0 rows affected (0.418 sec)

MySQL [(none)]> select * from global_variables;
..........
..........
| mysql-monitor_username                              | proxysql                         |
| mysql-monitor_password                              | proxysql                         |
..........
..........
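
For the monitor connection to succeed, the account must of course also exist on every MySQL backend with enough privileges to connect and to read the sys view used by the checker script. A sketch of the grants on the MGR nodes (the host mask and exact privilege list are assumptions for illustration):

mysql> CREATE USER IF NOT EXISTS 'proxysql'@'172.16.60.%' IDENTIFIED BY 'proxysql';
mysql> GRANT USAGE, REPLICATION CLIENT ON *.* TO 'proxysql'@'172.16.60.%';
mysql> GRANT SELECT ON sys.* TO 'proxysql'@'172.16.60.%';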

Checking the server status on the 172.16.60.220 instance node again, the servers now show "ONLINE":
[root@ProxySQL-node2 ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
...............
...............
MySQL [(none)]> select * from mysql_servers;
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
6 rows in set (0.000 sec)

This gives us a simple two-node ProxySQL Cluster in which configuration is synchronized automatically between the nodes. The final step is to combine it with Keepalived, using a floating VIP to make failover between the two ProxySQL nodes invisible to applications: expose a single VIP, and configure a health-check script for the proxysql service in keepalived.conf, so that when a host goes down or the proxysql service dies, the VIP floats to the healthy node and the proxy layer keeps serving without interruption.
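
A minimal keepalived.conf sketch for the MASTER node (the VIP 172.16.60.221, the interface name eth0, and the check-script path are assumptions for illustration; the BACKUP node would use state BACKUP and a lower priority):

vrrp_script chk_proxysql {
    script "/etc/keepalived/check_proxysql.sh"   # e.g. exit non-zero when port 6033 stops answering
    interval 2
    fall 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    track_script {
        chk_proxysql
    }
    virtual_ipaddress {
        172.16.60.221
    }
}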
