MySQL Orchestrator Automatic Failover + VIP Switching

Published by 老楊伏櫪 on 2021-07-27

Contents

Orchestrator Overall Architecture

  Test Environment

Orchestrator Configuration in Detail

  Passwordless SSH

  /etc/hosts

  visudo

  /etc/orchestrator.conf.json

  orch_hook.sh

  orch_vip.sh

MySQL Server Configuration

Failover Test

  Starting orchestrator

  Logging in to the Web UI and Discovering Instances

  Before the Switch

  After the Switch

  How Orchestrator Detects a Master Failure

Summary

References

 

Orchestrator is a very popular MySQL replication management tool these days. There is plenty of material about it, but something is always missing.

Namely, the steps are never detailed enough: follow the blog posts and many of them simply do not work. Honestly, after reading countless posts, it still took me a week of tinkering to get it running.

In particular, Orchestrator can itself use MySQL as its backend store, which easily confuses readers (it becomes hard to tell which parameters belong to Orchestrator's backend and which are for connecting to the monitored MySQL instances).

So in this post I use SQLite as the backend store, and list every step in detail.

Orchestrator Overall Architecture

A picture is worth a thousand words; the figure above shows the Orchestrator architecture.

In the figure, SQLite/MySQL is Orchestrator's backend store (it holds state about the monitored MySQL replication instances); you can choose either MySQL or SQLite.

/etc/orchestrator.conf.json is the configuration file, read when Orchestrator starts.

The lower part of the figure shows the monitored MySQL instances. In this example I set up one master with two replicas; in practice, Orchestrator can monitor hundreds or thousands of MySQL replication clusters.

Test Environment

orch 192.168.56.130

host01 192.168.56.103

host02 192.168.56.104

host03 192.168.56.105

Orchestrator Configuration in Detail

Passwordless SSH

Follow the link below. Here I created an account named orch; passwordless login is needed so the shell scripts can SSH into the MySQL servers to move the VIP.

 https://www.cnblogs.com/chasetimeyang/p/15064507.html
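The linked post boils down to the usual key-distribution steps; a minimal sketch (assuming the orch account already exists on all four machines):

```shell
# On the orch host, as the orch user. Hostnames are the ones from the test
# environment above; ssh-copy-id will prompt for each host's password once.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in host01 host02 host03; do
    ssh-copy-id "orch@$h"
done
# Verify: this must print the remote hostname without asking for a password,
# or the hook scripts below will hang waiting for input.
ssh -o BatchMode=yes orch@host01 hostname
```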

/etc/hosts

These entries are likewise used by the shell scripts.

[orch@orch orchestrator]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.103 host01
192.168.56.104 host02
192.168.56.105 host03
192.168.56.130 orch

 

visudo

Because my VIP failover script runs privileged commands through sudo, grant the orch user passwordless sudo as follows:

$visudo

orch ALL=(ALL) NOPASSWD: ALL
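Both prerequisites of the hook scripts can be checked in one pass from the orch host: SSH must not prompt for a password, and sudo on the remote side must not either. A quick hypothetical check loop:

```shell
# From the orch host: ssh with BatchMode fails instead of prompting, and
# sudo -n fails instead of asking for a password, so any remaining
# interactive step shows up as a failure here.
for h in host01 host02 host03; do
    if ssh -o BatchMode=yes "orch@$h" "sudo -n ip -brief address show" >/dev/null; then
        echo "$h: OK"
    else
        echo "$h: ssh or sudo still prompts"
    fi
done
```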

 

/etc/orchestrator.conf.json

The following two parameters are the username and password for connecting to the monitored MySQL instances:
"MySQLTopologyUser": "orchestrator",
"MySQLTopologyPassword": "orc_topology_password",

The following selects SQLite as the backend store:
"BackendDB": "sqlite",
"SQLite3DataFile": "/usr/local/orchestrator/orchestrator.sqlite3",

The following parameter suppresses repeated detection of the same failure for the given number of minutes (the default is 60); repeated recoveries on a cluster are throttled separately by RecoveryPeriodBlockSeconds below.
"FailureDetectionPeriodBlockMinutes": 5,

The following means recovery applies to any cluster. (You can instead configure matching rules: clusters that match fail over automatically, while the rest must be failed over manually when a problem occurs.)
"RecoverMasterClusterFilters": [
"*"
],
"RecoverIntermediateMasterClusterFilters": [
"*"
], 

Whether Orchestrator automatically finalizes the promotion on the new master (detaching replication and clearing read_only):
"ApplyMySQLPromotionAfterMasterFailover": true,

These scripts run after a failover:
"PostFailoverProcesses": [
"echo '(for all types) Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log",
"/home/orch/orch_hook.sh {failureType} {failureClusterAlias} {failedHost} {successorHost} >> /tmp/orch.log"
],


[orch@orch ~]$ cat /etc/orchestrator.conf.json
{
"Debug": true,
"EnableSyslog": false,
"ListenAddress": ":3000",
"MySQLTopologyUser": "orchestrator",
"MySQLTopologyPassword": "orc_topology_password",
"MySQLTopologyCredentialsConfigFile": "",
"MySQLTopologySSLPrivateKeyFile": "",
"MySQLTopologySSLCertFile": "",
"MySQLTopologySSLCAFile": "",
"MySQLTopologySSLSkipVerify": true,
"MySQLTopologyUseMutualTLS": false,
"BackendDB": "sqlite",
"SQLite3DataFile": "/usr/local/orchestrator/orchestrator.sqlite3",
"MySQLConnectTimeoutSeconds": 1,
"DefaultInstancePort": 3306,
"DiscoverByShowSlaveHosts": true,
"InstancePollSeconds": 5,
"DiscoveryIgnoreReplicaHostnameFilters": [
"a_host_i_want_to_ignore[.]example[.]com",
".*[.]ignore_all_hosts_from_this_domain[.]example[.]com",
"a_host_with_extra_port_i_want_to_ignore[.]example[.]com:3307"
],
"UnseenInstanceForgetHours": 240,
"SnapshotTopologiesIntervalHours": 0,
"InstanceBulkOperationsWaitTimeoutSeconds": 10,
"HostnameResolveMethod": "default",
"MySQLHostnameResolveMethod": "@@hostname",
"SkipBinlogServerUnresolveCheck": true,
"ExpiryHostnameResolvesMinutes": 60,
"RejectHostnameResolvePattern": "",
"ReasonableReplicationLagSeconds": 10,
"ProblemIgnoreHostnameFilters": [],
"VerifyReplicationFilters": false,
"ReasonableMaintenanceReplicationLagSeconds": 20,
"CandidateInstanceExpireMinutes": 1,
"AuditLogFile": "",
"AuditToSyslog": false,
"RemoveTextFromHostnameDisplay": ".mydomain.com:3306",
"ReadOnly": false,
"AuthenticationMethod": "",
"HTTPAuthUser": "",
"HTTPAuthPassword": "",
"AuthUserHeader": "",
"PowerAuthUsers": [
"*"
],
"ClusterNameToAlias": {
"127.0.0.1": "test suite"
},
"ReplicationLagQuery": "",
"DetectClusterAliasQuery": "SELECT SUBSTRING_INDEX(@@hostname, '.', 1)",
"DetectClusterDomainQuery": "",
"DetectInstanceAliasQuery": "",
"DetectPromotionRuleQuery": "",
"DataCenterPattern": "[.]([^.]+)[.][^.]+[.]mydomain[.]com",
"PhysicalEnvironmentPattern": "[.]([^.]+[.][^.]+)[.]mydomain[.]com",
"PromotionIgnoreHostnameFilters": [],
"DetectSemiSyncEnforcedQuery": "",
"ServeAgentsHttp": false,
"AgentsServerPort": ":3001",
"AgentsUseSSL": false,
"AgentsUseMutualTLS": false,
"AgentSSLSkipVerify": false,
"AgentSSLPrivateKeyFile": "",
"AgentSSLCertFile": "",
"AgentSSLCAFile": "",
"AgentSSLValidOUs": [],
"UseSSL": false,
"UseMutualTLS": false,
"SSLSkipVerify": false,
"SSLPrivateKeyFile": "",
"SSLCertFile": "",
"SSLCAFile": "",
"SSLValidOUs": [],
"URLPrefix": "",
"StatusEndpoint": "/api/status",
"StatusSimpleHealth": true,
"StatusOUVerify": false,
"AgentPollMinutes": 60,
"UnseenAgentForgetHours": 6,
"StaleSeedFailMinutes": 60,
"SeedAcceptableBytesDiff": 8192,
"PseudoGTIDPattern": "",
"PseudoGTIDPatternIsFixedSubstring": false,
"PseudoGTIDMonotonicHint": "asc:",
"DetectPseudoGTIDQuery": "",
"BinlogEventsChunkSize": 10000,
"SkipBinlogEventsContaining": [],
"ReduceReplicationAnalysisCount": true,
"FailureDetectionPeriodBlockMinutes": 5,
"FailMasterPromotionOnLagMinutes": 0,
"RecoveryPeriodBlockSeconds": 3600,
"RecoveryIgnoreHostnameFilters": [],
"RecoverMasterClusterFilters": [
"*"
],
"RecoverIntermediateMasterClusterFilters": [
"*"
],
"OnFailureDetectionProcesses": [
"echo 'Detected {failureType} on {failureCluster}. Affected replicas: {countSlaves}' >> /tmp/recovery.log"
],
"PreGracefulTakeoverProcesses": [
"echo 'Planned takeover about to take place on {failureCluster}. Master will switch to read_only' >> /tmp/recovery.log"
],
"PreFailoverProcesses": [
"echo 'Will recover from {failureType} on {failureCluster}' >> /tmp/recovery.log"
],
"PostFailoverProcesses": [
"echo '(for all types) Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log",
"/home/orch/orch_hook.sh {failureType} {failureClusterAlias} {failedHost} {successorHost} >> /tmp/orch.log"
],
"PostUnsuccessfulFailoverProcesses": [],
"PostMasterFailoverProcesses": [
"echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Promoted: {successorHost}:{successorPort}' >> /tmp/recovery.log"
],
"PostIntermediateMasterFailoverProcesses": [
"echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
],
"PostGracefulTakeoverProcesses": [
"echo 'Planned takeover complete' >> /tmp/recovery.log"
],
"CoMasterRecoveryMustPromoteOtherCoMaster": true,
"DetachLostSlavesAfterMasterFailover": true,
"ApplyMySQLPromotionAfterMasterFailover": true,
"PreventCrossDataCenterMasterFailover": false,
"PreventCrossRegionMasterFailover": false,
"MasterFailoverDetachReplicaMasterHost": false,
"MasterFailoverLostInstancesDowntimeMinutes": 0,
"PostponeReplicaRecoveryOnLagMinutes": 0,
"OSCIgnoreHostnameFilters": [],
"GraphiteAddr": "",
"GraphitePath": "",
"GraphiteConvertHostnameDotsToUnderscores": true,
"ConsulAddress": "",
"ConsulAclToken": "",
"ConsulKVStoreProvider": "consul"
}

 

orch_hook.sh

#!/bin/bash
isitdead=$1
#cluster=$2
oldmaster=$3
newmaster=$4

ssh=$(which ssh)

logfile="/home/orch/orch_hook.log"
interface='enp0s3'
user=orch
#VIP=$($ssh -tt ${user}@${oldmaster} "sudo ip address show dev enp0s3|grep -w 'inet'|tail -n 1|awk '{print \$2}'|awk -F/ '{print \$1}'")
VIP='192.168.56.200'
VIP_TEMP=$($ssh -tt ${user}@${oldmaster} "sudo ip address|sed -nr 's#^.*inet (.*)/32.*#\1#gp'")
#remove '\r' at the end $'192.168.56.200\r'
VIP_TEMP=$(echo $VIP_TEMP|awk -F"\\r" '{print $1}')

if [ ${#VIP_TEMP} -gt 0 ]; then
    VIP=$VIP_TEMP
fi
echo ${VIP}
echo ${interface}

if [[ $isitdead == "DeadMaster" ]]; then
        if [ -n "${VIP}" ] ; then

                echo $(date)
                echo "Recovering from: $isitdead"
                echo "New master is: $newmaster"
                echo "/home/orch/orch_vip.sh -d 1 -n $newmaster -i ${interface} -I $VIP -u ${user} -o ${oldmaster}"
                /home/orch/orch_vip.sh -d 1 -n $newmaster -i ${interface} -I $VIP -u ${user} -o ${oldmaster}
        else

                echo "Cluster does not exist!" | tee $logfile

        fi
fi
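The hook can be exercised by hand before trusting it in a real failover; the four arguments mirror the {failureType} {failureClusterAlias} {failedHost} {successorHost} template in PostFailoverProcesses (the cluster alias is accepted but unused by this script):

```shell
# Simulate a dead-master event: the VIP should move from host03 to host01.
# The script prints the resolved VIP, the interface, and the orch_vip.sh
# command line it is about to run, so a wrong VIP is visible immediately.
/home/orch/orch_hook.sh DeadMaster mycluster host03 host01
```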

  

orch_vip.sh

#!/bin/bash

function usage {
  cat << EOF
 usage: $0 [-h] [-d master is dead] [-o old master] [-s ssh options] [-n new master] [-i interface] [-I virtual IP] [-u SSH user]

 OPTIONS:
    -h        Show this message
    -o string Old master hostname or IP address
    -d int    1 if the master is dead, otherwise 0
    -s string SSH options
    -n string New master hostname or IP address
    -i string Interface, e.g. eth0:1
    -I string Virtual IP
    -u string SSH user
EOF

}

while getopts ho:d:s:n:i:I:u: flag; do
  case $flag in
    o)
      orig_master="$OPTARG";
      ;;
    d)
      isitdead="${OPTARG}";
      ;;
    s)
      ssh_options="${OPTARG}";
      ;;
    n)
      new_master="$OPTARG";
      ;;
    i)
      interface="$OPTARG";
      ;;
    I)
      vip="$OPTARG";
      ;;
    u)
      ssh_user="$OPTARG";
      ;;
    h)
      usage;
      exit 0;
      ;;
    *)
      usage;
      exit 1;
      ;;
  esac
done


if [ $OPTIND -eq 1 ]; then
    echo "No options were passed";
    usage;
fi

shift $(( OPTIND - 1 ));

# discover commands from our path
ssh=$(which ssh)
arping=$(which arping)
ip2util=$(which ip)
#ip2util='ip'

# command for adding our vip
cmd_vip_add="sudo -n $ip2util address add $vip dev $interface"
# command for deleting our vip
cmd_vip_del="sudo -n $ip2util address del $vip/32 dev $interface"
# command for discovering if our vip is enabled
cmd_vip_chk="sudo -n $ip2util address show dev $interface to ${vip%/*}/32"
# command for sending gratuitous arp to announce ip move
cmd_arp_fix="sudo -n $arping -c 1 -I ${interface} ${vip%/*}"
# command for sending gratuitous arp to announce ip move on current server
#cmd_local_arp_fix="sudo -n $arping -c 1 -I ${interface} ${vip%/*}"
cmd_local_arp_fix="$arping -c 1 -I ${interface} ${vip%/*}"

vip_stop() {
    rc=0

    echo "$ssh ${ssh_options} -tt ${ssh_user}@${orig_master} \
       \"[ -n \"\$(${cmd_vip_chk})\" ] && ${cmd_vip_del} && \
       sudo -n ${ip2util} route flush cache || [ -z \"\$(${cmd_vip_chk})\" ]\""

    # ensure the vip is removed
    $ssh ${ssh_options} -tt ${ssh_user}@${orig_master} \
       "[ -n \"\$(${cmd_vip_chk})\" ] && ${cmd_vip_del} && \
       sudo -n ${ip2util} route flush cache || [ -z \"\$(${cmd_vip_chk})\" ]"
    rc=$?
    return $rc
}

vip_start() {
    rc=0

    # ensure the vip is added
    # this command should exit with failure if we are unable to add the vip
    # if the vip already exists always exit 0 (whether or not we added it)
    echo "$ssh ${ssh_options} -tt ${ssh_user}@${new_master} \
     \"[ -z \"\$(${cmd_vip_chk})\" ] && ${cmd_vip_add} && ${cmd_arp_fix} || [ -n \"\$(${cmd_vip_chk})\" ]\""

    $ssh ${ssh_options} -tt ${ssh_user}@${new_master} \
     "[ -z \"\$(${cmd_vip_chk})\" ] && ${cmd_vip_add} && ${cmd_arp_fix} || [ -n \"\$(${cmd_vip_chk})\" ]"
    rc=$?
    echo "vip started"
    #$cmd_local_arp_fix
    return $rc
}

vip_status() {
    $arping -c 1 -I ${interface} ${vip%/*}
    echo "$arping -c 1 -I ${interface} ${vip%/*}"
    if ping -c 1 -W 1 "$vip"; then
        return 0
    else
        return 1
    fi
}

if [[ $isitdead == 0 ]]; then
    echo "Online failover"
    if vip_stop; then
        if vip_start; then
            echo "$vip is moved to $new_master."
        else
            echo "Can't add $vip on $new_master!"
            exit 1
        fi
    else
        echo $rc
        echo "Can't remove the $vip from orig_master!"
        exit 1
    fi
elif [[ $isitdead == 1 ]]; then
    echo "Master is dead, failover"
    # make sure the vip is not available
    if vip_status; then
        if vip_stop; then
            echo "$vip is removed from orig_master."
        else
            echo $rc
            echo "Couldn't remove $vip from orig_master."
            exit 1
        fi
    fi

    if vip_start; then
          echo "$vip is moved to $new_master."

    else
          echo "Can't add $vip on $new_master!"
          exit 1
    fi
else
    echo "Wrong argument, the master is dead or live?"

fi

  

 

MySQL Server Configuration

Configure /etc/my.cnf on the master and on each replica. All parameters can be identical except server_id:

every MySQL server must have a distinct server_id.

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

log-error=/var/lib/mysql/mysqld.log
pid-file=/var/lib/mysql/mysqld.pid

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
port = 3306
#GTID:
server_id=135 # server id; must be unique on every server
gtid_mode=on # enable GTID mode
enforce_gtid_consistency=on # enforce GTID consistency; some CREATE TABLE forms are rejected when on
#binlog
log_bin=binlog
log-slave-updates=1
binlog_format=row # strongly recommended; other formats can cause data inconsistency
#relay log
skip_slave_start=1
max_connect_errors=1000
default_authentication_plugin = 'mysql_native_password'

slave_net_timeout = 4

 

 

Connect each replica to the master and start replication:

mysql> CHANGE MASTER TO MASTER_HOST='master', MASTER_PORT=3306, MASTER_USER='repl',
       MASTER_PASSWORD='Xiaopang*803', MASTER_AUTO_POSITION=1, MASTER_HEARTBEAT_PERIOD=2,
       MASTER_CONNECT_RETRY=1, MASTER_RETRY_COUNT=86400;

mysql> START SLAVE;
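Before handing the topology to Orchestrator, it is worth confirming on each replica that both replication threads are actually running:

```shell
# On host01 and host02: both threads must report Yes.
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running:'
# expect:
#   Slave_IO_Running: Yes
#   Slave_SQL_Running: Yes
```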

 

Once replication is running cleanly, run the following on the master to create the orchestrator user and grant its privileges (this is the account Orchestrator uses to connect to the monitored instances, configured above):

CREATE USER 'orchestrator'@'%' IDENTIFIED BY 'orc_topology_password';
GRANT SUPER, PROCESS, REPLICATION SLAVE, REPLICATION CLIENT, RELOAD ON *.* TO 'orchestrator'@'%';
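A quick check from the orch host that the new account works end to end; this assumes host03 is the current master, as in the failover test below. DiscoverByShowSlaveHosts in the config relies on the master being able to list its replicas:

```shell
# The monitoring account must log in over the network and see the replicas.
mysql -h host03 -u orchestrator -p'orc_topology_password' -e "SHOW SLAVE HOSTS;"
```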

 

Failover Test

Starting orchestrator

[orch@orch orchestrator]$ pwd
/usr/local/orchestrator


[root@orch orchestrator]# ./orchestrator --debug --config=/etc/orchestrator.conf.json http

Logging in to the Web UI and Discovering Instances

My orchestrator host's address is 192.168.56.130:

http://192.168.56.130:3000/

When discovering a new instance, enter the current master's IP and port.
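Discovery can also be scripted against Orchestrator's HTTP API instead of the web form; once one instance is discovered, Orchestrator crawls the rest of the topology on its own:

```shell
# Register the current master with Orchestrator...
curl -s http://192.168.56.130:3000/api/discover/host03/3306
# ...then list the clusters it now knows about.
curl -s http://192.168.56.130:3000/api/clusters
```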

  

Before the Switch

  

Manually add the VIP 192.168.56.200 to host03:

[root@host03 ~]# ip addr add dev enp0s3 192.168.56.200/32

Check it:

[root@host03 ~]# ip addr show dev enp0s3

enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 08:00:27:9a:25:16 brd ff:ff:ff:ff:ff:ff

    inet 192.168.56.105/24 brd 192.168.56.255 scope global enp0s3

       valid_lft forever preferred_lft forever

    inet 192.168.56.200/32 scope global enp0s3

       valid_lft forever preferred_lft forever

    inet6 fe80::bfdd:5d6a:d4f9:6f95/64 scope link

       valid_lft forever preferred_lft forever

    inet6 fe80::5a97:c3c9:a9df:466f/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

After the Switch

Stop MySQL on host03:

[root@host03 ~]#service mysqld stop

 

A bit over a minute after the stop, the failover fired. The new topology looks like this:
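The PostFailoverProcesses entries in the config write to two log files; after the failover they record what was detected and what the VIP hook attempted:

```shell
# On the orch host: detection/recovery events and the hook's own output.
tail /tmp/recovery.log
tail /tmp/orch.log
```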

 

Check whether the VIP moved as well. As shown below, the VIP now sits on the new master, host01.

 [root@host01 ~]# ip addr show dev enp0s3

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 08:00:27:28:0d:72 brd ff:ff:ff:ff:ff:ff

    inet 192.168.56.103/24 brd 192.168.56.255 scope global enp0s3

       valid_lft forever preferred_lft forever

    inet 192.168.56.200/32 scope global enp0s3

       valid_lft forever preferred_lft forever

    inet6 fe80::8428:fc7:68fc:1079/64 scope link

       valid_lft forever preferred_lft forever

    inet6 fe80::bfdd:5d6a:d4f9:6f95/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::5a97:c3c9:a9df:466f/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

 

[root@host03 ~]# ip addr show dev enp0s3

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 08:00:27:9a:25:16 brd ff:ff:ff:ff:ff:ff

    inet 192.168.56.105/24 brd 192.168.56.255 scope global enp0s3

       valid_lft forever preferred_lft forever

    inet6 fe80::bfdd:5d6a:d4f9:6f95/64 scope link

       valid_lft forever preferred_lft forever

    inet6 fe80::5a97:c3c9:a9df:466f/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

 

How Orchestrator Detects a Master Failure

Orchestrator does not judge from the master alone; it also consults each of the replicas.

Only when the replicas agree that the master is gone does Orchestrator trigger a failover.

This makes detection far more reliable. The orchestrator documentation puts it this way:

Failure detection

orchestrator uses a holistic approach to detect master and intermediate master failures.

In a naive approach, a monitoring tool would probe the master, for example, and alert when it cannot contact or query the master server. Such approach is susceptible to false positives caused by network glitches. The naive approach further mitigates false positives by running n tests spaced by t-long intervals. This reduces, in some cases, the chances of false positives, and then increases the response time in the event of real failure.

orchestrator harnesses the replication topology. It observes not only the server itself, but also its replicas. For example, to diagnose a dead master scenario, orchestrator must both:

  • Fail to contact said master
  • Be able to contact master's replicas, and confirm they, too, cannot see the master.

Instead of triaging the error by time, orchestrator triages by multiple observers, the replication topology servers themselves. In fact, when all of a master's replicas agree that they cannot contact their master, the replication topology is broken de-facto, and a failover is justified.

orchestrator's holistic failure detection approach is known to be very reliable in production.

Summary

1) The orch_hook script was found on the web and adapted slightly for my environment.

     It is certainly not polished and will have bugs; one goal of this experiment was simply to verify that Orchestrator's hook mechanism actually works.

2) Hand-rolled scripts like these are easy to get wrong. When a proven solution already exists, prefer it over writing your own.

     For example, semi-synchronous replication plus keepalived for the VIP may be a better design: one master and two replicas, with one replica configured semi-sync (call it SlaveSync) and the other asynchronous.

     The master and SlaveSync would share a VIP managed by keepalived (with an Orchestrator hook starting/stopping the keepalived service to move the VIP).

3) Reportedly ProxySQL + Orchestrator is an even better approach; I will try that next time.

 

References

A partial list of references:

https://www.cnblogs.com/zhoujinyi/p/10394389.html

https://www.jianshu.com/p/91833222581a

https://github.com/openark/orchestrator/
