Preface
The industry has some basic requirements for system high availability. Put simply, they boil down to the following.
- There is no single point of failure anywhere in the system architecture.
- The availability of the service is guaranteed to the greatest extent possible.
High availability is usually measured in "nines", that is, the percentage of total time during which the system is guaranteed to be serving requests. For example, to reach 99.99% availability, the total downtime over a whole year must not exceed roughly 52.56 minutes.
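As a quick sanity check on these numbers, the allowed downtime for a given availability target can be computed directly. The short shell sketch below is my own illustration (not part of the original setup) and simply does the arithmetic for 99.99%.

```bash
#!/bin/bash
# Allowed downtime per year for a given availability target (illustration only).
availability=99.99
total_minutes=$(echo "365 * 24 * 60" | bc)   # 525600 minutes in a (non-leap) year
downtime=$(echo "scale=2; $total_minutes * (100 - $availability) / 100" | bc)
echo "Allowed downtime per year at ${availability}%: ${downtime} minutes"   # ~52.56
```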
System High-Availability Architecture
Since we want to build a highly available system, what exactly should the architecture look like? The architecture we are going to build can be simplified as shown in the figure below.
Server Planning
Because my machine resources are limited, I build the high-availability environment on the five servers listed below. You can extend the environment to more servers by following this article; the setup steps are exactly the same.
Hostname | IP Address | Installed Services |
---|---|---|
binghe151 | 192.168.175.151 | Mycat、Zookeeper、MySQL、HAProxy、Keepalived、Xinetd |
binghe152 | 192.168.175.152 | Zookeeper、MySQL |
binghe153 | 192.168.175.153 | Zookeeper、MySQL |
binghe154 | 192.168.175.154 | Mycat、MySQL、HAProxy、Keepalived、Xinetd |
binghe155 | 192.168.175.155 | MySQL |
Note: HAProxy and Keepalived are best deployed on the same servers as Mycat.
Installing MySQL
For the MySQL installation steps, you can follow the WeChat official account 【冰河技術】 and refer to the article 《MySQL之——原始碼編譯MySQL8.x+升級gcc+升級cmake(親測完整版)》.
Installing the JDK
Since both Mycat and Zookeeper need a JDK to run, we have to install a JDK on every server.
Here I take installing the JDK on the binghe151 server as an example; the installation on the other servers is exactly the same. The steps are as follows.
(1) Download JDK 1.8 from the official JDK download page: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
Note: the JDK package I downloaded is jdk-8u212-linux-x64.tar.gz; if the JDK has been updated since, simply download the corresponding version.
(2) Upload the downloaded jdk-8u212-linux-x64.tar.gz package to the /usr/local/src directory on the binghe151 server.
(3) Extract the jdk-8u212-linux-x64.tar.gz file, as shown below.
tar -zxvf jdk-8u212-linux-x64.tar.gz
(4) Move the extracted jdk1.8.0_212 directory to the /usr/local directory on the binghe151 server, as shown below.
mv jdk1.8.0_212/ /usr/local/
(5) Configure the JDK system environment variables, as shown below.
vim /etc/profile
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH
Make the environment variables take effect, as shown below.
source /etc/profile
(6) Check the JDK version, as shown below.
[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
The output shows the JDK version information correctly, which means the JDK was installed successfully.
Installing Mycat
Download the Mycat 1.6.7.4 Release, extract it to the /usr/local/mycat directory on the server, and configure Mycat's system environment variables. Then edit Mycat's configuration files; the final configuration is shown below.
- schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="shop" checkSQLschema="false" sqlMaxLimit="1000">
<!--<table name="order_master" primaryKey="order_id" dataNode = "ordb"/>-->
<table name="order_master" primaryKey="order_id" dataNode = "orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" autoIncrement="true">
<childTable name="order_detail" primaryKey="order_detail_id" joinKey="order_id" parentKey="order_id" autoIncrement="true"/>
</table>
<table name="order_cart" primaryKey="cart_id" dataNode = "ordb"/>
<table name="order_customer_addr" primaryKey="customer_addr_id" dataNode = "ordb"/>
<table name="region_info" primaryKey="region_id" dataNode = "ordb,prodb,custdb" type="global"/>
<table name="serial" primaryKey="id" dataNode = "ordb"/>
<table name="shipping_info" primaryKey="ship_id" dataNode = "ordb"/>
<table name="warehouse_info" primaryKey="w_id" dataNode = "ordb"/>
<table name="warehouse_proudct" primaryKey="wp_id" dataNode = "ordb"/>
<table name="product_brand_info" primaryKey="brand_id" dataNode = "prodb"/>
<table name="product_category" primaryKey="category_id" dataNode = "prodb"/>
<table name="product_comment" primaryKey="comment_id" dataNode = "prodb"/>
<table name="product_info" primaryKey="product_id" dataNode = "prodb"/>
<table name="product_pic_info" primaryKey="product_pic_id" dataNode = "prodb"/>
<table name="product_supplier_info" primaryKey="supplier_id" dataNode = "prodb"/>
<table name="customer_balance_log" primaryKey="balance_id" dataNode = "custdb"/>
<table name="customer_inf" primaryKey="customer_inf_id" dataNode = "custdb"/>
<table name="customer_level_inf" primaryKey="customer_level" dataNode = "custdb"/>
<table name="customer_login" primaryKey="customer_id" dataNode = "custdb"/>
<table name="customer_login_log" primaryKey="login_id" dataNode = "custdb"/>
<table name="customer_point_log" primaryKey="point_id" dataNode = "custdb"/>
</schema>
<dataNode name="mycat" dataHost="binghe151" database="mycat" />
<dataNode name="ordb" dataHost="binghe152" database="order_db" />
<dataNode name="prodb" dataHost="binghe153" database="product_db" />
<dataNode name="custdb" dataHost="binghe154" database="customer_db" />
<dataNode name="orderdb01" dataHost="binghe152" database="orderdb01" />
<dataNode name="orderdb02" dataHost="binghe152" database="orderdb02" />
<dataNode name="orderdb03" dataHost="binghe153" database="orderdb03" />
<dataNode name="orderdb04" dataHost="binghe153" database="orderdb04" />
<dataHost name="binghe151" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe51" url="192.168.175.151:3306" user="mycat" password="mycat"/>
</dataHost>
<dataHost name="binghe152" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe52" url="192.168.175.152:3306" user="mycat" password="mycat"/>
</dataHost>
<dataHost name="binghe153" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe53" url="192.168.175.153:3306" user="mycat" password="mycat"/>
</dataHost>
<dataHost name="binghe154" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe54" url="192.168.175.154:3306" user="mycat" password="mycat"/>
</dataHost>
</mycat:schema>
- server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
<system>
<property name="useHandshakeV10">1</property>
<property name="defaultSqlParser">druidparser</property>
<property name="serverPort">3307</property>
<property name="managerPort">3308</property>
<property name="nonePasswordLogin">0</property>
<property name="bindIp">0.0.0.0</property>
<property name="charset">utf8mb4</property>
<property name="frontWriteQueueSize">2048</property>
<property name="txIsolation">2</property>
<property name="processors">2</property>
<property name="idleTimeout">1800000</property>
<property name="sqlExecuteTimeout">300</property>
<property name="useSqlStat">0</property>
<property name="useGlobleTableCheck">0</property>
<property name="sequenceHandlerType">1</property>
<property name="defaultMaxLimit">1000</property>
<property name="maxPacketSize">104857600</property>
<property name="sqlInterceptor">
io.mycat.server.interceptor.impl.StatisticsSqlInterceptor
</property>
<property name="sqlInterceptorType">
UPDATE,DELETE,INSERT
</property>
<property name="sqlInterceptorFile">/tmp/sql.txt</property>
</system>
<firewall>
<whitehost>
<host user="mycat" host="192.168.175.151"></host>
</whitehost>
<blacklist check="true">
<property name="noneBaseStatementAllow">true</property>
<property name="deleteWhereNoneCheck">true</property>
</blacklist>
</firewall>
<user name="mycat" defaultAccount="true">
<property name="usingDecrypt">1</property>
<property name="password">cTwf23RrpBCEmalp/nx0BAKenNhvNs2NSr9nYiMzHADeEDEfwVWlI6hBDccJjNBJqJxnunHFp5ae63PPnMfGYA==</property>
<property name="schemas">shop</property>
</user>
</mycat:server>
- rule.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
<tableRule name="order_master">
<rule>
<columns>customer_id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<property name="count">4</property>
</function>
</mycat:rule>
- sequence_db_conf.properties
#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat
The Mycat configuration above is only for reference; you do not have to copy it exactly, just adapt it to your own business needs. The focus of this article is building the high-availability environment for Mycat.
In MySQL, create the account that Mycat uses to connect to MySQL, as shown below.
CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
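Before wiring the account into Mycat, it can be worth a quick connection test from the Mycat server. The loop below is a hedged convenience sketch; the hosts and port follow the planning table above, so adjust them to your own environment.

```bash
# Optional check: confirm the mycat account can log in to each backend MySQL instance.
for host in 192.168.175.151 192.168.175.152 192.168.175.153 192.168.175.154; do
  mysql -umycat -pmycat -h"$host" -P3306 -e "SELECT user(), @@hostname;" \
    && echo "OK: $host" || echo "FAILED: $host"
done
```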
Installing the Zookeeper Cluster
After installing and configuring the JDK, it is time to set up the Zookeeper cluster. According to the server plan, the Zookeeper cluster is deployed on the binghe151, binghe152, and binghe153 servers.
1. Download Zookeeper
Download the Zookeeper package from the Apache mirror at https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/.
Alternatively, run the following command on the binghe151 server to download zookeeper-3.5.5 directly.
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz
Running the command above downloads the apache-zookeeper-3.5.5-bin.tar.gz package directly onto the binghe151 server.
2. Install and Configure Zookeeper
Note: steps (1), (2), and (3) are all performed on the binghe151 server.
(1) Extract the Zookeeper package
Run the following commands on the binghe151 server to extract Zookeeper into the /usr/local/ directory and rename the Zookeeper directory to zookeeper-3.5.5.
tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin zookeeper-3.5.5
(2) Configure the Zookeeper system environment variables
As before, the Zookeeper system environment variables need to be configured in the /etc/profile file, as follows:
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH
Combined with the JDK environment variables configured earlier, the overall /etc/profile configuration looks like this:
MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH
(3) Configure Zookeeper
First, rename the zoo_sample.cfg file in the $ZOOKEEPER_HOME/conf directory ($ZOOKEEPER_HOME is the Zookeeper installation directory) to zoo.cfg. The commands are as follows:
cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg
Next, edit the zoo.cfg file; the modified contents are as follows:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
clientPort=2181
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888
Create the data and dataLog directories under the Zookeeper installation directory.
mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog
Switch to the newly created data directory and create the myid file, whose content is the digit 1, as shown below:
cd /usr/local/zookeeper-3.5.5/data
vim myid
Write the digit 1 into the myid file.
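If you prefer not to open vim, the same result can be achieved non-interactively; this simply mirrors the echo commands used for the other nodes later in this article.

```bash
# Equivalent non-interactive way to create the myid file on binghe151.
echo "1" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid   # should print: 1
```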
3. Copy Zookeeper and the Environment Variable File to the Other Servers
Note: steps (1) and (2) are performed on the binghe151 server.
(1) Copy Zookeeper to the other servers
According to the server plan, copy Zookeeper to the binghe152 and binghe153 servers, as shown below:
scp -r /usr/local/zookeeper-3.5.5/ binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5/ binghe153:/usr/local/
(2) Copy the environment variable file to the other servers
According to the server plan, copy the /etc/profile environment variable file to the binghe152 and binghe153 servers, as shown below:
scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/
You may be prompted for a password during these operations; just enter it when asked.
4. Modify the myid File on the Other Servers
Change the content of Zookeeper's myid file on the binghe152 server to the digit 2, and the content of the myid file on the binghe153 server to the digit 3. Specifically:
Run the following on the binghe152 server:
echo "2" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
2
Run the following on the binghe153 server:
echo "3" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
3
5. Make the Environment Variables Take Effect
Run the following on binghe151, binghe152, and binghe153 respectively to make the system environment variables take effect.
source /etc/profile
6. Start the Zookeeper Cluster
Run the following on binghe151, binghe152, and binghe153 respectively to start the Zookeeper cluster.
zkServer.sh start
7. Check the Startup Status of the Zookeeper Cluster
- binghe151 server
[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
- binghe152 server
[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
- binghe153 server
[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
As you can see, the Zookeeper instances on binghe151 and binghe153 run as followers, while the instance on binghe152 runs as the leader.
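If you want to check all three nodes from a single terminal, a small loop over ssh works as well. This is only a convenience sketch; it assumes ssh access from binghe151 to the other nodes, which the earlier scp steps already relied on.

```bash
# Query the role of every Zookeeper node from binghe151.
for host in binghe151 binghe152 binghe153; do
  echo "=== $host ==="
  ssh "$host" "source /etc/profile && zkServer.sh status" 2>/dev/null | grep Mode
done
```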
Initializing the Mycat Configuration into the Zookeeper Cluster
Note: the data in Zookeeper is initialized on the binghe151 server, because we installed Mycat on binghe151 earlier.
1. Check the Initialization Script
The bin directory under the Mycat installation directory provides an init_zk_data.sh script, as shown below.
[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root 3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root 1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root 15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root 2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root 2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root 99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64
The init_zk_data.sh script initializes the Mycat configuration into Zookeeper: it reads the configuration files from the conf directory under the Mycat installation directory and writes them into the Zookeeper cluster.
2. Copy the Mycat Configuration Files
First, let's look at the files in the conf directory under the Mycat installation directory, as shown below.
[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root 92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root 51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root 67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root 340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root 86 Mar 1 22:37 dnindex.properties
-rwxrwxrwx 1 root root 446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root 183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root 271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root 16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root 108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root 988 Mar 1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar 3 23:59 schema.xml
-rwxrwxrwx 1 root root 440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root 84 Mar 3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root 29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root 28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root 53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar 4 15:14 server.xml
-rwxrwxrwx 1 root root 18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload
Next, copy the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files from Mycat's conf directory into the zkconf directory under conf, as shown below.
cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/
3. Write the Mycat Configuration into the Zookeeper Cluster
Run the init_zk_data.sh script to initialize the configuration into the Zookeeper cluster, as shown below.
[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh
o2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
o2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
o2020-03-08 20:03:14 INFO Done
The output above tells us that Mycat's initial configuration was written to Zookeeper successfully.
4. Verify That the Mycat Configuration Was Written to Zookeeper
We can use Zookeeper's command-line client zkCli.sh to log in to Zookeeper and verify whether the Mycat configuration was written successfully.
First, log in to Zookeeper, as shown below.
[root@binghe151 ~]# zkCli.sh
Connecting to localhost:2181
###################N lines of output omitted######################
Welcome to ZooKeeper!
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
Next, inspect the mycat information from the Zookeeper command line, as shown below.
[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3]
As you can see, there are six child nodes under /mycat/mycat-cluster-1. Next, look at the schema node, as shown below.
[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]
Next, look at the dataHost configuration, as shown below.
[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
The output above is rather hard to read, but it is clearly JSON. After formatting it, the result looks like this.
[
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe151",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe51",
"url": "192.168.175.151:3306",
"password": "root",
"user": "root"
}
]
},
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe152",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe52",
"url": "192.168.175.152:3306",
"password": "root",
"user": "root"
}
]
},
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe153",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe53",
"url": "192.168.175.153:3306",
"password": "root",
"user": "root"
}
]
},
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe154",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe54",
"url": "192.168.175.154:3306",
"password": "root",
"user": "root"
}
]
}
]
As you can see, the dataHost information we configured in Mycat's schema.xml file has been written to Zookeeper successfully.
To verify that the Mycat configuration has also been synchronized to the other Zookeeper nodes, we can log in to Zookeeper on the binghe152 and binghe153 servers and check whether the Mycat configuration was written successfully there as well.
- binghe152 server
[root@binghe152 ~]# zkCli.sh
Connecting to localhost:2181
#################N lines of output omitted################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
As you can see, the Mycat configuration was successfully synchronized to Zookeeper on the binghe152 server.
- binghe153 server
[root@binghe153 ~]# zkCli.sh
Connecting to localhost:2181
#####################N lines of output omitted#####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
As you can see, the Mycat configuration was successfully synchronized to Zookeeper on the binghe153 server.
Configuring Mycat to Start with Zookeeper Support
1. Configure Mycat on the binghe151 Server
On the binghe151 server, go to the conf directory under the Mycat installation directory and list the files, as shown below.
[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root 92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root 51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root 67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root 340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root 86 Mar 1 22:37 dnindex.properties
-rwxrwxrwx 1 root root 446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root 183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root 271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root 16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root 108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root 988 Mar 1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar 3 23:59 schema.xml
-rwxrwxrwx 1 root root 440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root 84 Mar 3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root 29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root 28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root 53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar 4 15:14 server.xml
-rwxrwxrwx 1 root root 18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload
As you can see, there is a myid.properties file in Mycat's conf directory. Next, edit this file with vim, as shown below.
vim myid.properties
The edited myid.properties file looks like this.
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1
The most important parameters are described below.
- loadZk: whether to load the configuration from Zookeeper. true: yes; false: no.
- zkURL: the Zookeeper connection address; multiple Zookeeper addresses are separated by commas.
- clusterId: the ID of the current Mycat cluster. It must match the directory name under /mycat in Zookeeper, as shown below.
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
- myid: the ID of the current Mycat node. My naming convention here is the prefix mycat_ followed by the last octet of the server's IP address.
- clusterSize: the number of Mycat nodes in the cluster. Since we deploy Mycat on the binghe151 and binghe154 nodes, the number of Mycat nodes is 2.
- clusterNodes: all Mycat nodes in the cluster, listed by the node IDs configured in myid and separated by commas. Here I configure the nodes as mycat_151,mycat_154.
2. Install a Fresh Mycat on the binghe154 Server
Download and install the same Mycat version as on binghe151 on the binghe154 server, and extract it to the /usr/local/mycat directory on binghe154.
Alternatively, run the following command directly on the binghe151 server to copy the Mycat installation directory to the binghe154 server.
[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local
Note: do not forget to configure Mycat's system environment variables on the binghe154 server.
3. Modify the Mycat Configuration on the binghe154 Server
On the binghe154 server, edit the myid.properties file in the conf directory under the Mycat installation directory, as shown below.
vim /usr/local/mycat/conf/myid.properties
The modified myid.properties file looks like this.
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1
4. Restart Mycat
Restart Mycat on the binghe151 server and on the binghe154 server, as shown below.
Note: restart Mycat on binghe151 first, then restart Mycat on binghe154.
- binghe151 server
[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
- binghe154 server
[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
Check the Mycat startup logs on binghe151 and binghe154, as shown below.
STATUS | wrapper | 2020/03/08 21:08:15 | <-- Wrapper Stopped
STATUS | wrapper | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper | 2020/03/08 21:08:15 | Launching a JVM...
INFO | jvm 1 | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO | jvm 1 | 2020/03/08 21:08:16 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
INFO | jvm 1 | 2020/03/08 21:08:16 |
INFO | jvm 1 | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log
The log output shows that Mycat was restarted successfully.
Because we restarted Mycat on binghe151 first and then on binghe154, you will find that the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files in Mycat's conf directory on binghe154 are now identical to the configuration files on binghe151. That is the result of Mycat on binghe154 reading its configuration from Zookeeper.
From now on, we only need to modify the Mycat-related configuration in Zookeeper and it will be synchronized to the Mycat nodes automatically, which guarantees that all Mycat nodes use the same configuration.
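In practice, the update workflow used later in this article looks like the sketch below: edit the files under conf/zkconf on binghe151, push them into Zookeeper again with init_zk_data.sh, and restart the Mycat nodes so they pick up the new configuration. This is a summary of the steps shown in this article, not an additional mechanism.

```bash
# On binghe151: edit the configuration that should be shared by all Mycat nodes.
vim /usr/local/mycat/conf/zkconf/schema.xml

# Re-initialize the Mycat configuration in Zookeeper.
/usr/local/mycat/bin/init_zk_data.sh

# Restart Mycat on each node (binghe151 and binghe154) so the new config takes effect.
mycat restart
```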
Configuring the Virtual IP
Configure the virtual IP on both binghe151 and binghe154, as shown below.
ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1
After configuring the virtual IP, the result looks like the following, using the binghe151 server as an example.
[root@binghe151 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:10:A1:45
inet addr:192.168.175.151 Bcast:192.168.175.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25559422 (24.3 MiB) TX bytes:55997016 (53.4 MiB)
eth0:1 Link encap:Ethernet HWaddr 00:0C:29:10:A1:45
inet addr:192.168.175.110 Bcast:192.168.175.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2934009 (2.7 MiB) TX bytes:2934009 (2.7 MiB)
Note: a VIP added on the command line disappears after the server reboots, so it is best to put the commands that create the VIP into a script file, for example /usr/local/script/vip.sh, as shown below.
mkdir /usr/local/script
vim /usr/local/script/vip.sh
The file contents are as follows.
ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1
Next, add the /usr/local/script/vip.sh file to the server's startup sequence, as shown below.
echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
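Note that /usr/local/script/vip.sh needs execute permission for this to work at boot, and on some systems (CentOS 7, for example) /etc/rc.d/rc.local itself must be executable as well. A hedged addition you may want to apply:

```bash
# Make the VIP script executable and ensure rc.local is allowed to run at boot.
chmod +x /usr/local/script/vip.sh
chmod +x /etc/rc.d/rc.local
```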
Configuring IP Forwarding
Configure kernel IP forwarding on the binghe151 and binghe154 servers by editing the /etc/sysctl.conf file, as shown below.
vim /etc/sysctl.conf
Find the following line.
net.ipv4.ip_forward = 0
Change it to the following.
net.ipv4.ip_forward = 1
Save and exit vim, then run the following command to make the configuration take effect.
sysctl -p
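You can confirm that the setting took effect with either of the following commands; both should report 1.

```bash
# Verify that IP forwarding is now enabled.
sysctl net.ipv4.ip_forward
cat /proc/sys/net/ipv4/ip_forward
```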
Installing and Configuring the xinetd Service
We need to install the xinetd service on the servers that run HAProxy, that is on binghe151 and binghe154, in order to open port 48700.
(1) Install the xinetd service from the command line, as shown below.
yum install xinetd -y
(2) Edit the /etc/xinetd.conf file, as shown below.
vim /etc/xinetd.conf
Check whether the file contains the following line.
includedir /etc/xinetd.d
If the /etc/xinetd.conf file does not contain this line, add it; if it is already there, no change is needed.
(3) Create the /etc/xinetd.d directory, as shown below.
mkdir /etc/xinetd.d
Note: if the /etc/xinetd.d directory already exists, creating it reports the following error.
mkdir: cannot create directory `/etc/xinetd.d': File exists
You can safely ignore this error message.
(4) Under the /etc/xinetd.d directory, add the configuration file mycat_status for the Mycat status-check service, as shown below.
touch /etc/xinetd.d/mycat_status
(5) Edit the mycat_status file, as shown below.
vim /etc/xinetd.d/mycat_status
The edited mycat_status file contains the following.
service mycat_status
{
flags = REUSE
socket_type = stream
port = 48700
wait = no
user = root
server = /usr/local/bin/mycat_check.sh
log_on_failure += USERID
disable = no
}
Some of the xinetd configuration parameters are explained below.
- socket_type: how packets are handled; stream means a TCP stream.
- port: the port the xinetd service listens on.
- wait: no means xinetd does not wait, i.e. the service is handled in a multi-threaded fashion.
- user: the user that runs the xinetd service.
- server: the script to be started for the service.
- log_on_failure: what to log when a request fails.
- disable: set this to no to enable the service under xinetd.
(6) Add the mycat_check.sh service script under the /usr/local/bin directory, as shown below.
touch /usr/local/bin/mycat_check.sh
(7) Edit the /usr/local/bin/mycat_check.sh file, as shown below.
vim /usr/local/bin/mycat_check.sh
The edited file contains the following.
#!/bin/bash
# Count how many lines of "mycat status" report "not running".
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
# Mycat is running: answer the health check with HTTP 200.
/bin/echo -e "HTTP/1.1 200 OK\r\n"
else
# Mycat is down: answer with HTTP 503 and try to start it.
/bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
/usr/local/mycat/bin/mycat start
fi
Give the mycat_check.sh file execute permission, as shown below.
chmod a+x /usr/local/bin/mycat_check.sh
(8) Edit the /etc/services file, as shown below.
vim /etc/services
Add the following line at the end of the file.
mycat_status 48700/tcp # mycat_status
The port number here must match the one configured in the /etc/xinetd.d/mycat_status file.
(9) Restart the xinetd service, as shown below.
service xinetd restart
(10) Check whether the mycat_status service started successfully, as shown below.
- binghe151 server
[root@binghe151 ~]# netstat -antup|grep 48700
tcp 0 0 :::48700 :::* LISTEN 2776/xinetd
- binghe154 server
[root@binghe154 ~]# netstat -antup|grep 48700
tcp 0 0 :::48700 :::* LISTEN 6654/xinetd
The output shows that the mycat_status service started successfully on both servers.
At this point xinetd is installed and configured, which means the Mycat status-check service is in place.
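Because the xinetd service simply returns the HTTP status line produced by mycat_check.sh, you can test it end to end with curl. This is a hedged check of my own; the exact response depends on the script above: a 200 OK line means Mycat is running, a 503 line means it was down and the script attempted a restart.

```bash
# Hit the Mycat status-check port on both HAProxy servers.
curl -i http://192.168.175.151:48700
curl -i http://192.168.175.154:48700
```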
Installing and Configuring HAProxy
Install HAProxy directly on the binghe151 and binghe154 servers with the following command.
yum install haproxy -y
After the installation, HAProxy needs to be configured. Its configuration directory is /etc/haproxy; let's look at the files in that directory, as shown below.
[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21 2016 haproxy.cfg
There is a haproxy.cfg file in the /etc/haproxy/ directory. Next, modify haproxy.cfg; the modified file looks like this.
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen admin_status
bind 0.0.0.0:48800
stats uri /admin-status
stats auth admin:admin
listen allmycat_service
bind 0.0.0.0:3366
mode tcp
option tcplog
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
balance roundrobin
server mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
server mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3
listen allmycat_admin
bind 0.0.0.0:3377
mode tcp
option tcplog
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
balance roundrobin
server mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
server mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3
Next, start HAProxy on the binghe151 and binghe154 servers, as shown below.
haproxy -f /etc/haproxy/haproxy.cfg
Then use the mysql command to connect to the virtual IP and port that HAProxy listens on in order to reach Mycat, as shown below.
[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
As you can see, the connection to Mycat succeeds.
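The admin_status listener defined in haproxy.cfg also exposes a simple statistics page, which is handy for checking whether HAProxy considers both Mycat backends healthy. The sketch below uses the credentials from the stats auth line above; you can equally open the same URL in a browser.

```bash
# View the HAProxy statistics page defined by "listen admin_status" above.
curl -u admin:admin http://192.168.175.151:48800/admin-status
```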
Installing Keepalived
1. Install and Configure Keepalived
Install Keepalived directly on the binghe151 and binghe154 servers with the following command.
yum install keepalived -y
After a successful installation, a keepalived directory is created under /etc. Next, configure the keepalived.conf file in the /etc/keepalived directory, as shown below.
vim /etc/keepalived/keepalived.conf
- binghe151 server configuration
! Configuration File for keepalived
vrrp_script chk_http_port {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_http_port
}
virtual_ipaddress {
192.168.175.110 dev eth0 scope global
}
}
- binghe154 server configuration
! Configuration File for keepalived
vrrp_script chk_http_port {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 120
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_http_port
}
virtual_ipaddress {
192.168.175.110 dev eth0 scope global
}
}
2. Write the HAProxy Health-Check Script
Next, create a check_haproxy.sh script in the /etc/keepalived directory on both binghe151 and binghe154; the script contents are as follows.
#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
# Record a timestamped check entry in the log file.
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
# If HAProxy is not running, try to start it and give it a few seconds to come up.
A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ];then
echo $STARTHAPROXY >> $LOGFILE
$STARTHAPROXY >> $LOGFILE 2>&1
sleep 5
fi
# Exit 0 (healthy) if HAProxy is running by now, otherwise exit 1 so that
# Keepalived treats this node as unhealthy.
if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
exit 1
else
exit 0
fi
Give the check_haproxy.sh script execute permission with the following command.
chmod a+x /etc/keepalived/check_haproxy.sh
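Before handing the script over to Keepalived, it is worth running it once by hand to confirm it behaves as expected: exit code 0 when HAProxy is up, 1 otherwise. A quick manual test might look like this.

```bash
# Run the health-check script manually and inspect its exit code and log output.
/etc/keepalived/check_haproxy.sh
echo "exit code: $?"
tail -n 5 /var/log/keepalived-haproxy-state.log
```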
3. Start Keepalived
With the configuration in place, we can start Keepalived on both binghe151 and binghe154, as shown below.
/etc/init.d/keepalived start
Check whether Keepalived started successfully, as shown below.
- binghe151 server
[root@binghe151 ~]# ps -ef | grep keepalived
root 1221 1 0 20:06 ? 00:00:00 keepalived -D
root 1222 1221 0 20:06 ? 00:00:00 keepalived -D
root 1223 1221 0 20:06 ? 00:00:02 keepalived -D
root 93290 3787 0 21:42 pts/0 00:00:00 grep keepalived
- binghe154 server
[root@binghe154 ~]# ps -ef | grep keepalived
root 1224 1 0 20:06 ? 00:00:00 keepalived -D
root 1225 1224 0 20:06 ? 00:00:00 keepalived -D
root 1226 1224 0 20:06 ? 00:00:02 keepalived -D
root 94636 3798 0 21:43 pts/0 00:00:00 grep keepalived
As you can see, the Keepalived service started successfully on both servers.
4. Verify the Virtual IP Bound by Keepalived
Next, check whether Keepalived has bound the virtual IP on each of the two servers.
- binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/32 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::20c:29ff:fe10:a145/64 scope link
valid_lft forever preferred_lft forever
Note the following line in the output.
inet 192.168.175.110/32 scope global eth0
It shows that Keepalived on the binghe151 server has bound the virtual IP 192.168.175.110.
- binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::250:56ff:fe22:2a75/64 scope link
valid_lft forever preferred_lft forever
As you can see, Keepalived on the binghe154 server has not bound the virtual IP.
5. Test Virtual IP Failover
How do we test that the virtual IP fails over? First, stop Keepalived on the binghe151 server, as shown below.
/etc/init.d/keepalived stop
Next, check the virtual IP binding of Keepalived on the binghe154 server, as shown below.
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/32 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::250:56ff:fe22:2a75/64 scope link
valid_lft forever preferred_lft forever
In the output you can now find the following line.
inet 192.168.175.110/32 scope global eth0
It shows that Keepalived on the binghe154 server has bound the virtual IP 192.168.175.110, i.e. the virtual IP has failed over to the binghe154 server.
6. Keepalived on binghe151 Takes the Virtual IP Back
Next, start Keepalived on the binghe151 server again, as shown below.
/etc/init.d/keepalived start
After it starts, check the virtual IP binding again, as shown below.
- binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/32 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::20c:29ff:fe10:a145/64 scope link
valid_lft forever preferred_lft forever
- binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::250:56ff:fe22:2a75/64 scope link
valid_lft forever preferred_lft forever
Because the Keepalived priority configured on binghe151 is higher than the one on binghe154, once Keepalived on binghe151 is started again it preempts the virtual IP and takes it back.
Configuring MySQL Master-Slave Replication
For simplicity, I configure master-slave replication between the MySQL instances on binghe154 and binghe155; you can of course set up replication for the MySQL instances on the other servers in the same way (note: what I configure here is a one-master, one-slave topology).
1. Edit the my.cnf File
- binghe154 server
server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
- binghe155 server
server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
2. Synchronize the MySQL Data Between the Two Servers
The binghe154 server only has the customer_db database, so we export it with mysqldump, as shown below.
[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password:
Next, inspect the binghe154.sql file.
more binghe154.sql
In the file we can find the following line.
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;
It tells us that the current MySQL binary log file is mysql-bin.000042 and the position in that binary log file is 995.
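Instead of paging through the whole dump with more, you can pull the coordinates out directly. This is just a convenience sketch; the exact file name and position will of course differ in your environment, and with --master-data=2 the line is written as a commented statement.

```bash
# Extract the binlog coordinates recorded by mysqldump --master-data=2.
grep -m 1 "CHANGE MASTER TO" binghe154.sql
# Expected output (values vary), typically prefixed with "-- ":
# -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;
```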
Next, copy the binghe154.sql file to the binghe155 server, as shown below.
scp binghe154.sql 192.168.175.155:/usr/local/src
On the binghe155 server, import the binghe154.sql script into MySQL, as shown below.
mysql -uroot -p < /usr/local/src/binghe154.sql
This completes the data initialization.
3. Create the Replication Account
In MySQL on the binghe154 server, create the MySQL account used for master-slave replication, as shown below.
mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)
mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
4. Configure the Replication Link
Log in to MySQL on the binghe155 server and configure the replication link with the following command.
mysql> change master to
> master_host='192.168.175.154',
> master_port=3306,
> master_user='repl',
> master_password='repl123456',
> MASTER_LOG_FILE='mysql-bin.000042',
> MASTER_LOG_POS=995;
The values MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995 are exactly the ones we found in the binghe154.sql file.
5. Start the Slave
Start the slave from the MySQL command line on the binghe155 server, as shown below.
mysql> start slave;
Check whether the slave started successfully, as shown below.
mysql> SHOW slave STATUS \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.175.154
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 1360
Relay_Log_File: relay-bin.000003
Relay_Log_Pos: 322
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
#################part of the output omitted##################
The output shows that both Slave_IO_Running and Slave_SQL_Running are Yes, which means the MySQL master-slave replication environment was set up successfully.
Finally, do not forget to create the user that Mycat uses to connect to MySQL in the MySQL instance on the binghe155 server, as shown below.
CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
Configuring Mycat Read/Write Splitting
Modify Mycat's schema.xml file to enable read/write splitting between the MySQL instances on binghe154 and binghe155. Edit the schema.xml file in the conf/zkconf directory under the Mycat installation directory; the modified schema.xml is shown below.
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="shop" checkSQLschema="true" sqlMaxLimit="1000">
<table name="order_master" dataNode="orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" primaryKey="order_id" autoIncrement="true">
<childTable name="order_detail" joinKey="order_id" parentKey="order_id" primaryKey="order_detail_id" autoIncrement="true"/>
</table>
<table name="order_cart" dataNode="ordb" primaryKey="cart_id"/>
<table name="order_customer_addr" dataNode="ordb" primaryKey="customer_addr_id"/>
<table name="region_info" dataNode="ordb,prodb,custdb" primaryKey="region_id" type="global"/>
<table name="serial" dataNode="ordb" primaryKey="id"/>
<table name="shipping_info" dataNode="ordb" primaryKey="ship_id"/>
<table name="warehouse_info" dataNode="ordb" primaryKey="w_id"/>
<table name="warehouse_proudct" dataNode="ordb" primaryKey="wp_id"/>
<table name="product_brand_info" dataNode="prodb" primaryKey="brand_id"/>
<table name="product_category" dataNode="prodb" primaryKey="category_id"/>
<table name="product_comment" dataNode="prodb" primaryKey="comment_id"/>
<table name="product_info" dataNode="prodb" primaryKey="product_id"/>
<table name="product_pic_info" dataNode="prodb" primaryKey="product_pic_id"/>
<table name="product_supplier_info" dataNode="prodb" primaryKey="supplier_id"/>
<table name="customer_balance_log" dataNode="custdb" primaryKey="balance_id"/>
<table name="customer_inf" dataNode="custdb" primaryKey="customer_inf_id"/>
<table name="customer_level_inf" dataNode="custdb" primaryKey="customer_level"/>
<table name="customer_login" dataNode="custdb" primaryKey="customer_id"/>
<table name="customer_login_log" dataNode="custdb" primaryKey="login_id"/>
<table name="customer_point_log" dataNode="custdb" primaryKey="point_id"/>
</schema>
<dataNode name="mycat" dataHost="binghe151" database="mycat"/>
<dataNode name="ordb" dataHost="binghe152" database="order_db"/>
<dataNode name="prodb" dataHost="binghe153" database="product_db"/>
<dataNode name="custdb" dataHost="binghe154" database="customer_db"/>
<dataNode name="orderdb01" dataHost="binghe152" database="orderdb01"/>
<dataNode name="orderdb02" dataHost="binghe152" database="orderdb02"/>
<dataNode name="orderdb03" dataHost="binghe153" database="orderdb03"/>
<dataNode name="orderdb04" dataHost="binghe153" database="orderdb04"/>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe151" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe51" url="192.168.175.151:3306" password="mycat" user="mycat"/>
</dataHost>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe152" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe52" url="192.168.175.152:3306" password="mycat" user="mycat"/>
</dataHost>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe153" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe53" url="192.168.175.153:3306" password="mycat" user="mycat"/>
</dataHost>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe154" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe54" url="192.168.175.154:3306" password="mycat" user="mycat">
<readHost host="binghe55" url="192.168.175.155:3306" user="mycat" password="mycat"/>
</writeHost>
<writeHost host="binghe55" url="192.168.175.155:3306" password="mycat" user="mycat"/>
</dataHost>
</mycat:schema>
Save and exit vim, then re-initialize the data in Zookeeper, as shown below.
/usr/local/mycat/bin/init_zk_data.sh
Once the command above succeeds, the configuration is automatically synchronized to the schema.xml file in the conf directory under the Mycat installation directory on both binghe151 and binghe154.
Next, restart the Mycat service on binghe151 and on binghe154.
mycat restart
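After both nodes are back up, you can optionally confirm that they ended up with identical configuration files. This is a hedged check run from binghe151; it assumes ssh access to binghe154, which the earlier scp step already relied on.

```bash
# Compare the key Mycat configuration files on binghe151 and binghe154.
for f in schema.xml server.xml rule.xml sequence_db_conf.properties; do
  ssh binghe154 "cat /usr/local/mycat/conf/$f" | diff - /usr/local/mycat/conf/$f \
    && echo "$f: identical" || echo "$f: differs"
done
```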
How to Access the High-Availability Environment
At this point the whole high-availability environment is in place. Upper-layer applications connect to it through the IP and port that HAProxy listens on. For example, connecting to the environment with the mysql command looks like this.
[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop |
+----------+
1 row in set (0.10 sec)
mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables in shop |
+-----------------------+
| customer_balance_log |
| customer_inf |
| customer_level_inf |
| customer_login |
| customer_login_log |
| customer_point_log |
| order_cart |
| order_customer_addr |
| order_detail |
| order_master |
| product_brand_info |
| product_category |
| product_comment |
| product_info |
| product_pic_info |
| product_supplier_info |
| region_info |
| serial |
| shipping_info |
| warehouse_info |
| warehouse_proudct |
+-----------------------+
21 rows in set (0.00 sec)
Here I only extended the MySQL instance on binghe154 with read/write splitting; you can implement master-slave replication and read/write splitting for the MySQL instances on the other servers in the same way. With that, the whole environment provides high availability for HAProxy, Mycat, MySQL, Zookeeper, and Keepalived.
Alright, that's it for today. I'm Binghe (冰河). If you have any questions, feel free to leave a comment below or add me on WeChat so we can talk tech and level up together.