Author: 任仲禹 (Ren Zhongyu)
A member of the 愛可生 (ActionTech) DBA team, specializing in troubleshooting and performance optimization. Questions about the technical content of this article are welcome for discussion.
Source: original contribution
*Produced by the 愛可生 open source community. Original content may not be used without authorization; for republication, please contact the editor and credit the source.
Background
This work came from a customer requirement: an online business MySQL database was under heavy data volume and read/write pressure, and its data needed to be migrated to a DBLE distributed database. Because the business runs 7x24 and the available downtime window is short, a real-time data synchronization solution had to be considered.
In the past, DBLE went live almost exclusively as a fresh deployment, and real-time data synchronization was rarely implemented. Since DTLE was released last year, this situation has improved, so let's put it into practice today.
Environment Preparation
1. Deploy the target DBLE cluster
Installation of the DBLE software and the backend sharded MySQL instances is omitted.
- DBLE version 3.20.10.8, MySQL version 5.7.25
- sharding.xml
<?xml version="1.0"?>
<!DOCTYPE dble:sharding SYSTEM "sharding.dtd">
<dble:sharding xmlns:dble="http://dble.cloud/" version="4.0">
<schema name="dtle" sqlMaxLimit="-1" shardingNode="dn_01">
<singleTable name="gtid_executed_v4" shardingNode="dn_01" sqlMaxLimit="-1"></singleTable>
</schema>
<schema name="ren" sqlMaxLimit="-1" shardingNode="dn_01">
<shardingTable name="test" shardingNode="dn_01,dn_02,dn_03,dn_04" sqlMaxLimit="-1" shardingColumn="id" function="func_jumphash"></shardingTable>
</schema>
<shardingNode name="dn_03" dbGroup="dh-mysql-cluster02" database="dh_dn_03"></shardingNode>
<shardingNode name="dn_04" dbGroup="dh-mysql-cluster02" database="dh_dn_04"></shardingNode>
<shardingNode name="dn_02" dbGroup="dh-mysql-cluster01" database="dh_dn_02"></shardingNode>
<shardingNode name="dn_01" dbGroup="dh-mysql-cluster01" database="dh_dn_01"></shardingNode>
<function name="func_jumphash" class="jumpStringHash">
<property name="partitionCount">4</property>
<property name="hashSlice">0:-1</property>
</function>
</dble:sharding>
- db.xml
<?xml version="1.0"?>
<!DOCTYPE dble:db SYSTEM "db.dtd">
<dble:db xmlns:dble="http://dble.cloud/" version="4.0">
<dbGroup name="dh-mysql-cluster02" rwSplitMode="0" delayThreshold="-1">
<heartbeat timeout="0" errorRetryCount="0">show slave status</heartbeat>
<dbInstance name="10.186.61.13-3326-dh-1" url="10.186.61.13:3326" user="dbleuser" password="jpfmxIeMt1vxAJ6zd6Q10PGRRi+Qj023Dl+YXuOr3C4VXTdV5+GJaOIv5iVmWCwpXcucn/zi02HVlT7ADX+m6Q==" maxCon="100" minCon="10" primary="true" readWeight="0" id="mysql-i63009" usingDecrypt="true"></dbInstance>
</dbGroup>
<dbGroup name="dh-mysql-cluster01" rwSplitMode="0" delayThreshold="-1">
<heartbeat timeout="0" errorRetryCount="0">show slave status</heartbeat>
<dbInstance name="10.186.61.11-3316-dh-1" url="10.186.61.11:3316" user="dbleuser" password="QQWRF80AGNbx4jIAx/b2Ww7Myol1+ntlyzGmA1A3PXVISmRD/i5pgRnLLwYsXoLmH0jiv1qZAkqIBHv6Yg/XAg==" maxCon="100" minCon="10" primary="true" readWeight="0" id="mysql-47vn84" usingDecrypt="true"></dbInstance>
</dbGroup>
</dble:db>
- user.xml
<?xml version="1.0"?>
<!DOCTYPE dble:user SYSTEM "user.dtd">
<dble:user xmlns:dble="http://dble.cloud/" version="4.0">
<managerUser name="root" password="CrjpLhvVJkHk0EPW35Y07dUeTimf52zMqClYQkIAN3/dqiG1DVUe9Zr4JLh8Kl+1KH1zd7YTKu5w04QgdyQeDw==" usingDecrypt="true"></managerUser>
<shardingUser name="ren" schemas="ren,dtle" password="P+C2KazQiS3ZZ6uojBJ91MZIqYqGczspQ/ebyBZOC9xKAAkAFrqEDC9OPn/vObAyO4P8Zu3vHQJ+rljM040Kdg==" usingDecrypt="true" readOnly="false" maxCon="0" blacklist="default_black_list"></shardingUser>
</dble:user>
2. Create the test table on the source and the target
- Installation of the source MySQL server is omitted.
- The test table test must be created on both the source MySQL and the target DBLE:
use ren;
CREATE TABLE `test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(20) COLLATE utf8mb4_bin DEFAULT NULL,
`city` varchar(20) COLLATE utf8mb4_bin DEFAULT NULL,
`dt` datetime DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `idx_ctiy` (`city`)
) ENGINE=InnoDB;
3. Deploy a single-node DTLE
- DTLE community edition download (GitHub): https://github.com/actiontech...
After downloading, install directly with rpm (this example uses an internal QA build):
- rpm -ivh --prefix=/data/dtle dtle-ee-9.9.9.9.x86_64.rpm
- After installation, confirm that the service started properly
# curl -XGET "127.0.0.1:4646/v1/nodes" -s | jq
[
{
"Address": "127.0.0.1",
"ID": "223c31b4-05cd-a763-b3e7-dbea6d416576",
"Datacenter": "dc1",
"Name": "nomad0",
"NodeClass": "",
"Version": "1.1.2",
"Drain": false,
"SchedulingEligibility": "eligible",
"Status": "ready",
"StatusDescription": "",
"Drivers": {
"dtle": {
"Attributes": {
"driver.dtle.full_version": "9.9.9.9-master-a65ee13",
"driver.dtle": "true",
"driver.dtle.version": "9.9.9.9"
},
"Detected": true,
"Healthy": true,
"HealthDescription": "Healthy",
"UpdateTime": "2022-02-28T07:45:15.650289984Z"
·········
]
Creating the MySQL-To-DBLE Job
Part 1: Full synchronization
1. Prepare the job file (note that "Gtid" is left empty here; the incremental job in Part 2 fills it in)
# cat job.json
{
"Job": {
"ID": "mysqlToDBLE",
"Datacenters": ["dc1"],
"TaskGroups": [{
"Name": "src",
"Tasks": [{
"Name": "src",
"Driver": "dtle",
"Config": {
"Gtid": "",
"ReplicateDoDb": [{
"TableSchema": "ren",
"Tables": [{
"TableName": "test"
}]
}],
"ConnectionConfig": {
"Host": "10.186.61.11",
"Port": 3306,
"User": "root",
"Password": "root"
}
}
}]
}, {
"Name": "dest",
"Tasks": [{
"Name": "dest",
"Driver": "dtle",
"Config": {
"ConnectionConfig": {
"Host": "10.186.61.10",
"Port": 8066,
"User": "ren",
"Password": "ren"
}
}
}]
}]
}
}
2. Prepare data for the full copy
- Execute on the source MySQL:
mysql> insert into test values(1,'ren','sh',now());
mysql> insert into test values(2,'jack','bj',now());
mysql> insert into test values(3,'tom','sz',now());
3. Start the sync job
# curl -XPOST "http://127.0.0.1:4646/v1/jobs" -d @job.json -s| jq
{
"EvalID": "88ab4a42-98b7-696e-0f98-08c1fe3ee4bd",
"EvalCreateIndex": 12310,
"JobModifyIndex": 12310,
"Warnings": "",
"Index": 12310,
"LastContact": 0,
"KnownLeader": false
}
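- Optionally, the submitted job can be sanity-checked through Nomad's standard HTTP API (a sketch; the job ID is the one defined in job.json above), expecting a "running" job status and "running" allocations:
# curl -s "http://127.0.0.1:4646/v1/job/mysqlToDBLE" | jq '.Status'
# curl -s "http://127.0.0.1:4646/v1/job/mysqlToDBLE/allocations" | jq '.[].ClientStatus'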
4. Check the synchronization
- Confirm that the full data sync completed
# Execute on the target DBLE
mysql> use ren;
Database changed
mysql> show tables;
+------------------+
| Tables_in_ren |
+------------------+
| test |
| gtid_executed_v4 |
+------------------+
2 rows in set (0.01 sec)
mysql> select * from test;
+----+------+------+---------------------+
| id | name | city | dt |
+----+------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
| 2 | jack | bj | 2022-03-07 06:53:41 |
| 3 | tom | sz | 2022-03-07 06:53:59 |
+----+------+------+---------------------+
3 rows in set (0.01 sec)
# Write incremental test data on the source MySQL
mysql> insert into test select null,'mike','nj',now();
Query OK, 1 row affected (0.01 sec)
Records: 1 Duplicates: 0 Warnings: 0
········
mysql> insert into test select null,'mike4','nj',now();
Query OK, 1 row affected (0.01 sec)
Records: 1 Duplicates: 0 Warnings: 0
mysql> update test set city = 'sh' where name like 'mike%';
Query OK, 4 rows affected (0.01 sec)
Rows matched: 4 Changed: 4 Warnings: 0
mysql> select * from test;
+----+-------+------+---------------------+
| id | name | city | dt |
+----+-------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
| 2 | jack | bj | 2022-03-07 06:53:41 |
| 3 | tom | sz | 2022-03-07 06:53:59 |
| 45 | mike | sh | 2022-03-07 08:03:57 |
| 46 | mike2 | sh | 2022-03-07 08:04:02 |
| 47 | mike3 | sh | 2022-03-07 08:04:05 |
| 48 | mike4 | sh | 2022-03-07 08:04:09 |
+----+-------+------+---------------------+
7 rows in set (0.01 sec)
# Check the incremental sync on the target DBLE
mysql> select * from test;
+----+-------+------+---------------------+
| id | name | city | dt |
+----+-------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
| 2 | jack | bj | 2022-03-07 06:53:41 |
| 3 | tom | sz | 2022-03-07 06:53:59 |
| 45 | mike | sh | 2022-03-07 08:03:57 |
| 46 | mike2 | sh | 2022-03-07 08:04:02 |
| 47 | mike3 | sh | 2022-03-07 08:04:05 |
| 48 | mike4 | sh | 2022-03-07 08:04:09 |
+----+-------+------+---------------------+
7 rows in set (0.04 sec)
mysql> explain select * from test where id = 1;
+---------------+----------+----------------------------------+
| SHARDING_NODE | TYPE | SQL/REF |
+---------------+----------+----------------------------------+
| dn_01 | BASE SQL | select * from test where id = 1 |
+---------------+----------+----------------------------------+
1 row in set (0.03 sec)
Part 2: Incremental synchronization from a GTID position
1. Destroy the full sync job
# cd /data/dtle/usr/bin/
# ll
total 188836
-rwxr-xr-x 1 root root 107811060 Mar 17 2020 consul
-rwxr-xr-x 1 root root 85550512 Jun 22 2021 nomad
# ./nomad job status
ID Type Priority Status Submit Date
mysqlToDBLE service 50 running 2022-03-07T15:47:31+08:00
mysqltoMysql-sync service 50 running 2022-03-03T16:06:10+08:00
# ./nomad job stop -purge mysqlToDBLE
·······
⠙ Deployment "433ed3d4" successful
·······
# ./nomad job status
ID Type Priority Status Submit Date
mysqltoMysql-sync service 50 running 2022-03-03T16:06:10+08:00
2. Record the source GTID position
# Record the GTID position on the source MySQL from which incremental replication should start
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000178
········
Executed_Gtid_Set: 442dbe92-00c3-11ec-a0cf-02000aba3d0b:1-49705119,
cdc6fb62-00c2-11ec-a259-02000aba3d0a:1-3555
1 row in set (0.01 sec)
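# Optionally, the executed GTID set can be captured non-interactively at this point so it can
# later be pasted into the "Gtid" field of the incremental job file (a sketch; uses the same
# source connection parameters as the job above):
# mysql -h 10.186.61.11 -P 3306 -uroot -proot -N -e "SELECT @@GLOBAL.gtid_executed;" | tr -d '\n'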
# Insert incremental data (simulating new business writes)
mysql> insert into test select 88,'sync01','wh',now();
mysql> insert into test select 99,'sync02','wh',now();
# Confirm on the source MySQL that the data was inserted
mysql> select * from test;
+----+--------+------+---------------------+
| id | name | city | dt |
+----+--------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
········
| 48 | mike4 | sh | 2022-03-07 08:04:09 |
| 88 | sync01 | wh | 2022-03-07 08:24:20 |
| 99 | sync02 | wh | 2022-03-07 08:24:31 |
+----+--------+------+---------------------+
9 rows in set (0.00 sec)
# 目標端 DBLE 資料因同步job已銷燬,新插入資料未同步過來
mysql> select * from test;
+----+-------+------+---------------------+
| id | name | city | dt |
+----+-------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
········
| 48 | mike4 | sh | 2022-03-07 08:04:09 |
+----+-------+------+---------------------+
7 rows in set (0.00 sec)
3. Prepare the incremental sync job file
# cat job.json
{
"Job": {
"ID": "mysqlToDBLE",
"Datacenters": ["dc1"],
"TaskGroups": [{
"Name": "src",
"Tasks": [{
"Name": "src",
"Driver": "dtle",
"Config": {
"Gtid": "442dbe92-00c3-11ec-a0cf-02000aba3d0b:1-49705119,cdc6fb62-00c2-11ec-a259-02000aba3d0a:1-3555",
"ReplicateDoDb": [{
"TableSchema": "ren",
"Tables": [{
"TableName": "test"
}]
}],
"ConnectionConfig": {
"Host": "10.186.61.11",
"Port": 3306,
"User": "root",
"Password": "root"
}
}
}]
}, {
"Name": "dest",
"Tasks": [{
"Name": "dest",
"Driver": "dtle",
"Config": {
"ConnectionConfig": {
"Host": "10.186.61.10",
"Port": 8066,
"User": "ren",
"Password": "ren"
}
}
}]
}]
}
}
4. Start the incremental sync job
# curl -XPOST "http://127.0.0.1:4646/v1/jobs" -d @job.json -s |jq
{
"EvalID": "cad6fb19-62d3-67aa-6f5c-fbb79f8016d2",
"EvalCreateIndex": 12855,
"JobModifyIndex": 12855,
"Warnings": "",
"Index": 12855,
"LastContact": 0,
"KnownLeader": false
}
5. Check the synchronization
# On the target DBLE, the data after the recorded GTID position has been synced over
mysql> select * from test;
+-----+--------+------+---------------------+
| id | name | city | dt |
+-----+--------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
| 48 | mike4 | sh | 2022-03-07 08:04:09 |
·········
| 88 | sync01 | wh | 2022-03-07 08:24:20 |
| 99 | sync02 | wh | 2022-03-07 08:24:31 |
+-----+--------+------+---------------------+
11 rows in set (0.06 sec)
6. Other DML and DDL synchronization
- Verify how other UPDATE/DELETE statements and DDL statements are synchronized
# Execute on the source MySQL
mysql> delete from test where id >= 100;
Query OK, 2 rows affected (0.01 sec)
mysql> delete from test where id > 3;
Query OK, 6 rows affected (0.01 sec)
mysql> update test set name = 'actionsky' where id = 3;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
# Check the sync on the target DBLE
mysql> select * from test;
+----+-----------+------+---------------------+
| id | name | city | dt |
+----+-----------+------+---------------------+
| 1 | ren | sh | 2022-03-07 06:53:30 |
| 2 | jack | bj | 2022-03-07 06:53:41 |
| 3 | actionsky | sz | 2022-03-07 06:53:59 |
+----+-----------+------+---------------------+
3 rows in set (0.01 sec)
# Execute DDL on the source MySQL
mysql> alter table test add column info varchar(20) default 'hello';
mysql> update test set info = 'thanks' where id = 3;
mysql> alter table test add index idx_info(`info`);
# DDL is synchronized to the target DBLE (for brevity only part is shown; in practice any DDL statement that DBLE supports syncs successfully)
mysql> select * from test;
+----+-----------+------+---------------------+--------+
| id | name | city | dt | info |
+----+-----------+------+---------------------+--------+
| 1 | ren | sh | 2022-03-07 06:53:30 | hello |
| 2 | jack | bj | 2022-03-07 06:53:41 | hello |
| 3 | actionsky | sz | 2022-03-07 06:53:59 | thanks |
+----+-----------+------+---------------------+--------+
3 rows in set (0.02 sec)
mysql> show create table test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(20) COLLATE utf8mb4_bin DEFAULT NULL,
`city` varchar(20) COLLATE utf8mb4_bin DEFAULT NULL,
`dt` datetime DEFAULT CURRENT_TIMESTAMP,
`info` varchar(20) COLLATE utf8mb4_bin DEFAULT 'hello',
PRIMARY KEY (`id`),
KEY `idx_ctiy` (`city`),
KEY `idx_info` (`info`)
) ENGINE=InnoDB AUTO_INCREMENT=89 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
1 row in set (0.01 sec)
Notes on Configuring MySQL-To-DBLE
1. Connection check failure
- Symptom: during DTLE job configuration, the "connection check" fails and the nomad log reports ERROR 1064 (HY000): java.sql.SQLSyntaxErrorException: illegal value[TRUE]
- Cause: the probe statement set autocommit=true issued by DTLE is not supported by some DBLE versions
- Fix: upgrade DBLE to version 3.20.10.6 or later (a reproduction sketch follows after this list)
2. Sync fails after the job starts, reporting that 'dtle' does not exist
- Symptom: the DTLE sync job fails after starting, and the nomad log shows Can't create database 'dtle' that doesn't exists.
Cause:
- For DTLE to MySQL, this error does not occur
- For DTLE to DBLE, schemas are created in the DBLE middleware differently than in plain MySQL, so this CREATE syntax is not supported
Fix:
- Configure the extra schema/table in DBLE in advance; see the relevant parts of sharding.xml and user.xml shown earlier
3. Sync fails after the job starts, reporting 'Data too long'
- Symptom: the DTLE sync job fails after starting, and the nomad log shows "applier error/restart: insert gno: Error 1406: Data too long for column 'source_uuid' at row 1"
Cause:
- In the gtid_executed_v4 table that DTLE creates in DBLE, the BINARY type of the source_uuid column is too short
- The same error can be found in the DBLE middleware log (core/log/dble.log): "execute sql err : errNo:1406 Data too long for column 'source_uuid' at row 1"
Fix:
- Alter the column in DBLE:
- alter table gtid_executed_v4 modify column source_uuid binary(60);
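- For issue 1 above, a minimal reproduction sketch (assuming the DBLE service port and the user from the dest ConnectionConfig of the job; on DBLE 3.20.10.6 and later the statement should be accepted):
# mysql -h 10.186.61.10 -P 8066 -uren -pren -e "set autocommit=true;"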
Conclusion
DTLE's current feature set can basically satisfy the need for real-time data synchronization from MySQL to DBLE. Note, however, that the full synchronization approach described in this article is not recommended:
- For a production rollout, since the legacy MySQL database is typically large, first take a full logical backup of the data (recording the GTID position), split it with DBLE's bundled split tool and import it, and then use DTLE's GTID-based incremental synchronization to keep the data in sync (a sketch follows after this list)
- Before creating a To-DBLE job with DTLE, pay attention to the notes above; in particular, prepare the sharding.xml and user.xml files in advance and create DTLE's metadata table gtid_executed_v4.
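- A rough sketch of that recommended production flow (not verified here; paths and credentials are placeholders, and the exact split tool invocation is omitted, see the DBLE documentation):
# 1) take a consistent logical backup of the source schema and record the GTID set it covers
# mysqldump -h 10.186.61.11 -P 3306 -uroot -p --single-transaction --set-gtid-purged=ON --databases ren > /data/backup/ren.sql
# grep GTID_PURGED /data/backup/ren.sql    # the dump header carries SET @@GLOBAL.GTID_PURGED='...'
# 2) split the dump with DBLE's split tool and import the pieces into the backend shards
# 3) put the recorded GTID set into the "Gtid" field of job.json and submit the incremental job as in Part 2
# curl -XPOST "http://127.0.0.1:4646/v1/jobs" -d @job.json -s | jq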