Using canal.adapter to Sync Data to MySQL
This walkthrough uses canal to sync the incremental data of a single table, from ming.test01 to tt.test01.
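The table's DDL is not shown; judging from the test output below (an integer primary key c1, columns c2 and c3, and a datetime column c4 populated with now()), a plausible definition would be:

CREATE TABLE test01 (
  c1 INT NOT NULL PRIMARY KEY, -- used as targetPk in the adapter config
  c2 INT,
  c3 VARCHAR(20),              -- the adapter log shows c3 as a string value
  c4 DATETIME                  -- filled with now() in the tests
);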
1. Install canal.adapter
Download the canal.adapter-1.1.4.tar.gz package from GitHub and extract it.
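A typical download-and-extract sequence (the release URL below follows canal's usual GitHub naming and is given for illustration) might be:

wget https://github.com/alibaba/canal/releases/download/canal-1.1.4/canal.adapter-1.1.4.tar.gz
mkdir -p /opt/canal-adapter
tar -zxvf canal.adapter-1.1.4.tar.gz -C /opt/canal-adapter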
[root@mdb01 canal-adapter]# ll
total 8
drwxr-xr-x 2 root root 95 Apr 6 14:53 bin
drwxrwxrwx 6 root root 119 Apr 6 14:53 conf
drwxr-xr-x 2 root root 4096 Apr 6 14:29 lib
drwxrwxrwx 3 root root 21 Apr 6 14:49 logs
drwxrwxrwx 2 root root 253 Sep 2 2019 plugin
2. Configure and start canal.adapter
Edit conf/application.yml:
server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp # kafka rocketMQ
  canalServerHost: 192.168.61.16:11111 # canal server address
  batchSize: 500
  syncBatchSize: 1000
  retries: 0
  timeout:
  accessKey:
  secretKey:
  srcDataSources: # source database connection info; the source db is ming
    defaultDS:
      url: jdbc:mysql://192.168.61.16:3306/ming?useUnicode=true
      username: canal
      password: oracle
  canalAdapters:
  - instance: example # canal instance name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
      - name: rdb
        key: mysql1
        properties: # target database connection info; the target db is tt
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://192.168.61.16:3306/tt?useUnicode=true
          jdbc.username: root
          jdbc.password: oracle
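The defaultDS entry above assumes a MySQL account canal/oracle already exists on the source instance; canal's quick-start docs suggest grants along these lines (adjust user, host, and password to your environment):

CREATE USER 'canal'@'%' IDENTIFIED BY 'oracle';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;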
Edit conf/rdb/mytest_user.yml:
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1
concurrent: true
dbMapping:
  database: ming
  table: test01
  targetTable: tt.test01
  targetPk:
    c1: c1
  mapAll: true
  commitBatch: 3000 # batch commit size
If the two tables have identical structures, simply set mapAll to true.
If the structures differ, use targetColumns to define the mapping, in the form target column name: source column name (see section 5).
After the configuration is complete, start the adapter:
sh bin/startup.sh
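To check that the adapter started cleanly, watch its log; the path below is the usual location under the extracted directory, adjust if yours differs:

tail -f logs/adapter/adapter.log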
3. Configure canal
Edit the canal instance's instance.properties, then restart canal:
# table regex
canal.instance.filter.regex=ming.test01
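The filter value is a regular expression over database.table names. Some other common patterns, per canal's documentation, for comparison:

# all tables in all databases
canal.instance.filter.regex=.*\\..*
# all tables under the ming database
canal.instance.filter.regex=ming\\..*
# several specific tables, comma-separated
canal.instance.filter.regex=ming.test01,ming.test02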
4. Test
On the source side:
mysql> insert into test01 select 13,13,13,now();
Query OK, 1 row affected (0.30 sec)
Records: 1 Duplicates: 0 Warnings: 0
mysql> update test01 set c4=now() where c1=13;
Query OK, 1 row affected (0.03 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> delete from test01 where c1=13;
Query OK, 1 row affected (0.03 sec)
On the target side (queried after each of the three statements above: the row appears, its c4 timestamp changes, then it disappears):
mysql> select * from test01;
+----+------+------+---------------------+
| c1 | c2 | c3 | c4 |
+----+------+------+---------------------+
| 13 | 13 | 13 | 2021-04-06 14:54:11 |
+----+------+------+---------------------+
1 row in set (0.00 sec)
mysql> select * from test01;
+----+------+------+---------------------+
| c1 | c2 | c3 | c4 |
+----+------+------+---------------------+
| 13 | 13 | 13 | 2021-04-06 14:56:03 |
+----+------+------+---------------------+
1 row in set (0.00 sec)
mysql> select * from test01;
Empty set (0.00 sec)
The corresponding adapter log entries:
2021-04-06 14:54:12.145 [pool-8-thread-1] INFO c.a.o.canal.client.adapter.logger.LoggerAdapterExample - DML: {"data":[{"c1":13,"c2":13,"c3":"13","c4":1617692051000}],"database":"ming","destination":"example","es":1617692051000,"groupId":null,"isDdl":false,"old":null,"pkNames":["c1"],"sql":"","table":"test01","ts":1617692051862,"type":"INSERT"}
2021-04-06 14:54:12.244 [pool-3-thread-1] DEBUG c.a.o.canal.client.adapter.rdb.service.RdbSyncService - DML: {"data":{"c1":13,"c2":13,"c3":"13","c4":1617692051000},"database":"ming","destination":"example","old":null,"table":"test01","type":"INSERT"}
2021-04-06 14:56:03.453 [pool-8-thread-1] INFO c.a.o.canal.client.adapter.logger.LoggerAdapterExample - DML: {"data":[{"c1":13,"c2":13,"c3":"13","c4":1617692163000}],"database":"ming","destination":"example","es":1617692163000,"groupId":null,"isDdl":false,"old":[{"c4":1617692051000}],"pkNames":["c1"],"sql":"","table":"test01","ts":1617692163452,"type":"UPDATE"}
2021-04-06 14:56:03.468 [pool-3-thread-1] DEBUG c.a.o.canal.client.adapter.rdb.service.RdbSyncService - DML: {"data":{"c1":13,"c2":13,"c3":"13","c4":1617692163000},"database":"ming","destination":"example","old":{"c4":1617692051000},"table":"test01","type":"UPDATE"}
2021-04-06 14:59:07.631 [pool-8-thread-1] INFO c.a.o.canal.client.adapter.logger.LoggerAdapterExample - DML: {"data":[{"c1":13,"c2":13,"c3":"13","c4":1617692163000}],"database":"ming","destination":"example","es":1617692347000,"groupId":null,"isDdl":false,"old":null,"pkNames":["c1"],"sql":"","table":"test01","ts":1617692347629,"type":"DELETE"}
5. Column mapping
# mapAll: true
targetColumns:
  c1: c1
  c2: c2
  c3: c3
  c5: c4
Since c1 is already specified under targetPk, its mapping can be omitted. Every other column must be listed even if its name is unchanged (e.g. c2 and c3); otherwise no data is captured for those columns.
Conversely, if you only want to sync a subset of columns, just list the mappings for those columns.
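The c5: c4 line implies that in this variant the target table names its datetime column c5 rather than c4; a matching, purely illustrative target table would be:

CREATE TABLE tt.test01 (
  c1 INT NOT NULL PRIMARY KEY,
  c2 INT,
  c3 VARCHAR(20),
  c5 DATETIME -- receives the source table's c4 values
);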
Restart the adapter:
sh bin/restart.sh
Other notes
The groupId in canal-adapter/conf/rdb/mytest_user.yml must match the one in canal-adapter/conf/application.yml.
To sync additional tables or extra data sources, just add another groupId in both canal-adapter/conf/rdb/mytest_user.yml and canal-adapter/conf/application.yml, as sketched below.
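A minimal sketch of a second group in application.yml (the g2/mysql2 names and the second JDBC URL are illustrative, not from the original setup):

  canalAdapters:
  - instance: example
    groups:
    - groupId: g1
      outerAdapters:
      - name: rdb
        key: mysql1
        properties:
          jdbc.url: jdbc:mysql://192.168.61.16:3306/tt?useUnicode=true
          jdbc.username: root
          jdbc.password: oracle
    - groupId: g2 # second group, pointing at another target
      outerAdapters:
      - name: rdb
        key: mysql2
        properties:
          jdbc.url: jdbc:mysql://192.168.61.16:3306/tt2?useUnicode=true
          jdbc.username: root
          jdbc.password: oracle

The corresponding new file under conf/rdb/ would then set groupId: g2 and outerAdapterKey: mysql2.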