Exporting and importing data with Navicat, INTO OUTFILE, the mysql client, mysqldump, mysqlpump, and mydumper
-- Total data volume is about 70G; the table has 1,632,840,301 rows.

1. Navicat
Navicat exported about 26G and then failed with an error.
Throughput: 505,308,501 rows / 41,421 s ≈ 12,200 rows/s, or about 1.7M/s.

2. Export with the mysql client
Took about 3 hours 30 minutes (≈12,600 s); throughput ≈ 1,632,840,301 rows / 12,600 s ≈ 129,590 rows/s, or about 5.7M/s.

[root@OCPLHR data]# date
Fri Dec 13 09:03:47 CST 2019
[root@OCPLHR data]# mysql -h192.168.1.35 -uroot -pLHR -D business_db_jingbain --execute="select * from sensor;" -q > sensor_mysql.txt
[root@OCPLHR data]# date
Fri Dec 13 12:32:32 CST 2019
[root@OCPLHR data]# ll
total 71000516
-rw-r--r-- 1 root root 72633519676 Dec 13 12:32 sensor.txt
[root@OCPLHR data]# ll -h
total 68G
-rw-r--r-- 1 root root 68G Dec 13 12:32 sensor.txt

Note: for a command such as mysql -h192.168.1.1 -uroot -p123 -D dbxx --execute="select * from XXT;" > sensor111.txt, where table XXT is about 70G, you must add the -q option. Without it the client caches the entire result set: it first exhausts memory, then swap, until the OS hangs.

-q, --quick         Don't cache result, print it row by row. This may slow down the server if the output is suspended. Doesn't use history file.

3. Export with mysqldump
Took about 200 minutes, roughly the same performance as the mysql client export.

/var/lib/mysql57/mysql5719/bin/mysqldump -h192.168.1.35 -uroot -pLHR --databases business_db_jingbain --set-gtid-purged=OFF --tables sensor > sensor_mysqldump.sql

[root@LHRDB data]# time /var/lib/mysql57/mysql5719/bin/mysqldump -h192.168.1.35 -uroot -pLHR --databases business_db_jingbain --set-gtid-purged=OFF --tables sensor > sensor_mysqldump.dmp
mysqldump: [Warning] Using a password on the command line interface can be insecure.

real    200m8.739s
user    38m21.922s
sys     12m21.432s
[root@LHRDB data]# ll
total 77311652
-rw-r--r-- 1 root root 79167122033 Dec 13 18:00 sensor_mysqldump.dmp
[root@LHRDB data]# ll -h
total 74G
-rw-r--r-- 1 root root 74G Dec 13 18:00 sensor_mysqldump.dmp

4. Export with mysqlpump

mysqlpump -h192.168.1.35 -uroot -pLHR database business_db_jingbain sensor --set-gtid-purged=OFF > sensor_mysqlpump.sql
/var/lib/mysql57/mysql5719/bin/mysqldump -S/var/lib/mysql57/mysql5719/data57193310/mysql57193310.sock business_db_jingbain sensor --set-gtid-purged=OFF > sensor_mysqlpump.sql
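The title also mentions SELECT ... INTO OUTFILE, which none of the runs above use. A minimal sketch of that server-side path follows; the /tmp paths are illustrative, and the server's secure_file_priv setting must allow writing to the chosen directory:

-- Export: the file is written on the MySQL server host, not on the client.
SELECT * FROM business_db_jingbain.sensor
  INTO OUTFILE '/tmp/sensor.tsv'
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';

-- Load the same tab-separated file back into a table:
LOAD DATA INFILE '/tmp/sensor.tsv'
  INTO TABLE business_db_jingbain.sensor
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';

Because the data never crosses the client connection, this is usually the fastest plain-text export, but it requires filesystem access to the server host to retrieve the file.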
5. Export with mydumper
Ran with 12 threads (-t 12); took about 2 hours, i.e. roughly 1,632,840,301 rows / 7,200 s ≈ 226,800 rows/s, or about 21 MB/s.

mydumper -h 192.168.1.35 -u root -p LHR -B business_db_jingbain -T sensor -l 14400 -r 150000000 -t 12 -k -o /data/
myloader -h localhost -u root -p lhr -B business_db_jingbain -o sensor -t 12 -d /data/ -v 3
mydumper -S /var/lib/mysql57/mysql5719/data57193310/mysql57193310.sock -B business_db_jingbain -T sensor -l 14400 -r 60000000 -t 12 -k -o /data/datatmp/

[root@LHRDB data]# mydumper -h 192.168.1.35 -u root -p LHR -B business_db_jingbain -T sensor -l 14400 -r 150000000 -t 12 -k -o /data/
** (mydumper:28001): WARNING **: Executing in no-locks mode, snapshot will notbe consistent
[root@LHRDB data]# ll -h
total 74G
-rw-r--r-- 1 root root   82 Dec 16 12:39 business_db_jingbain-schema-create.sql
-rw-r--r-- 1 root root  25M Dec 16 12:40 business_db_jingbain.sensor.00000.sql
-rw-r--r-- 1 root root  24M Dec 16 12:40 business_db_jingbain.sensor.00001.sql
-rw-r--r-- 1 root root  21M Dec 16 12:40 business_db_jingbain.sensor.00002.sql
-rw-r--r-- 1 root root  23M Dec 16 12:40 business_db_jingbain.sensor.00003.sql
-rw-r--r-- 1 root root  24M Dec 16 12:40 business_db_jingbain.sensor.00004.sql
-rw-r--r-- 1 root root  20M Dec 16 12:40 business_db_jingbain.sensor.00005.sql
-rw-r--r-- 1 root root  21M Dec 16 12:40 business_db_jingbain.sensor.00006.sql
-rw-r--r-- 1 root root  22M Dec 16 12:40 business_db_jingbain.sensor.00007.sql
-rw-r--r-- 1 root root  21M Dec 16 12:40 business_db_jingbain.sensor.00008.sql
-rw-r--r-- 1 root root  21M Dec 16 12:40 business_db_jingbain.sensor.00009.sql
-rw-r--r-- 1 root root  467 Dec 16 12:39 business_db_jingbain.sensor-schema.sql
-rw-r--r-- 1 root root  150 Dec 16 12:39 metadata.partial
[root@LHRDB data]# ll -h
total 134G
-rw-r--r-- 1 root root   82 Dec 16 12:39 business_db_jingbain-schema-create.sql
-rw-r--r-- 1 root root 5.9G Dec 16 14:06 business_db_jingbain.sensor.00000.sql
-rw-r--r-- 1 root root 5.6G Dec 16 14:06 business_db_jingbain.sensor.00001.sql
-rw-r--r-- 1 root root 5.9G Dec 16 14:06 business_db_jingbain.sensor.00002.sql
-rw-r--r-- 1 root root 6.1G Dec 16 14:06 business_db_jingbain.sensor.00003.sql
-rw-r--r-- 1 root root 6.2G Dec 16 14:06 business_db_jingbain.sensor.00004.sql
-rw-r--r-- 1 root root 6.0G Dec 16 14:06 business_db_jingbain.sensor.00005.sql
-rw-r--r-- 1 root root 5.8G Dec 16 14:06 business_db_jingbain.sensor.00006.sql
-rw-r--r-- 1 root root 6.6G Dec 16 14:06 business_db_jingbain.sensor.00007.sql
-rw-r--r-- 1 root root 6.0G Dec 16 14:06 business_db_jingbain.sensor.00008.sql
-rw-r--r-- 1 root root 5.8G Dec 16 14:06 business_db_jingbain.sensor.00009.sql
-rw-r--r-- 1 root root  467 Dec 16 12:39 business_db_jingbain.sensor-schema.sql
-rw-r--r-- 1 root root  150 Dec 16 12:39 metadata
[root@LHRDB data]# ll -h
total 149G
-rw-r--r-- 1 root root   82 Dec 16 12:39 business_db_jingbain-schema-create.sql
-rw-r--r-- 1 root root 7.5G Dec 16 14:30 business_db_jingbain.sensor.00000.sql
-rw-r--r-- 1 root root 7.5G Dec 16 14:32 business_db_jingbain.sensor.00001.sql
-rw-r--r-- 1 root root 7.5G Dec 16 14:30 business_db_jingbain.sensor.00002.sql
-rw-r--r-- 1 root root 7.5G Dec 16 14:28 business_db_jingbain.sensor.00003.sql
-rw-r--r-- 1 root root 7.5G Dec 16 14:27 business_db_jingbain.sensor.00004.sql
-rw-r--r-- 1 root root 7.5G Dec 16 14:30 business_db_jingbain.sensor.00005.sql
-rw-r--r-- 1 root root 7.6G Dec 16 14:32 business_db_jingbain.sensor.00006.sql
-rw-r--r-- 1 root root 7.7G Dec 16 14:23 business_db_jingbain.sensor.00007.sql
-rw-r--r-- 1 root root 7.7G Dec 16 14:30 business_db_jingbain.sensor.00008.sql
-rw-r--r-- 1 root root 7.7G Dec 16 14:32 business_db_jingbain.sensor.00009.sql
-rw-r--r-- 1 root root  467 Dec 16 12:39 business_db_jingbain.sensor-schema.sql
-rw-r--r-- 1 root root  188 Dec 16 14:32 metadata
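The plain-SQL dump above grows to about 149G, roughly twice the size of the 70G table, so a compressed run may be worth trying. A sketch, assuming a mydumper build that supports the -c/--compress option (the output directory is illustrative):

mydumper -h 192.168.1.35 -u root -p LHR -B business_db_jingbain -T sensor -r 150000000 -t 12 -c -k -o /data/compressed/

myloader can normally read the compressed chunks back directly, at the cost of extra CPU during both the dump and the load.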
------------ sqlldr import (loading the exported text file into Oracle)

CREATE TABLE xxt.sensor
(
  id          int,
  record_date date,
  value       varchar2(300),
  sid         int,
  gid         int
) tablespace xxt nologging;

Control file (xxt.ctl):

options(SKIP=1,errors=1000)
--options(SKIP=1,ROWS=1000,errors=1000)
UNRECOVERABLE
load data
LENGTH CHARACTER
infile '/data/sensor_mysql.txt'
APPEND into table xxt.sensor
fields terminated by x'09'
trailing nullcols
(
  id,
  record_date,
  value "decode(:value,'NULL','')",
  sid,
  gid
)

sqlldr xxt/lhr control=xxt.ctl parallel=y log='log.txt' bad='bad.bad' direct=true readsize=510430400 streamsize=510430400 multithreading=y

----------- Result

Bind array size not used in direct path.
Column array  rows :    5000
Stream buffer bytes:  16777216
Read   buffer bytes: 510430400

Total logical records skipped:                   1
Total logical records read:             1632840300
Total logical records rejected:                  0
Total logical records discarded:                 0
Total stream buffers loaded by SQL*Loader main thread:   326640
Total stream buffers loaded by SQL*Loader load thread:        0

Run began on Mon Dec 16 11:32:18 2019
Run ended on Mon Dec 16 13:05:23 2019

Elapsed time was:     01:33:04.33
CPU time was:         01:19:30.76

-------------------
select count(*) from sensor;
-- 653,136,040 rows, 44G; the load took 37,181 s; the csv file is about 28G.

------------- mysqldump

mysqldump -uroot -plhr -h192.168.1.35 --single-transaction --hex-blob --set-gtid-purged=OFF --databases jl_hotel > jl_hotel.sql

-- http://blog.itpub.net/26736162/viewspace-2686075/
mysqldump --databases jl_hotel --single-transaction --order-by-primary --hex-blob --no-data --routines --events --set-gtid-purged=OFF -uroot -plhr -h192.168.1.35 -P3306 | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' -e 's/DEFINER[ ]*=.*FUNCTION/FUNCTION/' -e 's/DEFINER[ ]*=.*PROCEDURE/PROCEDURE/' -e 's/DEFINER[ ]*=.*TRIGGER/TRIGGER/' -e 's/DEFINER[ ]*=.*EVENT/EVENT/' -e 's/DEFINER[ ]*=.*SQL SECURITY DEFINER/SQL SECURITY DEFINER/' > jl_hotel.sql

mysqldump --databases jl_hotel --single-transaction --hex-blob --set-gtid-purged=OFF -uroot -plhr -h192.168.1.35 -P3306 > jl_hotel_data.sql
mysqldump --databases jl_hotel -uroot -plhr -h192.168.1.35 -P3306 > jl_hotel_data.sql

mysql -f -h192.168.1.35 -P 3306 -u root -plhr < jl_hotel.sql

[root@lhrcentos76 mysql]# mysqldump --databases jl_hotel --single-transaction --hex-blob --set-gtid-purged=OFF --no-create-info --skip-triggers -uroot -plhr -h192.168.1.35 -P3306 -r jl_hotel_data.sql
mysqldump: [Warning] Using a password on the command line interface can be insecure.
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `sys_api_log` at row: 112614

The dump was slow and failed with: mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `sys_api_log` at row: 112614

--- Settings on the source side before exporting
set global wait_timeout=28800000;
set global net_read_timeout=28800;
set global net_write_timeout=28800;
set global max_allowed_packet=2147483648;

--- Settings on the target side before importing
set sql_log_bin=0;
set global innodb_flush_log_at_trx_commit = 2;
set global sync_binlog = 20000;
set global max_allowed_packet=100000000;
set global net_buffer_length=100000;
set global interactive_timeout=28800000;
set global wait_timeout=28800000;

mysqldump --single-transaction --hex-blob --set-gtid-purged=OFF -uroot -plhr -h192.168.1.35 -P3350 --databases sbtest > sbtest_data.sql
mysql -f -h192.168.1.35 -P3340 -u root -plhr < sbtest_data.sql
mysql -f -uroot -plhr -h192.168.1.35 -P3309 -D ehr_dev < C:\Users\lhrxxt\Desktop\ehr_dev_datafull.sql

--- https://support.huaweicloud.com/bestpractice-rds/rds_02_0010.html
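Besides raising the server-side globals shown above, the same limits can also be raised for a single dump from the client side to work around Error 2013. A sketch with illustrative values (the flag values are assumptions, not taken from the runs above):

mysqldump -uroot -plhr -h192.168.1.35 -P3306 \
  --single-transaction --hex-blob --set-gtid-purged=OFF \
  --max-allowed-packet=1073741824 --net-buffer-length=100000 \
  --databases jl_hotel > jl_hotel_data.sql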
------------- mysqldump
--- Function
set GLOBAL log_bin_trust_function_creators=on;

DROP FUNCTION if exists rand_string;
delimiter //
CREATE DEFINER=`root`@`%` FUNCTION lhrdb.`rand_string`(n INT) RETURNS varchar(255) CHARSET utf8
BEGIN
  DECLARE chars_str varchar(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
  DECLARE return_str varchar(255) DEFAULT '';
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    SET return_str = concat(return_str, substring(chars_str, FLOOR(1 + RAND()*62), 1));
    SET i = i + 1;
  END WHILE;
  RETURN return_str;
END;//
delimiter ;

select lhrdb.rand_string(5);

-- Stored procedures
delimiter //
DROP PROCEDURE IF EXISTS mock_isam//
CREATE PROCEDURE mock_isam (IN rowCount int)
BEGIN
  DECLARE insertCount int;
  SET insertCount = 0;
  DROP TABLE IF EXISTS `isam_table`;  /* drop any previous copy, because the table is recreated below */
  /* adjust the CREATE TABLE statement below to build the table you need */
  CREATE TABLE `isam_table` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(20) NOT NULL,
    `age` tinyint(3) NOT NULL,
    PRIMARY KEY (`id`)
  ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
  START TRANSACTION;
  loopHandler : LOOP
    /* insert one row of random data */
    INSERT INTO `isam_table`(`name`,`age`) VALUES( rand_string(20), CEIL(RAND() * 110) );
    SET insertCount = insertCount + 1;
    IF (insertCount >= rowCount) THEN
      LEAVE loopHandler;
    END IF;
  END LOOP loopHandler;
  COMMIT;
END //
delimiter ;

delimiter //
DROP PROCEDURE IF EXISTS mock_innodb//
CREATE PROCEDURE mock_innodb (IN rowCount int)
BEGIN
  DECLARE insertCount int;
  SET insertCount = 0;
  DROP TABLE IF EXISTS `innodb_table`;  /* drop any previous copy, because the table is recreated below */
  /* adjust the CREATE TABLE statement below to build the table you need */
  CREATE TABLE `innodb_table` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(20) NOT NULL,
    `age` tinyint(3) NOT NULL,
    PRIMARY KEY (`id`)
  ) ENGINE=innodb DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
  START TRANSACTION;
  loopHandler : LOOP
    /* insert one row of random data */
    INSERT INTO `innodb_table`(`name`,`age`) VALUES( rand_string(20), CEIL(RAND() * 110) );
    SET insertCount = insertCount + 1;
    IF (insertCount >= rowCount) THEN
      LEAVE loopHandler;
    END IF;
  END LOOP loopHandler;
  COMMIT;
END //
delimiter ;

call mock_innodb(20000);   -- 66s
call mock_isam(20000);     -- 108s
select * from innodb_table;
select * from isam_table;

select * from mysql.proc where db = 'lhrdb' and `type` = 'PROCEDURE';  -- stored procedures
select * from mysql.proc where db = 'lhrdb' and `type` = 'FUNCTION';   -- functions

-- Trigger
create table lhrdb.time (time varchar(100));
CREATE TRIGGER lhrdb.trig1 AFTER INSERT ON isam_table FOR EACH ROW INSERT INTO time VALUES(NOW());
SHOW TRIGGERS from lhrdb;
select * from information_schema.`TRIGGERS` where trigger_schema='lhrdb';

-- Event
drop table if exists lhrdb.events_list;
create table lhrdb.events_list(id int PRIMARY key auto_increment, event_name varchar(20) not null, event_started timestamp not null);
drop event lhrdb.event_minute;
create event lhrdb.event_minute on schedule every 10 minute
  do insert into lhrdb.events_list(event_name,event_started) values('event_now', now());
set global event_scheduler =1;
show processlist;
show events;
select * from information_schema.`EVENTS` where event_schema='lhrdb';
select * from events_list;

-- View
create or replace view vw_pro_tri_fun_event_lhr as
select 'EVENTS' type, event_name NAME, DEFINER from information_schema.`EVENTS` where event_schema='lhrdb'
union all
select 'TRIGGER', trigger_name, DEFINER from information_schema.`TRIGGERS` where trigger_schema='lhrdb'
union all
select 'PROCEDURE', NAME, DEFINER from mysql.proc where db = 'lhrdb' and `type` = 'PROCEDURE'
union all
select 'FUNCTION', NAME, DEFINER from mysql.proc where db = 'lhrdb' and `type` = 'FUNCTION'
union all
select 'VIEW', TABLE_name, DEFINER from information_schema.VIEWS where TABLE_SCHEMA = 'lhrdb';

select * from vw_pro_tri_fun_event_lhr;
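With these objects in place, one way to check how they travel in a dump is to export only the stored programs, without table DDL or data. A sketch; this flag combination is an assumption, so verify the generated file on your version:

mysqldump -uroot -plhr -h192.168.1.35 -P3306 \
  --no-data --no-create-info --routines --events --triggers \
  --set-gtid-purged=OFF lhrdb > lhrdb_programs.sql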
------------- mysqldump

--- Strip "DEFINER=`root`@`xxx`" from the dump, to avoid the RDS error: [Err] 1227 - Access denied; you need (at least one of) the SUPER privilege(s) for this operation

mysqldump --databases lhrdb --single-transaction --order-by-primary --hex-blob --no-data --routines --events --set-gtid-purged=OFF -u root -plhr@zr123 -h 121.36.12.84 -P 3306 | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' -e 's/DEFINER[ ]*=.*FUNCTION/FUNCTION/' -e 's/DEFINER[ ]*=.*PROCEDURE/PROCEDURE/' -e 's/DEFINER[ ]*=.*TRIGGER/TRIGGER/' -e 's/DEFINER[ ]*=.*EVENT/EVENT/' -e 's/DEFINER[ ]*=.*SQL SECURITY DEFINER/SQL SECURITY DEFINER/' > d:\C.sql

cat a.sql | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' -e 's/DEFINER[ ]*=.*FUNCTION/FUNCTION/' -e 's/DEFINER[ ]*=.*PROCEDURE/PROCEDURE/' -e 's/DEFINER[ ]*=.*TRIGGER/TRIGGER/' -e 's/DEFINER[ ]*=.*EVENT/EVENT/' -e 's/DEFINER[ ]*=.*SQL SECURITY DEFINER/SQL SECURITY DEFINER/' > b.sql

------------- mysqldump

-- Dump table structure and data in one file
mysqldump -uroot -plhr -h192.168.1.35 -P3306 --single-transaction --hex-blob --routines --events --triggers --set-gtid-purged=OFF --databases jl_hotel > jl_hotel.sql

-- Dump the structure only
mysqldump -uroot -plhr -h192.168.1.35 -P3306 --databases jl_hotel --single-transaction --hex-blob --no-data --routines --events --triggers --set-gtid-purged=OFF | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' -e 's/DEFINER[ ]*=.*FUNCTION/FUNCTION/' -e 's/DEFINER[ ]*=.*PROCEDURE/PROCEDURE/' -e 's/DEFINER[ ]*=.*TRIGGER/TRIGGER/' -e 's/DEFINER[ ]*=.*EVENT/EVENT/' -e 's/DEFINER[ ]*=.*SQL SECURITY DEFINER/SQL SECURITY DEFINER/' > jl_hotel_ddl.sql

-- Dump the data only
mysqldump -uroot -plhr -h192.168.1.35 -P3306 --databases jl_hotel --single-transaction --hex-blob --no-create-info --skip-triggers --set-gtid-purged=OFF > jl_hotel_data.sql

-- Import the structure and the data
mysql -f -h192.168.1.35 -P 3306 -u root -plhr < jl_hotel_ddl.sql
mysql -f -h192.168.1.35 -P 3306 -u root -plhr -D ehr_dev < jl_hotel_data.sql

--- Settings on the source side before exporting
set global wait_timeout=28800000;
set global net_read_timeout=28800;
set global net_write_timeout=28800;
set global max_allowed_packet=2147483648;

--- Settings on the target side before importing
set sql_log_bin=0;
set global innodb_flush_log_at_trx_commit = 2;
set global sync_binlog = 20000;
set global max_allowed_packet=100000000;
set global net_buffer_length=100000;
set global interactive_timeout=28800000;
set global wait_timeout=28800000;

1. If you dump a single table and the table has triggers, the triggers are exported along with it automatically (mysqldump's --triggers option is enabled by default).
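Note that sql_log_bin is session-scoped, so it only takes effect if it is set in the same session that runs the import. One way to do that when importing a dump file (a sketch reusing the hosts and credentials above):

{ echo "SET sql_log_bin=0;"; cat jl_hotel_data.sql; } | mysql -f -h192.168.1.35 -P 3306 -u root -plhr -D ehr_dev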