SQOOP Installation and Usage
- SQOOP Installation and Usage
- SQOOP Installation
- 1. Upload and extract
- 2. Edit the configuration file
- 3. Update environment variables
- 4. Add the MySQL JDBC driver
- 5. Test
- Prepare MySQL data
- Log in to MySQL
- Create the student database
- Switch to the database and import data
- Another way to import data
- Run the SQL file with Navicat
- Export a MySQL database
- import
- MySQLToHDFS
- Write the script and save it as MySQLToHDFS.conf
- Run the script
- Notes:
- MySQLToHive
- Write the script and save it as MySQLToHIVE.conf
- Create the testsqoop database in Hive
- Run the script
- --direct
- Using the -e option
- MySQLToHBase
- Write the script and save it as MySQLToHBase.conf
- Create the target table stu in HBase
- Run the script
- MySQLToHDFS
- export
- HDFSToMySQL
- Write the script and save it as HDFSToMySQL.conf
- Clear the target MySQL table first, otherwise primary key conflicts will occur
- Run the script
- View sqoop help
- HDFSToMySQL
- Partitioned import
- Incremental import
SQOOP Installation
1. Upload and extract
tar -zxvf sqoop-1.4.7_hadoop3.X.tar.gz
2. Edit the configuration file
# Switch to the sqoop configuration directory
cd /usr/local/soft/sqoop-1.4.7/conf
# Copy the configuration template and rename it
cp sqoop-env-template.sh sqoop-env.sh
# vim sqoop-env.sh -- edit the configuration file and add the following
export HADOOP_COMMON_HOME=/usr/local/soft/hadoop-3.1.2
export HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.1.2/share/hadoop/mapreduce
export HBASE_HOME=/usr/local/soft/hbase-2.2.7
export HIVE_HOME=/usr/local/soft/hive-3.1.2
export ZOOCFGDIR=/usr/local/soft/zookeeper-3.5.7/conf
export ZOOKEEPER_HOME=/usr/local/soft/zookeeper-3.5.7
# Switch to the bin directory
cd /usr/local/soft/sqoop-1.4.7/bin
# vim configure-sqoop -- comment out the parts you do not use (this only suppresses the warning messages)
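The warnings usually come from the HCatalog and Accumulo checks near the end of configure-sqoop; a rough sketch of the blocks to comment out (the exact wording may differ in your copy):
# Prefix every line of blocks like these with '#':
#   if [ ! -d "${HCAT_HOME}" ]; then
#     echo "Warning: $HCAT_HOME does not exist! HCatalog jobs will fail."
#     echo 'Please set $HCAT_HOME to the root of your HCatalog installation.'
#   fi
#   if [ ! -d "${ACCUMULO_HOME}" ]; then
#     echo "Warning: $ACCUMULO_HOME does not exist! Accumulo imports will fail."
#     echo 'Please set $ACCUMULO_HOME to the root of your Accumulo installation.'
#   fi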
3. Update environment variables
vim /etc/profile
# Add the sqoop installation directory to the environment variables
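A minimal sketch of the lines to add, assuming the install path used above (the exact layout of your /etc/profile may differ):
export SQOOP_HOME=/usr/local/soft/sqoop-1.4.7
export PATH=$PATH:$SQOOP_HOME/bin
# Reload the profile so the change takes effect in the current shell
source /etc/profile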
4. Add the MySQL JDBC driver
# Copy the MySQL JDBC driver from Hive's lib directory into $SQOOP_HOME/lib
cp /usr/local/soft/hive-1.2.1/lib/mysql-connector-java-5.1.49.jar /usr/local/soft/sqoop-1.4.7/lib/
5. Test
# Print the sqoop version
sqoop version
# Test MySQL connectivity
sqoop list-databases -connect jdbc:mysql://master:3306?useSSL=false -username root -password 123456
Prepare MySQL data
Log in to MySQL
mysql -u root -p123456
Create the student database
create database student;
Switch to the database and import data
# Run inside the mysql shell
use student;
source /usr/local/soft/shell/student.sql;
Another way to import data
# Run from the Linux shell
mysql -u root -p123456 student < /usr/local/soft/shell/student.sql
mysql -u root -p123456 student < /root/score.sql
Run the SQL file with Navicat
The data can also be imported through Navicat.
Export a MySQL database
mysqldump -u root -p123456 <database_name> > <output_file>.sql
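For example, to back up the student database created above (the output path here is arbitrary):
# Dump the whole student database into a single SQL file
mysqldump -u root -p123456 student > /usr/local/soft/shell/student_backup.sql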
import
Import from a traditional relational database into HDFS, Hive, HBase, and so on.
MySQLToHDFS
Write the script and save it as MySQLToHDFS.conf
import
--connect
jdbc:mysql://master:3306/student?useSSL=false
--username
root
--password
123456
--table
student
--m
2
--split-by
age
--target-dir
/sqoop/data/student1
--fields-terminated-by
','
Run the script
sqoop --options-file MySQLToHDFS.conf
Notes:
1. --m specifies how many map tasks to generate. More is not always better, because the MySQL server can only handle so much load.
2. When the number of map tasks is greater than 1, you must also use the --split-by option to specify the split column, so that each map task knows which slice of the data to read. Prefer a numeric column, ideally the primary key (or another evenly distributed column, to avoid large differences in how much data each map task processes).
3. If the values in the chosen split column are unevenly distributed, the import may suffer from data skew.
4. The split column should be numeric, e.g. of type int or bigint.
5. When writing the options file, note that an option value must not be on the same line as the option name. For example, for --username:
--username root // wrong
// should be written as two lines
--username
root
6. The job may report an InterruptedException while running; this is an issue with the bundled hadoop 2.7.6 and can be ignored:
21/01/25 14:32:32 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
7. Sqoop reads the MySQL data over JDBC, so efficiency is not great when the data volume is large.
8. Under the hood, sqoop performs the import/export with MapReduce, using only map tasks and no reduce tasks.
9. Each map task produces one output file (see the sketch after this list).
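A quick way to confirm points 8 and 9 once the import finishes, assuming the target directory from MySQLToHDFS.conf above (map-only jobs normally name their outputs part-m-0000N):
# List the import output; with --m 2 there should be one part-m file per map task
hdfs dfs -ls /sqoop/data/student1
# Peek at a few imported rows
hdfs dfs -cat /sqoop/data/student1/part-m-00000 | head -n 5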
MySQLToHive
Sqoop first exports the MySQL data into a temporary staging directory on HDFS, by default /user/<username>/<table_name>, and then loads the data into Hive. Once the load completes, the staging directory is deleted.
Write the script and save it as MySQLToHIVE.conf
import
--connect
jdbc:mysql://master:3306/student?useSSL=false
--username
root
--password
123456
--table
score
--fields-terminated-by
"\t"
--lines-terminated-by
"\n"
--m
3
--split-by
student_id
--hive-import
--hive-overwrite
--create-hive-table
--hive-database
bigdata
--hive-table
score_sqoop
--delete-target-dir
Create the testsqoop database in Hive (note: the options file above uses --hive-database bigdata, so make sure the database named there actually exists)
hive> create database testsqoop;
If you re-import and the Hive table already exists, remove the --create-hive-table option.
If you encounter the following error:
22/04/18 09:08:42 ERROR tool.ImportTool: Import failed: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:50)
at org.apache.sqoop.hive.HiveImport.getHiveArgs(HiveImport.java:392)
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:379)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:337)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:44)
... 12 more
Solution:
cp /usr/local/soft/hive-1.2.1/lib/hive-common-1.2.1.jar /usr/local/soft/sqoop-1.4.7/lib/
If the following error appears next:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/shims/ShimLoader
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:368)
at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:44)
at org.apache.sqoop.hive.HiveImport.getHiveArgs(HiveImport.java:392)
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:379)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:337)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.shims.ShimLoader
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 17 more
Solution:
cp /usr/local/soft/hive-1.2.1/lib/hive-shims* /usr/local/soft/sqoop-1.4.7/lib/
Run the script
sqoop --options-file MySQLToHIVE.conf
--direct
Adding this option makes sqoop use MySQL's own dump tool, mysqldump, to export the MySQL data, which speeds up the transfer and improves efficiency.
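A minimal way to try it, assuming the MySQLToHIVE.conf above (sqoop lets you combine an options file with additional command-line options):
sqoop --options-file MySQLToHIVE.conf --direct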
Error message:
22/04/18 09:42:33 INFO mapreduce.Job: Task Id : attempt_1650084984186_0048_m_000002_2, Status : FAILED
Error: java.io.IOException: Cannot run program "mysqldump": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
at java.lang.Runtime.exec(Runtime.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:405)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 12 more
Solution:
Distribute /usr/bin/mysqldump from master to the /usr/bin directory on node1 and node2:
scp /usr/bin/mysqldump node1:/usr/bin/
scp /usr/bin/mysqldump node2:/usr/bin/
Using the -e option
import
--connect
jdbc:mysql://master:3306/student?useSSL=false
--username
root
--password
123456
--m
2
--split-by
student_id
--e
"select * from score where student_id=1500100011 and $CONDITIONS"
--fields-terminated-by
"\t"
--lines-terminated-by
"\n"
--target-dir
/testQ
--hive-import
--hive-overwrite
--create-hive-table
--hive-database
testsqoop
--hive-table
score2
Analysis of the import log:
22/04/18 10:24:34 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
22/04/18 10:24:34 INFO tool.CodeGenTool: Beginning code generation
Mon Apr 18 10:24:34 CST 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
22/04/18 10:24:34 INFO manager.SqlManager: Executing SQL statement: select * from score where student_id=1500100011 and (1 = 0)
22/04/18 10:24:34 INFO manager.SqlManager: Executing SQL statement: select * from score where student_id=1500100011 and (1 = 0)
22/04/18 10:24:34 INFO manager.SqlManager: Executing SQL statement: select * from score where student_id=1500100011 and (1 = 0)
22/04/18 10:24:34 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/soft/hadoop-2.7.6
Explanation:
The three identical SQL statements are executed only to validate the query; the appended (1 = 0) condition guarantees no rows are returned, so sqoop just inspects the result metadata.
BoundingValsQuery: SELECT MIN(student_id), MAX(student_id) FROM (select * from score where student_id=1500100011 and (1 = 1) ) AS t1
The bounding query fetches the MIN and MAX of the --split-by column so that sqoop can divide this range among the map tasks.
MySQLToHBase
Write the script and save it as MySQLToHBase.conf
import
--connect
jdbc:mysql://master:3306/student?useSSL=false
--username
root
--password
123456
--table
student
--hbase-table
stu
--hbase-create-table
--hbase-row-key
id
--m
1
--column-family
cf1
Create the target table stu in HBase
create 'stu','cf1'
Run the script
sqoop --options-file MySQLToHBase.conf
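To spot-check the result (a quick sketch; assumes the hbase shell is available on this node):
# Scan a few rows of the stu table to confirm the import
echo "scan 'stu', {LIMIT => 5}" | hbase shell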
export
HDFSToMySQL
Write the script and save it as HDFSToMySQL.conf
export
--connect
jdbc:mysql://master:3306/student?useSSL=false&useUnicode=true&characterEncoding=utf-8
--username
root
--password
123456
--table
student2
-m
1
--columns
id,name,age,gender,clazz
--export-dir
/sqoop/data/student1
--fields-terminated-by
','
First clear the data in the target MySQL table (student2 in the options file above), otherwise the export will fail with primary key conflicts.
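A quick one-liner for that, assuming the database and table used above:
# Empty the target table before exporting to avoid duplicate-key errors
mysql -u root -p123456 -e "TRUNCATE TABLE student.student2;"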
Run the script
sqoop --options-file HDFSToMySQL.conf
View sqoop help
sqoop help
21/04/26 15:50:36 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop COMMAND [ARGS]
Available commands:
codegen Generate code to interact with database records
create-hive-table Import a table definition into Hive
eval Evaluate a SQL statement and display the results
export Export an HDFS directory to a database table
help List available commands
import Import a table from a database to HDFS
import-all-tables Import tables from a database to HDFS
import-mainframe Import datasets from a mainframe server to HDFS
job Work with saved jobs
list-databases List available databases on a server
list-tables List available tables in a database
merge Merge results of incremental imports
metastore Run a standalone Sqoop metastore
version Display version information
See 'sqoop help COMMAND' for information on a specific command.
# View detailed help for the import command
sqoop import --help
Partitioned import
-- Create the partitioned table
CREATE TABLE `partition_student`(
`id` int,
`name` string,
`age` int,
`gender` string,
`clazz` string,
`last_mod` string)
COMMENT 'Imported by sqoop on 2023/10/20 14:52:43'
partitioned by(dt string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'='\t',
'line.delim'='\n',
'serialization.format'='\t');
import
--connect
jdbc:mysql://master:3306/student?useSSL=false
--username
root
--password
123456
--m
2
--split-by
id
--e
"SELECT * FROM student2 WHERE substr(last_mod,1,11) = '2023-10-21' and $CONDITIONS"
--fields-terminated-by
"\t"
--lines-terminated-by
"\n"
--target-dir
/testQ
--hive-import
--hive-database
testsqoop
--hive-table
partition_student
--hive-partition-key
dt
--hive-partition-value
2023-10-21
sqoop --options-file insert_10_21_partition_mysql2hive.conf
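To check that the new partition landed, a quick sketch using the database and table from the options file above:
# List partitions, then peek at a few rows of the dt=2023-10-21 partition
hive -e "show partitions testsqoop.partition_student;"
hive -e "select * from testsqoop.partition_student where dt='2023-10-21' limit 5;"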
Incremental import
# Simulate an initial full import:
sqoop import \
--connect 'jdbc:mysql://master:3306/student?useSSL=false' \
--username root \
--password 123456 \
--e "select * from student2 where id <= 1500100990 and \$CONDITIONS" \
--target-dir /sqoop/incremental/student \
--m 2 \
--split-by id \
--fields-terminated-by ','
# Incremental import (append mode)
sqoop import \
--connect 'jdbc:mysql://master:3306/student?useSSL=false' \
--username root \
--password 123456 \
--table student2 \
--target-dir /sqoop/incremental/student \
--m 2 \
--split-by id \
--fields-terminated-by ',' \
--incremental append \
--check-column id \
--last-value 1500100990
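At the end of an incremental run, sqoop normally prints the --last-value to use next time; you can also confirm the appended files on HDFS (a quick check, assuming the target directory above):
# Part files from the append run show up alongside the ones from the full import
hdfs dfs -ls /sqoop/incremental/student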
**Note: sqoop can also be invoked directly in command mode with options on the command line, where an option and its value may appear on the same line. A trailing \ means the command continues on the next line. Also, the $ in the --e query must be escaped with a backslash so that the shell does not expand $CONDITIONS itself.**
# Simulate an initial full import (by last_mod timestamp):
sqoop import \
--connect 'jdbc:mysql://master:3306/student?useSSL=false' \
--username root \
--password 123456 \
--e "select * from student2 where last_mod <= '2023-10-20 14:34:12' and \$CONDITIONS" \
--target-dir /sqoop/incremental/student2 \
--m 2 \
--split-by id \
--fields-terminated-by ','
# Incremental import on the timestamp column, merging updated rows
sqoop import \
--connect 'jdbc:mysql://master:3306/student?useSSL=false' \
--username root \
--driver com.mysql.jdbc.Driver \
--password 123456 \
--table student2 \
--target-dir /sqoop/incremental/student2 \
--split-by id \
--m 2 \
--fields-terminated-by ',' \
--incremental lastmodified \
--check-column last_mod \
--last-value "2023-10-20 14:34:20" \
--merge-key id
**--merge-key merges rows that share the given key, keeping the record with the latest timestamp. --last-value is the largest timestamp from the previous run; in the job log the generated query filters with where last_mod >= '2023-10-20 14:34:20'.**
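Rather than tracking --last-value by hand, a sqoop saved job can record it in the sqoop metastore after every run; below is a rough sketch under the same connection settings (the job name student2_lastmod is arbitrary, and sqoop may prompt for the database password when the job executes):
# Create the saved job; note the space between -- and import
sqoop job --create student2_lastmod -- import \
--connect 'jdbc:mysql://master:3306/student?useSSL=false' \
--username root \
--password 123456 \
--table student2 \
--target-dir /sqoop/incremental/student2 \
--split-by id \
--m 2 \
--fields-terminated-by ',' \
--incremental lastmodified \
--check-column last_mod \
--merge-key id

# Each execution updates the stored last-value automatically
sqoop job --exec student2_lastmod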