Setting up Sqoop 1.4.7 and importing/exporting MySQL data to and from Hive
Sqoop documentation:
Note: when creating a table in Hive and importing data, a field delimiter must be specified; otherwise exporting the data will fail.
1. Download and install
[root@node1 ~]# wget
[root@node1 ~]# tar xvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /opt/
[root@node1 ~]# cd /opt/
[root@node1 opt]# mv sqoop-1.4.7.bin__hadoop-2.6.0/ sqoop-1.4.7
[root@node1 opt]# vim /etc/profile
export SQOOP_HOME=/opt/sqoop-1.4.7
export HADOOP_HOME=/opt/hadoop-2.8.5
export HADOOP_CLASSPATH=/opt/hive-2.3.4/lib/*
export HCAT_HOME=/opt/sqoop-1.4.7/testdata/hcatalog
export ACCUMULO_HOME=/opt/sqoop-1.4.7/src/java/org/apache/sqoop/accumulo
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$SQOOP_HOME/bin
[root@node1 opt]# source /etc/profile
[root@node1 opt]# sqoop help --show general help
[root@node1 opt]# sqoop import --help --show help for the import parameters
2. Modify the YARN configuration file
[root@node1 ~]# vim /opt/hadoop-2.8.5/etc/hadoop/yarn-site.xml
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
</property>
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
</property>
[root@node1 ~]# scp /opt/hadoop-2.8.5/etc/hadoop/yarn-site.xml node2:/opt/hadoop-2.8.5/etc/hadoop/ --copy the config file to every node
yarn-site.xml 100% 1414 804.3KB/s 00:00
[root@node1 ~]# scp /opt/hive-2.3.4/conf/hive-site.xml /opt/sqoop-1.4.7/conf/ --Hive's config file must also be placed under sqoop, because sqoop invokes hive
[root@node1 ~]# stop-all.sh
[root@node1 ~]# start-all.sh
3. Import MySQL data into HDFS
Parameter reference:
--append --append data to an existing dataset in HDFS
--as-textfile --store the imported data as plain text files
--columns --import only the specified columns
--delete-target-dir --delete the target directory first if it already exists
--fetch-size <n> --number of rows to read from the database per fetch
-m --number of map tasks to run in parallel
-e --import the results of a query statement (select)
--table <table-name> --source table name
--target-dir dir --target HDFS directory
--warehouse-dir dir --HDFS parent directory under which the table is imported (directory name matches the table name)
--where "where clause" --WHERE clause to filter rows
-z --compress the data
--direct --bypass JDBC and use the database's native dump tool (mysqldump for MySQL) for faster transfer (performance optimization)
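Combining the parameters above, a generic import command takes the following shape. This is a sketch, not a command to run as-is: every <...> value is a placeholder to substitute for your own environment.

```shell
# Hypothetical template -- replace each <...> placeholder with real values.
sqoop import \
  --connect jdbc:mysql://<mysql-host>/<database> \
  --username <user> --password <password> \
  --table <TABLE_NAME> \
  --target-dir /user/<dir> \
  --delete-target-dir \
  --fields-terminated-by '\t' \
  --direct -m 1
```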
[root@node1 ~]# sqoop import --connect jdbc:mysql://172.16.9.100/hive --username hive --password system --table TBL_PRIVS --target-dir /user/sqoop --direct -m 1 --fields-terminated-by '\t'
[root@node1 ~]# hdfs dfs -ls /user/sqoop --list the import directory
Found 2 items
-rw-r--r-- 3 root supergroup 0 2019-03-19 12:43 /user/sqoop/_SUCCESS
-rw-r--r-- 3 root supergroup 176 2019-03-19 12:43 /user/sqoop/part-m-00000
[root@node1 ~]# hdfs dfs -cat /user/sqoop/part-m-00000 --view the imported data
6,1552878877,1,root,USER,root,USER,INSERT,6
7,1552878877,1,root,USER,root,USER,SELECT,6
8,1552878877,1,root,USER,root,USER,UPDATE,6
9,1552878877,1,root,USER,root,USER,DELETE,6
[root@node1 ~]#
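Because the part files are plain text, ordinary Unix tools work on them once copied out of HDFS. A small illustration, recreating two of the sample rows above in a local file (the /tmp path is arbitrary):

```shell
# Recreate two sample rows locally and extract the privilege column (field 8).
printf '6,1552878877,1,root,USER,root,USER,INSERT,6\n7,1552878877,1,root,USER,root,USER,SELECT,6\n' > /tmp/part-m-00000
awk -F',' '{print $8}' /tmp/part-m-00000
# prints INSERT and SELECT, one per line
```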
4. Import MySQL data into Hive
Parameter reference:
--hive-home dir --Hive installation directory
--hive-import --import the table into Hive
--hive-database --import into the specified Hive database
--hive-overwrite --overwrite existing data in the Hive table
--create-hive-table --create the table in Hive (the job fails if the table already exists)
--hive-table table-name --target Hive table name
--hive-partition-value v --Hive partition value
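The Hive-specific flags extend the same import command. A generic form, again with <...> placeholders standing in for real values, looks roughly like:

```shell
# Hypothetical template -- replace each <...> placeholder with real values.
sqoop import \
  --connect jdbc:mysql://<mysql-host>/<database> \
  --username <user> --password <password> \
  --table <TABLE_NAME> \
  --hive-import --create-hive-table \
  --hive-database <hive_db> --hive-table <hive_table> \
  --fields-terminated-by '\t' \
  --direct -m 1
```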
[root@node1 ~]# sqoop import --connect jdbc:mysql://172.16.9.100/hive --username hive --password system --table TBL_PRIVS --target-dir /user/tmp --hive-import --hive-table tt -m 1 --create-hive-table --delete-target-dir --direct --fields-terminated-by '\t'
[root@node1 conf]# hive
Logging initialized using configuration in jar:file:/opt/hive-2.3.4/lib/hive-common-2.3.4.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show tables;
OK
tt
Time taken: 11.464 seconds, Fetched: 1 row(s)
hive> select * from tt;
OK
6 1552878877 1 root USER root USER INSERT 6
7 1552878877 1 root USER root USER SELECT 6
8 1552878877 1 root USER root USER UPDATE 6
9 1552878877 1 root USER root USER DELETE 6
Time taken: 3.978 seconds, Fetched: 4 row(s)
hive>
5. Import MySQL data into a specified Hive database
[root@node1 ~]# sqoop import --connect jdbc:mysql://172.16.9.100/hive --username hive --password system --table TABLE_PARAMS --hive-import --hive-table tt1 -m 1 --create-hive-table --hive-database tong --direct --fields-terminated-by '\t'
[root@node1 conf]# hive
Logging initialized using configuration in jar:file:/opt/hive-2.3.4/lib/hive-common-2.3.4.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> use tong;
OK
Time taken: 14.34 seconds
hive> show tables;
OK
tt1
Time taken: 0.374 seconds, Fetched: 1 row(s)
hive> select * from tt1;
OK
6 numFiles 1
6 numRows 0
6 rawDataSize 0
6 totalSize 8
6 transient_lastDdlTime 1552878901
11 comment Imported by sqoop on 2019/03/19 15:36:21
11 numFiles 1
11 numRows 0
11 rawDataSize 0
11 totalSize 176
11 transient_lastDdlTime 1552981011
16 comment Imported by sqoop on 2019/03/19 16:04:22
16 numFiles 1
16 numRows 0
16 rawDataSize 0
16 totalSize 239
16 transient_lastDdlTime 1552982688
Time taken: 3.004 seconds, Fetched: 17 row(s)
hive>
6. Export HDFS data into MySQL
[root@node1 ~]# hdfs dfs -cat /user/tmp/part-m-00000
1 2
3 4
5 6
[root@node1 ~]# sqoop export --connect jdbc:mysql://172.16.9.100/tong --username tong --password system --export-dir /user/tmp/part-m-00000 --table t1 --direct --fields-terminated-by '\t'
[root@node1 ~]# mysql -u root -psystem
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 1006876
Server version: 5.6.35 MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> use tong
MySQL [tong]> select * from t1;
+------+------+
| a | b |
+------+------+
| 3 | 4 |
| 5 | 6 |
| 1 | 2 |
+------+------+
3 rows in set (0.00 sec)
MySQL [tong]>
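Before exporting, it is worth checking that the file under --export-dir really uses the delimiter passed via --fields-terminated-by and has one field per column of the target table. A local sanity check on a copy of the sample file above (the /tmp path is arbitrary):

```shell
# Recreate the sample export file and confirm every row has exactly two
# tab-separated fields, matching the two-column table t1(a, b).
printf '1\t2\n3\t4\n5\t6\n' > /tmp/part-m-00000
awk -F'\t' 'NF != 2 { bad = 1 } END { print NR " rows checked"; exit bad }' /tmp/part-m-00000
# prints "3 rows checked" and exits 0 when the file is well-formed
```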
Error: (the job hangs at "Running job" and never proceeds)
19/03/19 11:20:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552965562217_0001
19/03/19 11:20:10 INFO impl.YarnClientImpl: Submitted application application_1552965562217_0001
19/03/19 11:20:10 INFO mapreduce.Job: The url to track the job:
19/03/19 11:20:10 INFO mapreduce.Job: Running job: job_1552965562217_0001
Solution:
[root@node1 ~]# vim /opt/hadoop-2.8.5/etc/hadoop/yarn-site.xml --cap the memory and CPU resources, copy the config file to the other nodes, and restart the Hadoop services
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
</property>
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
</property>
[root@node1 ~]#
Error: (when importing MySQL data into Hive)
19/03/19 14:34:25 INFO hive.HiveImport: Loading uploaded data into Hive
19/03/19 14:34:25 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
19/03/19 14:34:25 ERROR tool.ImportTool: Import failed: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:50)
at org.apache.sqoop.hive.HiveImport.getHiveArgs(HiveImport.java:392)
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:379)
Solution:
[root@node1 ~]# vim /etc/profile --add the Hive lib directory to the classpath
export HADOOP_CLASSPATH=/opt/hive-2.3.4/lib/*
[root@node1 ~]# source /etc/profile
Error: (caused by conflicting jackson JARs between sqoop and hive)
19/03/19 15:32:11 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
19/03/19 15:32:11 INFO ql.Driver: Executing command(queryId=root_20190319153153_63feddd9-a2c8-4217-97d4-23dd9840a54b): CREATE TABLE `tt` ( `TBL_GRANT_ID` BIGINT, `CREATE_TIME` INT,
`GRANT_OPTION` INT, `GRANTOR` STRING, `GRANTOR_TYPE` STRING, `PRINCIPAL_NAME` STRING, `PRINCIPAL_TYPE` STRING, `TBL_PRIV` STRING, `TBL_ID` BIGINT) COMMENT 'Imported by sqoop on 2019/03/19
15:31:49' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
19/03/19 15:32:11 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
19/03/19 15:32:12 ERROR exec.DDLTask: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
at org.apache.hadoop.hive.common.StatsSetupConst$ColumnStatsAccurate.<clinit>(StatsSetupConst.java:165)
at org.apache.hadoop.hive.common.StatsSetupConst.parseStatsAcc(StatsSetupConst.java:297)
at org.apache.hadoop.hive.common.StatsSetupConst.setBasicStatsState(StatsSetupConst.java:230)
at org.apache.hadoop.hive.common.StatsSetupConst.setBasicStatsStateForCreateTable(StatsSetupConst.java:292)
Solution:
[root@node1 ~]# mv /opt/sqoop-1.4.7/lib/jackson-* /home/
[root@node1 ~]# cp -a /opt/hive-2.3.4/lib/jackson-* /opt/sqoop-1.4.7/lib/
Error:
19/03/19 18:38:40 INFO metastore.HiveMetaStore: 0: Done cleaning up thread local RawStore
19/03/19 18:38:40 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
19/03/19 18:38:40 ERROR tool.ImportTool: Import failed: java.io.IOException: Hive CliDriver exited with status=1
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:355)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
Solution:
create table t1(a int,b int) row format delimited fields terminated by '\t'; --a field delimiter must be specified when creating the table
sqoop import --connect jdbc:mysql://172.16.9.100/hive --username hive --password system --table TBL_PRIVS --target-dir /user/sqoop --direct -m 1 --fields-terminated-by '\t'
From "ITPUB Blog", link: http://blog.itpub.net/25854343/viewspace-2565219/. Please credit the source when reproducing; unauthorized reuse may be pursued legally.