Hive study notes, part 8: Sqoop

Published by 程式設計師欣宸 on 2021-07-07

Welcome to visit my GitHub

https://github.com/zq2599/blog_demos

Contents: an index of all my original articles with companion source code, covering Java, Docker, Kubernetes, DevOps, and more;

About Sqoop

Sqoop is an Apache open-source project for efficiently transferring bulk data between Hadoop and relational databases. This article walks through the following with you:

  1. Deploying Sqoop
  2. Exporting hive table data to MySQL with Sqoop
  3. Importing MySQL data into a hive table with Sqoop

Deployment

  1. In the hadoop account's home directory, download Sqoop version 1.4.7:
wget https://mirror.bit.edu.cn/apache/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
  2. Extract it:
tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
  3. Extraction produces the folder sqoop-1.4.7.bin__hadoop-2.6.0; copy mysql-connector-java-5.1.47.jar into the sqoop-1.4.7.bin__hadoop-2.6.0/lib directory
  4. Enter the sqoop-1.4.7.bin__hadoop-2.6.0/conf directory and rename sqoop-env-template.sh to sqoop-env.sh:
mv sqoop-env-template.sh sqoop-env.sh
  5. Open sqoop-env.sh in an editor and add the three settings below; HADOOP_COMMON_HOME and HADOOP_MAPRED_HOME are the full hadoop path, and HIVE_HOME is the full hive path:
export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.7.7
export HADOOP_MAPRED_HOME=/home/hadoop/hadoop-2.7.7
export HIVE_HOME=/home/hadoop/apache-hive-1.2.2-bin
  6. Installation and configuration are now complete. Enter sqoop-1.4.7.bin__hadoop-2.6.0/bin and run ./sqoop version to check the sqoop version; as shown below, it is 1.4.7 (some unset environment variables produce warnings, which can be ignored for now):
[hadoop@node0 bin]$ ./sqoop version
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
20/11/02 12:02:58 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017
  • With sqoop installed, let's try out its features next

MySQL preparation

For the hands-on steps that follow, MySQL needs to be ready; the MySQL configuration used here is given for your reference:

  1. MySQL version: 5.7.29
  2. MySQL server IP: 192.168.50.43
  3. MySQL port: 3306
  4. Account: root
  5. Password: 123456
  6. Database name: sqoop

To keep things simple, I deployed MySQL with docker; see 《群暉DS218+部署mysql》 for details
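The parameters listed above combine into the JDBC connection string that the sqoop commands in the following sections pass via --connect. As a minimal sketch (the helper function below is my own, not part of Sqoop):

```python
def mysql_jdbc_url(host: str, port: int, database: str) -> str:
    """Build the JDBC URL Sqoop expects for a MySQL source or target."""
    return f"jdbc:mysql://{host}:{port}/{database}"

# The values from the list above:
url = mysql_jdbc_url("192.168.50.43", 3306, "sqoop")
print(url)  # jdbc:mysql://192.168.50.43:3306/sqoop
```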

Exporting from hive to MySQL (export)

  • Run the following command to export the hive data into MySQL:
./sqoop export \
--connect jdbc:mysql://192.168.50.43:3306/sqoop \
--table address \
--username root \
--password 123456 \
--export-dir '/user/hive/warehouse/address' \
--fields-terminated-by ','
  • Check the address table: the data has been exported:

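Conceptually, sqoop export reads the files under --export-dir, splits each line on the --fields-terminated-by delimiter, and turns every record into an insert against the target table (Sqoop actually does this with generated Java code running as a MapReduce job). A rough, hypothetical Python simulation of that per-record step:

```python
def line_to_insert(line: str, table: str, columns: list[str], sep: str = ","):
    """Split one delimited HDFS line into values and build a
    parameterized INSERT, roughly what sqoop export does per record."""
    values = line.rstrip("\n").split(sep)
    if len(values) != len(columns):
        raise ValueError(f"expected {len(columns)} fields, got {len(values)}")
    placeholders = ", ".join(["?"] * len(columns))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    return sql, values

# One line of the hive address table as stored in its warehouse file:
sql, vals = line_to_insert("1,guangdong,guangzhou", "address",
                           ["addressid", "province", "city"])
print(sql)   # INSERT INTO address (addressid, province, city) VALUES (?, ?, ?)
print(vals)  # ['1', 'guangdong', 'guangzhou']
```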

Importing from MySQL to hive (import)

  1. In the hive command line, run the following statement to create a table named address2 with exactly the same structure as address:
create table address2 (addressid int, province string, city string) 
row format delimited 
fields terminated by ',';
  2. Run the following command to import the data of MySQL's address table into hive's address2 table; -m 2 means 2 map tasks are started:
./sqoop import \
--connect jdbc:mysql://192.168.50.43:3306/sqoop \
--table address \
--username root \
--password 123456 \
--target-dir '/user/hive/warehouse/address2' \
-m 2
  3. After it finishes, the console prints something like the following:
		Virtual memory (bytes) snapshot=4169867264
		Total committed heap usage (bytes)=121765888
	File Input Format Counters 
		Bytes Read=0
	File Output Format Counters 
		Bytes Written=94
20/11/02 16:09:22 INFO mapreduce.ImportJobBase: Transferred 94 bytes in 16.8683 seconds (5.5726 bytes/sec)
20/11/02 16:09:22 INFO mapreduce.ImportJobBase: Retrieved 5 records.
  4. Check hive's address2 table; the data has been imported successfully:
hive> select * from address2;
OK
1	guangdong	guangzhou
2	guangdong	shenzhen
3	shanxi	xian
4	shanxi	hanzhong
6	jiangshu	nanjing
Time taken: 0.049 seconds, Fetched: 5 row(s)
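The -m 2 option used above makes the import run as 2 parallel map tasks. Sqoop does this by querying the minimum and maximum of the split column (the primary key by default) and dividing that range evenly, so each mapper reads a disjoint slice of the table with its own WHERE clause. A simplified sketch of the range arithmetic for integer keys (Sqoop's real splitters also handle text, dates, and decimals):

```python
def integer_splits(lo: int, hi: int, num_mappers: int):
    """Divide [lo, hi] into num_mappers contiguous, non-overlapping
    ranges, mimicking how Sqoop assigns rows to map tasks."""
    size = (hi - lo + 1) / num_mappers
    splits = []
    for i in range(num_mappers):
        start = lo + round(i * size)
        end = lo + round((i + 1) * size) - 1
        splits.append((start, end))
    splits[-1] = (splits[-1][0], hi)  # ensure the last slice reaches hi
    return splits

# addressid in the sample data runs from 1 to 6, imported with -m 2:
print(integer_splits(1, 6, 2))  # [(1, 3), (4, 6)]
```

With these splits, one mapper would read rows where addressid is between 1 and 3, and the other between 4 and 6, which is why two output files appear under the target directory.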
  • This completes the deployment and basic operations of the Sqoop tool; I hope this article gives you a useful reference when you run your own data import and export jobs;

You are not alone; Xinchen's originals accompany you all the way

  1. Java series
  2. Spring series
  3. Docker series
  4. Kubernetes series
  5. Database + middleware series
  6. DevOps series

Follow my WeChat official account: 程式設計師欣宸

Search WeChat for 「程式設計師欣宸」; I'm Xinchen, and I look forward to exploring the Java world with you...
https://github.com/zq2599/blog_demos
