[Hadoop] Sqoop 1.4.2 Documentation (Part 1): Data Import

Posted by 大搜車-自娛 on 2012-10-24
[size=large][b]1. Sqoop Help[/b][/size]
$ sqoop help
usage: sqoop COMMAND [ARGS]

Available commands:
codegen Generate code to interact with database records
create-hive-table Import a table definition into Hive
eval Evaluate a SQL statement and display the results
export Export an HDFS directory to a database table
help List available commands
import Import a table from a database to HDFS
import-all-tables Import tables from a database to HDFS
list-databases List available databases on a server
list-tables List available tables in a database
version Display version information

See 'sqoop help COMMAND' for information on a specific command.


You can get help either with sqoop help (tool-name) or with sqoop (tool-name) --help.
For example: sqoop help import, or sqoop import --help.

[size=large][b]2. Sqoop Aliases[/b][/size]
For example, sqoop import --help is equivalent to sqoop-import --help; that is, sqoop-import is an alias for sqoop import.

[size=large][b]3. sqoop-import[/b][/size]
$ sqoop help import
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
--connect <jdbc-uri> Specify JDBC connect string
--connect-manager <jdbc-uri> Specify connection manager class to use
--driver <class-name> Manually specify JDBC driver class to use
--hadoop-home <dir> Override $HADOOP_HOME
--help Print usage instructions
-P Read password from console
--password <password> Set authentication password
--username <username> Set authentication username
--verbose Print more information while working

[...]

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]


The generic options must be set before the common arguments. -conf, -fs, -jt and -D all configure the Hadoop side of the job; for example, -D mapred.job.name=<job_name> sets the job name. If it is not specified, the job is named after the jar file it uses.
For example:
User: hdfs
Job Name: cn_opda_a_phonoalbumshoushou_json_120901.jar
Job File: hdfs://vm-nba01.in.dx:9000/home/hdfs/tmp/mapred/staging/hdfs/.staging/job_201210171559_0391/job.xml
Submit Host: vm-nba01.in.dx
Submit Host Address: 10.18.102.101
Job-ACLs: All users are allowed
Job Setup: Successful
Status: Succeeded
Started at: Tue Oct 23 15:18:41 CST 2012
Finished at: Tue Oct 23 15:23:20 CST 2012
Finished in: 4mins, 39sec
Job Cleanup: Successful
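
As a minimal sketch (the job name, connection string and table name below are made up), this is what placing a generic option before the tool-specific arguments looks like:

$ sqoop import -D mapred.job.name=my_import_job \
    --connect jdbc:mysql://db.example.com/corp --username foo --table EMPLOYEES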


The -files, -libjars and -archives options are not Sqoop-specific: they are already supported by Hadoop's own commands, as you can see from the hadoop job help.
[work@vm-nba01 ~]$ hadoop job
Usage: JobClient <command> <args>
[-submit <job-file>]
[-status <job-id>]
[-counter <job-id> <group-name> <counter-name>]
[-kill <job-id>]
[-set-priority <job-id> <priority>]. Valid values for priorities are: VERY_HIGH HIGH NORMAL LOW VERY_LOW
[-events <job-id> <from-event-#> <#-of-events>]
[-history <jobOutputDir>]
[-list [all]]
[-list-active-trackers]
[-list-blacklisted-trackers]
[-list-attempt-ids <job-id> <task-type> <task-state>]

[-kill-task <task-id>]
[-fail-task <task-id>]

Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
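
The same generic options can be passed to Sqoop itself. For instance (the jar path here is hypothetical), -libjars can add an extra jar, such as a JDBC driver, to the job's classpath:

$ sqoop import -libjars /path/to/mysql-connector-java.jar \
    --connect jdbc:mysql://db.example.com/corp --username foo --table EMPLOYEES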


[size=large][b]4. Sqoop Options Files[/b][/size]
Example:
$ sqoop import --connect jdbc:mysql://localhost/db --username foo --table TEST

Next, these options are placed in an options file (import.txt) and passed to Sqoop:
$ sqoop --options-file /users/homer/work/import.txt --table TEST

The arguments in import.txt must be separated one per line, like this:
#
# Options file for Sqoop import
#

# Specifies the tool being invoked
import

# Connect parameter and value
--connect
jdbc:mysql://localhost/db

# Username parameter and value
--username
foo

#
# Remaining options should be specified in the command line.
#


Here is an example of Sqoop connecting to a database and importing its data into HDFS:
sqoop import --connect jdbc:mysql://database.example.com/employees \
--username aaron --password 12345

For this kind of connection to work, the MySQL driver jar must be on your classpath; otherwise, specify the driver class manually, like this:
$ sqoop import --driver com.microsoft.jdbc.sqlserver.SQLServerDriver \
--connect <connect-string> ...


sqoop-import control arguments:
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
--connect <jdbc-uri> Specify JDBC connect
string
--connection-manager <class-name> Specify connection manager
class name
--connection-param-file <properties-file> Specify connection
parameters file
--driver <class-name> Manually specify JDBC
driver class to use
--hadoop-home <dir> Override $HADOOP_HOME
--help Print usage instructions
-P Read password from console
--password <password> Set authentication
password
--username <username> Set authentication
username
--verbose Print more information
while working

Import control arguments:
--append Imports data in append mode
--as-avrodatafile Imports data to Avro data files
--as-sequencefile Imports data to SequenceFiles
--as-textfile Imports data as plain text (default)
--boundary-query <statement> Set boundary query for retrieving max
and min value of the primary key
--columns <col,col,col...> Columns to import from table
--compression-codec <codec> Compression codec to use for import
--direct Use direct import fast path
--direct-split-size <n> Split the input stream every 'n' bytes
when importing in direct mode
-e,--query <statement> Import results of SQL 'statement'
--fetch-size <n> Set number 'n' of rows to fetch from
the database when more rows are needed
--inline-lob-limit <n> Set the maximum size for an inline LOB
-m,--num-mappers <n> Use 'n' map tasks to import in parallel
--split-by <column-name> Column of the table used to split work
units
--table <table-name> Table to read
--target-dir <dir> HDFS plain table destination
--warehouse-dir <dir> HDFS parent for table destination
--where <where clause> WHERE clause to use during import
-z,--compress Enable compression

Incremental import arguments:
--check-column <column> Source column to check for incremental
change
--incremental <import-type> Define an incremental import of type
'append' or 'lastmodified'
--last-value <value> Last imported value in the incremental
check column

Output line formatting arguments:
--enclosed-by <char> Sets a required field enclosing
character
--escaped-by <char> Sets the escape character
--fields-terminated-by <char> Sets the field separator character
--lines-terminated-by <char> Sets the end-of-line character
--mysql-delimiters Uses MySQL's default delimiter set:
fields: , lines: \n escaped-by: \
optionally-enclosed-by: '
--optionally-enclosed-by <char> Sets a field enclosing character

Input parsing arguments:
--input-enclosed-by <char> Sets a required field encloser
--input-escaped-by <char> Sets the input escape
character
--input-fields-terminated-by <char> Sets the input field separator
--input-lines-terminated-by <char> Sets the input end-of-line
char
--input-optionally-enclosed-by <char> Sets a field enclosing
character

Hive arguments:
--create-hive-table Fail if the target hive
table exists
--hive-delims-replacement <arg> Replace Hive record \0x01
and row delimiters (\n\r)
from imported string fields
with user-defined string
--hive-drop-import-delims Drop Hive record \0x01 and
row delimiters (\n\r) from
imported string fields
--hive-home <dir> Override $HIVE_HOME
--hive-import Import tables into Hive
(Uses Hive's default
delimiters if none are
set.)
--hive-overwrite Overwrite existing data in
the Hive table
--hive-partition-key <partition-key> Sets the partition key to
use when importing to hive
--hive-partition-value <partition-value> Sets the partition value to
use when importing to hive
--hive-table <table-name> Sets the table name to use
when importing to hive
--map-column-hive <arg> Override mapping for
specific column to hive
types.

HBase arguments:
--column-family <family> Sets the target column family for the
import
--hbase-create-table If specified, create missing HBase tables
--hbase-row-key <col> Specifies which input column to use as the
row key
--hbase-table <table> Import to <table> in HBase

Code generation arguments:
--bindir <dir> Output directory for compiled
objects
--class-name <name> Sets the generated class name.
This overrides --package-name.
When combined with --jar-file,
sets the input class.
--input-null-non-string <null-str> Input null non-string
representation
--input-null-string <null-str> Input null string representation
--jar-file <file> Disable code generation; use
specified jar
--map-column-java <arg> Override mapping for specific
columns to java types
--null-non-string <null-str> Null non-string representation
--null-string <null-str> Null string representation
--outdir <dir> Output directory for generated
code
--package-name <name> Put auto-generated classes in
this package

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]


[size=large][b]5. Using Query Results as the Import Source[/b][/size]

Example: --split-by splits the import work by the column a.id, and --target-dir is the destination directory where the query results will be stored:
$ sqoop import \
--query 'SELECT a.*, b.* FROM a JOIN b on (a.id == b.id) WHERE $CONDITIONS' \
--split-by a.id --target-dir /user/foo/joinresults

Example: -m 1 runs the import with a single map task, so the query is executed only once and the data is imported as it is read:
$ sqoop import \
--query 'SELECT a.*, b.* FROM a JOIN b on (a.id == b.id) WHERE $CONDITIONS' \
-m 1 --target-dir /user/foo/joinresults


During an import you can also control things like the direct-mode split size, the warehouse directory and the character set, for example with --direct-split-size, --warehouse-dir and --default-character-set.

For example:
$ sqoop import --connect jdbc:mysql://server.foo.com/db --table bar \
--direct -- --default-character-set=latin1
$ sqoop import --connect <connect-str> --table foo --warehouse-dir /shared \
...


Sqoop provides type-mapping support for both Java and Hive, so you can override the default column type mapping, for example:
Argument                        Description
--map-column-java <mapping>     Override mapping from SQL to Java type for configured columns.
--map-column-hive <mapping>     Override mapping from SQL to Hive type for configured columns.
$ sqoop import ... --map-column-java id=String,value=Integer


[size=large][b]6. Incremental Imports[/b][/size]
Argument                Description
--check-column (col)    Specifies the column to be examined when determining which rows to import.
--incremental (mode)    Specifies how Sqoop determines which rows are new. Legal values for mode include append and lastmodified.
--last-value (value)    Specifies the maximum value of the check column from the previous import.


With incremental import you can import only the rows that have been added to an existing table since the previous import, based on a check column.
Incremental import requires the arguments above; see [url=http://blog.csdn.net/awayyao/article/details/7635094]an incremental import example[/url] for details.
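
As a minimal sketch (the connection string, table and column names are hypothetical), an append-mode incremental import that only picks up rows whose id is greater than the previous --last-value could look like this; at the end of the job Sqoop prints the value to use as --last-value for the next run:

$ sqoop import --connect jdbc:mysql://db.example.com/corp --username foo \
    --table ORDERS --check-column id --incremental append --last-value 1000 \
    --target-dir /user/foo/orders_incr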

You can also choose the file format of the data imported into Hive. There are two options:
1. --as-textfile
With this argument the data you see in Hive is stored as plain, uncompressed text files.
2. -z or --compress or --compression-codec
With any of these three forms the data imported into Hive is compressed.
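
For example (the connection string and table are hypothetical, and GzipCodec is used only for illustration), either of the following enables compression:

$ sqoop import --connect jdbc:mysql://db.example.com/corp --table bar --hive-import -z
$ sqoop import --connect jdbc:mysql://db.example.com/corp --table bar --hive-import \
    --compression-codec org.apache.hadoop.io.compress.GzipCodec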

[size=large][b]7. Importing Data into Hive and HBase[/b][/size]
Once you have worked through the previous six sections, importing into Hive and HBase is straightforward; the only extra step is specifying where the imported data should end up.

Hive example:

sqoop import --verbose --connect jdbc:mysql://10.18.102.133/tjss_opda --username tongji --password dx_tj --table opda_start_120604 --hive-import --hive-table opda_start_120604_incr --hive-overwrite --direct

sqoop job --create opda_start_120604 -- import --connect jdbc:mysql://10.18.102.133/tjss_opda --username tongji --password dx_tj --table opda_start_120604 --hive-import --hive-table opda_start_120604_incr --check-column id --incremental append --last-value 0 -m 8 --hive-overwrite --hive-delims-replacement="\t"
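
For HBase, a minimal sketch would look like the following (the HBase table name, column family and row-key column are assumptions, not taken from the job above); --hbase-create-table asks Sqoop to create the HBase table if it does not already exist:

sqoop import --connect jdbc:mysql://10.18.102.133/tjss_opda --username tongji --password dx_tj --table opda_start_120604 --hbase-table opda_start_120604 --column-family d --hbase-row-key id --hbase-create-table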


Notes:

1. Sqoop runs on the Hadoop cluster, so do not use localhost in the JDBC URL; the same constraint applies to --direct.
2. Sqoop also has a metastore. As far as I can tell, it is not thread-safe unless the metastore service is started. It also only supports HSQLDB as its backing store, with no support for other databases yet, and I do not have much confidence in HSQLDB's reliability.
3. Passwords are stored in the metastore in plain text, so saving passwords there is not recommended.
4. Sqoop has a bug: it currently runs into problems when the database contains special table names.
5. Tables that Sqoop imports into Hive can only be TEXTFILE, although a compression format can be chosen.
6. Sqoop can import into an existing empty Hive table, but it loads the data via LOAD DATA, so the target table's schema is effectively ignored.
7. When importing into Hive without --hive-overwrite, Sqoop leaves a Hadoop directory behind, which interferes with the next run of the same job.
8. Remember to add --hive-delims-replacement; otherwise delimiter characters in the data will easily produce malformed rows.
9. Hive has a bug with dynamic partitioning when too many partitions are created at once; you must add DISTRIBUTE BY.
10. Hive is case-insensitive; use all-lowercase names wherever possible, otherwise there are potential bugs.
11. Sqoop requires the current working directory to be writable at run time (for code generation).
12. Any JDBC-compatible database can be imported, as long as its JDBC driver is available.


Besides the Hive- and HBase-related Sqoop arguments, an import may also use the input parsing arguments:
Input parsing arguments:
--input-enclosed-by <char> Sets a required field encloser
--input-escaped-by <char> Sets the input escape
character
--input-fields-terminated-by <char> Sets the input field separator
--input-lines-terminated-by <char> Sets the input end-of-line
char
--input-optionally-enclosed-by <char> Sets a field enclosing
character


You may well need some of these arguments. Also note that if the import command does not specify a destination table or directory, the imported data is written under the current working directory in HDFS.


[size=large][b]8. Importing All Tables and Data[/b][/size]

For example (the other arguments are the same as those shown in sqoop help):
$ sqoop import-all-tables --connect jdbc:mysql://db.foo.com/corp


Verify the result:
$ hadoop fs -ls
Found 4 items
drwxr-xr-x - someuser somegrp 0 2010-04-27 17:15 /user/someuser/EMPLOYEES
drwxr-xr-x - someuser somegrp 0 2010-04-27 17:15 /user/someuser/PAYCHECKS
drwxr-xr-x - someuser somegrp 0 2010-04-27 17:15 /user/someuser/DEPARTMENTS
drwxr-xr-x - someuser somegrp 0 2010-04-27 17:15 /user/someuser/OFFICE_SUPPLIES
