Using Sqoop

Published by 風的王子 on 2013-07-05

(A note for readers, thanks to a reminder from bingdongguke: some of the download links below may no longer work; you can look up the appropriate link and version yourself on the Apache site.)

Download Sqoop

[root@localhost download]# wget http://cloud.github.com/downloads/cloudera/sqoop/sqoop-1.3.0.tar.gz

 

Or the 1.4.1 incubating release (the one used in the rest of this post):

wget http://labs.mop.com/apache-mirror//incubator/sqoop/stable/sqoop-1.4.1-incubating__hadoop-0.20.tar.gz

 

 

Synchronize the clocks on all nodes

See http://zhengzhuangjie.iteye.com/blog/1409336 for details.
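The linked post covers NTP setup in detail; as a minimal sketch, assuming the ntpdate utility is installed and using a public NTP server (the server name here is only an example), each node can be synced with:

ntpdate pool.ntp.org    # sync the system clock from an NTP server (example server; substitute your own)
hwclock -w              # optionally write the synced time back to the hardware clock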

 

 

You may need to copy hadoop-core-0.20.203.0.jar and commons-collections-3.2.1.jar into sqoop-1.4.1-incubating__hadoop-0.20/lib.
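A sketch of that copy, assuming Hadoop is installed under /download/hadoop-0.20.203.0 as elsewhere in this post and that commons-collections-3.2.1.jar sits in its lib directory:

# copy the Hadoop core jar and commons-collections into Sqoop's lib directory
cp /download/hadoop-0.20.203.0/hadoop-core-0.20.203.0.jar /download/sqoop-1.4.1-incubating__hadoop-0.20/lib/
cp /download/hadoop-0.20.203.0/lib/commons-collections-3.2.1.jar /download/sqoop-1.4.1-incubating__hadoop-0.20/lib/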

 

Grant access privileges on the remote MySQL server that will be accessed (10.0.30.67):

[root@localhost ~]# mysql -uroot -p123456

 

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;

Query OK, 0 rows affected (0.01 sec)
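If the remote account should also carry a password (the rest of this post connects as root with password 123456), a hedged variant of the grant, followed by a privilege flush, would be:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;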

 

Configure sqoop-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

<!-- Put Sqoop-specific properties in this file. -->

<configuration>

  <!--
    Set the value of this property to explicitly enable third-party
    ManagerFactory plugins.

    If this is not used, you can alternately specify a set of ManagerFactories
    in the $SQOOP_CONF_DIR/managers.d/ subdirectory.  Each file should contain
    one or more lines like:
      manager.class.name[=/path/to/containing.jar]

    Files will be consulted in lexicographical order only if this property
    is unset.
  -->

  <property>
    <name>sqoop.connection.factories</name>
    <value>com.cloudera.sqoop.manager.DefaultManagerFactory</value>
    <description>A comma-delimited list of ManagerFactory implementations
      which are consulted, in order, to instantiate ConnManager instances
      used to drive connections to databases.
    </description>
  </property>


  <!--
    Set the value of this property to enable third-party tools.

    If this is not used, you can alternately specify a set of ToolPlugins
    in the $SQOOP_CONF_DIR/tools.d/ subdirectory.  Each file should contain
    one or more lines like:
      plugin.class.name[=/path/to/containing.jar]

    Files will be consulted in lexicographical order only if this property
    is unset.
  -->
  <!--
  <property>
    <name>sqoop.tool.plugins</name>
    <value></value>
    <description>A comma-delimited list of ToolPlugin implementations
      which are consulted, in order, to register SqoopTool instances which
      allow third-party tools to be used.
    </description>
  </property>
  -->

  <!--
    By default, the Sqoop metastore will auto-connect to a local embedded
    database stored in ~/.sqoop/. To disable metastore auto-connect, uncomment
    this next property.
  -->

  <property>
    <name>sqoop.metastore.client.enable.autoconnect</name>
    <value>false</value>
    <description>If true, Sqoop will connect to a local metastore
      for job management when no other metastore arguments are
      provided.
    </description>
  </property>


  <!--
    The auto-connect metastore is stored in ~/.sqoop/. Uncomment
    these next arguments to control the auto-connect process with
    greater precision.
  -->

  <property>
    <name>sqoop.metastore.client.autoconnect.url</name>
    <value>jdbc:mysql://10.0.30.67/cncloud?useUnicode=true&amp;characterEncoding=utf-8</value>
    <description>The connect string to use when connecting to a
      job-management metastore. If unspecified, uses ~/.sqoop/.
      You can specify a different path here.
    </description>
  </property>
  <property>
    <name>sqoop.metastore.client.autoconnect.username</name>
    <value>root</value>
    <description>The username to bind to the metastore.
    </description>
  </property>
  <property>
    <name>sqoop.metastore.client.autoconnect.password</name>
    <value>123456</value>
    <description>The password to bind to the metastore.
    </description>
  </property>


  <!--
    For security reasons, by default your database password will not be stored in
    the Sqoop metastore. When executing a saved job, you will need to
    reenter the database password. Uncomment this setting to enable saved
    password storage. (INSECURE!)
  -->

  <property>
    <name>sqoop.metastore.client.record.password</name>
    <value>true</value>
    <description>If true, allow saved passwords in the metastore.
    </description>
  </property>

 

  <!--
    SERVER CONFIGURATION: If you plan to run a Sqoop metastore on this machine,
    you should uncomment and set these parameters appropriately.

    You should then configure clients with:
       sqoop.metastore.client.autoconnect.url =
       jdbc:hsqldb:hsql://&lt;server-name&gt;:&lt;port&gt;/sqoop
  -->

  <property>
    <name>sqoop.metastore.server.location</name>
    <value>/tmp/sqoop-metastore/shared.db</value>
    <description>Path to the shared metastore database files.
    If this is not set, it will be placed in ~/.sqoop/.
    </description>
  </property>

  <property>
    <name>sqoop.metastore.server.port</name>
    <value>16000</value>
    <description>Port that this metastore should listen on.
    </description>
  </property>


</configuration>

 

[root@nn0001 bin]# ./sqoop help
Error: /usr/lib/hadoop does not exist!
Please set $HADOOP_HOME to the root of your Hadoop installation.
[root@nn0001 bin]# vim /etc/profile

HADOOP_HOME=/download/hadoop-0.20.203.0
HBASE_HOME=/download/hbase-0.90.5

export HADOOP_HOME
export HBASE_HOME

 

[root@nn0001 bin]# source /etc/profile

 

[root@nn0001 bin]# ./sqoop import --help

 

 

Importing MySQL data with Sqoop depends on mysql-connector-java-*.jar at run time, so you need to download mysql-connector-java-*.jar and copy it into sqoop-1.4.1-incubating__hadoop-0.20/lib.
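For example, assuming the 5.1.18 build of the connector (the exact version is not important; any recent mysql-connector-java jar should do), the copy would look like:

# hypothetical connector version; use whichever mysql-connector-java jar you downloaded
cp mysql-connector-java-5.1.18-bin.jar /download/sqoop-1.4.1-incubating__hadoop-0.20/lib/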

 

[root@nn0001 bin]# ./sqoop list-tables --connect jdbc:mysql://10.0.30.67/cncloud --username root
12/02/29 11:32:49 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
T_AUTHORITY
T_GROUP
T_GROUP_SHARE
T_GROUP_USER
T_HFILE
T_OLINK
T_OLINK_HFILE
T_RESOURCE
T_RESOURCE_AUTHORITY
T_ROLE
T_ROLE_AUTHORITY
T_USER
T_USER_ROLE
T_USER_SHARE

 

 

[root@nn0001 bin]# ./sqoop import --connect jdbc:mysql://10.0.30.67/cncloud --username root --table cncloud 
12/02/29 11:19:29 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/02/29 11:19:29 INFO tool.CodeGenTool: Beginning code generation
12/02/29 11:19:29 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `cncloud` AS t LIMIT 1
12/02/29 11:19:29 ERROR manager.SqlManager: Error executing statement: com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: Table 'cncloud.cncloud' doesn't exist
com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: Table 'cncloud.cncloud' doesn't exist
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)
        at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)
        at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1467)
        at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:487)
        at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:496)
        at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:194)
        at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:178)
        at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:114)
        at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1235)
        at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1060)
        at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:82)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:367)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
12/02/29 11:19:29 ERROR tool.ImportTool: Imported Failed: Attempted to generate class with no columns!

 

[root@nn0001 bin]# ./sqoop import --connect jdbc:mysql://10.0.30.67/cncloud --username root --table T_USER

12/02/29 11:23:44 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/02/29 11:23:44 INFO tool.CodeGenTool: Beginning code generation
12/02/29 11:23:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `T_USER` AS t LIMIT 1
12/02/29 11:23:46 INFO orm.CompilationManager: HADOOP_HOME is /download/hadoop-0.20.203.0/bin/..
Note: /tmp/sqoop-root/compile/3e41a89256b74520169e9f08b714538c/T_USER.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/02/29 11:23:59 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/3e41a89256b74520169e9f08b714538c/T_USER.jar
12/02/29 11:23:59 WARN manager.MySQLManager: It looks like you are importing from mysql.
12/02/29 11:23:59 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
12/02/29 11:23:59 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
12/02/29 11:23:59 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
12/02/29 11:23:59 INFO mapreduce.ImportJobBase: Beginning import of T_USER
12/02/29 11:24:02 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 0 time(s).
12/02/29 11:24:03 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 1 time(s).
12/02/29 11:24:04 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 2 time(s).
12/02/29 11:24:05 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 3 time(s).
12/02/29 11:24:06 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 4 time(s).
12/02/29 11:24:07 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 5 time(s).
12/02/29 11:24:08 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 6 time(s).
12/02/29 11:24:09 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 7 time(s).
12/02/29 11:24:10 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 8 time(s).
12/02/29 11:24:11 INFO ipc.Client: Retrying connect to server: nn0001/10.0.30.235:9001. Already tried 9 time(s).
12/02/29 11:24:11 ERROR tool.ImportTool: Encountered IOException running import job: java.net.ConnectException: Call to nn0001/10.0.30.235:9001 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1057)
        at org.apache.hadoop.ipc.Client.call(Client.java:1033)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
        at org.apache.hadoop.mapred.$Proxy1.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)
        at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:460)
        at org.apache.hadoop.mapred.JobClient.init(JobClient.java:454)
        at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:437)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:477)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.mapreduce.Job.connect(Job.java:475)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:464)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:494)
        at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
        at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:201)
        at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:413)
        at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:97)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:380)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:406)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:414)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:527)
        at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:187)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1164)
        at org.apache.hadoop.ipc.Client.call(Client.java:1010)
        ... 26 more

 

If the run fails and you run it again, the following error appears:

12/02/29 11:27:04 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-root/compile/35b5389c7196008f8e1a66b0061f0cd5/T_USER.java to /download/sqoop-1.4.1-incubating__hadoop-0.20/bin/./T_USER.java
java.io.IOException: Destination '/download/sqoop-1.4.1-incubating__hadoop-0.20/bin/./T_USER.java' already exists
        at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
        at org.apache.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
        at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:367)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)

 

Delete the leftovers and retry: remove any partially created output directory on HDFS with ./hadoop dfs -rmr xxx (see the sketch after the rm commands below), and remove the locally generated code files:

[root@nn0001 bin]# rm /tmp/sqoop-root/compile/35b5389c7196008f8e1a66b0061f0cd5/T_USER.java
[root@nn0001 bin]# rm /tmp/sqoop-root/compile/35b5389c7196008f8e1a66b0061f0cd5/T_USER.class
[root@nn0001 bin]# rm /tmp/sqoop-root/compile/35b5389c7196008f8e1a66b0061f0cd5/T_USER.jar
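If the failed attempt also created the output directory on HDFS, remove it as well before retrying; a sketch, assuming the default target path /user/root/T_USER shown later in this post:

# only needed if a partial import left the directory behind on HDFS
/download/hadoop-0.20.203.0/bin/hadoop dfs -rmr /user/root/T_USER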

The cause of the connection-refused error above is that MapReduce (the JobTracker and TaskTrackers) had not been started.

[root@nn0001 bin]# ./start-mapred.sh 
starting jobtracker, logging to /download/hadoop-0.20.203.0/bin/../logs/hadoop-root-jobtracker-nn0001.out
dn0001: starting tasktracker, logging to /download/hadoop-0.20.203.0/bin/../logs/hadoop-root-tasktracker-dn0001.out
dn0003: starting tasktracker, logging to /download/hadoop-0.20.203.0/bin/../logs/hadoop-root-tasktracker-dn0003.out
dn0002: starting tasktracker, logging to /download/hadoop-0.20.203.0/bin/../logs/hadoop-root-tasktracker-dn0002.out

 

[root@nn0001 bin]# ./sqoop import --connect jdbc:mysql://10.0.30.67/cncloud --username root --table T_USER
12/02/29 12:37:09 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/02/29 12:37:09 INFO tool.CodeGenTool: Beginning code generation
12/02/29 12:37:10 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `T_USER` AS t LIMIT 1
12/02/29 12:37:10 INFO orm.CompilationManager: HADOOP_HOME is /download/hadoop-0.20.203.0/bin/..
Note: /tmp/sqoop-root/compile/7354a8be9b8fae6811150fd2fffac7a0/T_USER.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/02/29 12:37:21 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/7354a8be9b8fae6811150fd2fffac7a0/T_USER.jar
12/02/29 12:37:21 WARN manager.MySQLManager: It looks like you are importing from mysql.
12/02/29 12:37:21 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
12/02/29 12:37:21 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
12/02/29 12:37:21 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
12/02/29 12:37:21 INFO mapreduce.ImportJobBase: Beginning import of T_USER
12/02/29 12:37:30 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`ID`), MAX(`ID`) FROM `T_USER`
12/02/29 12:37:30 WARN db.TextSplitter: Generating splits for a textual index column.
12/02/29 12:37:30 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
12/02/29 12:37:30 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
12/02/29 12:37:32 INFO mapred.JobClient: Running job: job_201202291233_0001
12/02/29 12:37:33 INFO mapred.JobClient:  map 0% reduce 0%
12/02/29 12:38:22 INFO mapred.JobClient:  map 25% reduce 0%
12/02/29 12:38:25 INFO mapred.JobClient:  map 50% reduce 0%
12/02/29 12:38:35 INFO mapred.JobClient:  map 75% reduce 0%
12/02/29 12:38:37 INFO mapred.JobClient:  map 100% reduce 0%
12/02/29 12:38:48 INFO mapred.JobClient: Job complete: job_201202291233_0001
12/02/29 12:38:49 INFO mapred.JobClient: Counters: 14
12/02/29 12:38:49 INFO mapred.JobClient:   Job Counters 
12/02/29 12:38:49 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=104029
12/02/29 12:38:49 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/02/29 12:38:49 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/02/29 12:38:49 INFO mapred.JobClient:     Launched map tasks=4
12/02/29 12:38:49 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/02/29 12:38:49 INFO mapred.JobClient:   File Output Format Counters 
12/02/29 12:38:49 INFO mapred.JobClient:     Bytes Written=928
12/02/29 12:38:49 INFO mapred.JobClient:   FileSystemCounters
12/02/29 12:38:49 INFO mapred.JobClient:     HDFS_BYTES_READ=701
12/02/29 12:38:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=125482
12/02/29 12:38:49 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=928
12/02/29 12:38:49 INFO mapred.JobClient:   File Input Format Counters 
12/02/29 12:38:49 INFO mapred.JobClient:     Bytes Read=0
12/02/29 12:38:49 INFO mapred.JobClient:   Map-Reduce Framework
12/02/29 12:38:49 INFO mapred.JobClient:     Map input records=3
12/02/29 12:38:49 INFO mapred.JobClient:     Spilled Records=0
12/02/29 12:38:49 INFO mapred.JobClient:     Map output records=3
12/02/29 12:38:49 INFO mapred.JobClient:     SPLIT_RAW_BYTES=701
12/02/29 12:38:49 INFO mapreduce.ImportJobBase: Transferred 928 bytes in 86.7251 seconds (10.7005 bytes/sec)
12/02/29 12:38:49 INFO mapreduce.ImportJobBase: Retrieved 3 records.

 

 

[root@nn0001 bin]# /download/hadoop-0.20.203.0/bin/hadoop dfs -lsr /

drwxr-xr-x   - root supergroup          0 2012-02-29 12:37 /user
drwxr-xr-x   - root supergroup          0 2012-02-29 12:37 /user/root
drwxr-xr-x   - root supergroup          0 2012-02-29 12:38 /user/root/T_USER
-rw-r--r--   3 root supergroup          0 2012-02-29 12:38 /user/root/T_USER/_SUCCESS
drwxr-xr-x   - root supergroup          0 2012-02-29 12:37 /user/root/T_USER/_logs
drwxr-xr-x   - root supergroup          0 2012-02-29 12:37 /user/root/T_USER/_logs/history
-rw-r--r--   3 root supergroup      12907 2012-02-29 12:37 /user/root/T_USER/_logs/history/job_201202291233_0001_1330490252595_root_T%5FUSER.jar
-rw-r--r--   3 root supergroup      26958 2012-02-29 12:37 /user/root/T_USER/_logs/history/job_201202291233_0001_conf.xml
-rw-r--r--   3 root supergroup        311 2012-02-29 12:38 /user/root/T_USER/part-m-00000
-rw-r--r--   3 root supergroup          0 2012-02-29 12:38 /user/root/T_USER/part-m-00001
-rw-r--r--   3 root supergroup        305 2012-02-29 12:38 /user/root/T_USER/part-m-00002
-rw-r--r--   3 root supergroup        312 2012-02-29 12:38 /user/root/T_USER/part-m-00003

 

 

The above demonstrates how to import database data into HDFS; next we demonstrate how to export data from HDFS into MySQL.

 

If the target server already has the database and the corresponding tables, there is no need to create them; otherwise create the database and the tables first.
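As a hedged sketch, creating just the target database on the new MySQL server (the table itself must be created with columns that match the exported data, so it is not shown here):

mysql> CREATE DATABASE IF NOT EXISTS cncloud;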

 

Because the data is fetched from all the datanodes and written into MySQL, MySQL must be configured to allow client access from any IP, as set up above.

 

In addition, the table must not already contain records with the same IDs as the data being exported, or the job will fail.

 

[root@nn0001 bin]# ./sqoop export --connect jdbc:mysql://10.0.30.235/cncloud --export-dir /user/root/T_ROLE --username root --table T_ROLE

 

Note: the --export-dir value is a path on HDFS, not a local path.
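A quick way to confirm the directory really is on HDFS (and contains the part files to be exported) is to list it first; a sketch:

# list the HDFS directory that sqoop export will read from
/download/hadoop-0.20.203.0/bin/hadoop dfs -ls /user/root/T_ROLE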

 

 

 

The following is quoted from http://thomas0988.iteye.com/blog/1383463

 

Sqoop 1.3 commands

No.  Command             Class                  Description
1    import              ImportTool             Import data from a relational database (a table or a query statement) into HDFS
2    export              ExportTool             Export data from HDFS into a relational database
3    codegen             CodeGenTool            Generate Java code for a database table and package it into a jar
4    create-hive-table   CreateHiveTableTool    Create a Hive table
5    eval                EvalSqlTool            View the result of executing a SQL statement
6    import-all-tables   ImportAllTablesTool    Import all tables of a database into HDFS
7    job                 JobTool
8    list-databases      ListDatabasesTool      List all database names
9    list-tables         ListTablesTool         List all tables in a database
10   merge               MergeTool
11   metastore           MetastoreTool
12   help                HelpTool               View help
13   version             VersionTool            View the version
 
        Next come Sqoop's generic arguments, followed by the arguments specific to each of the 13 commands above. The generic arguments fall into several groups: Common arguments, Incremental import arguments, Output line formatting arguments, Input parsing arguments, Hive arguments, HBase arguments, and Generic Hadoop command-line arguments. They are covered one at a time below.
       1. Common arguments: general options, mainly for connecting to the relational database (an example combining several of them follows the table).
 
 
No.  Argument               Description                                                        Example
1    --connect              URL for connecting to the relational database                      jdbc:mysql://localhost/sqoop_datas
2    --connection-manager   Connection manager class; usually not needed
3    --driver               JDBC driver class for the connection
4    --hadoop-home          Hadoop directory                                                   /home/guoyun/hadoop
5    --help                 View help information
6    --password             Password for connecting to the relational database
7    --username             Username for connecting to the relational database
8    --verbose              Print more information (effectively lowers the log level); takes no value
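Putting a few of these common arguments together, a hedged example of a complete invocation (host, database, and credentials follow the ones used earlier in this post):

./sqoop list-tables \
  --connect jdbc:mysql://10.0.30.67/cncloud \
  --username root \
  --password 123456 \
  --verbose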
===========================================
Sqoop import fails with a java heap space error
When importing data with Sqoop, once the data volume grows, the map/reduce tasks may fail with a java heap space error. In summary, there are two remedies:
1. Increase the JVM heap size of each child task.
Add the following properties to mapred-site.xml:
<property>
 <name>mapred.child.java.opts</name>
 <value>-Xmx512M</value>
</property>
<property>
 <name>mapred.reduce.child.java.opts</name>
 <value>-Xmx512M</value>
</property>
<property>
 <name>mapred.map.child.java.opts</name>
 <value>-Xmx512M</value>
</property>

2. Increase the number of map tasks by passing a larger value to Sqoop's -m option. With more maps, each child task handles less data and needs less heap, keeping it within Hadoop's configured java heap space limit.
sqoop ... -m <number of maps>
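For instance, a sketch of an import that raises the map count and, optionally, bumps the child JVM heap for this job only via a generic -D option (which must appear before the tool-specific arguments):

./sqoop import -D mapred.child.java.opts=-Xmx512M \
  --connect jdbc:mysql://10.0.30.67/cncloud \
  --username root \
  --table T_USER \
  -m 8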

 

Importing into Hive

[root@nn0001 bin]# ./sqoop import --connect jdbc:mysql://10.0.30.67/cncloud --username root --table T_USER --hive-import

12/03/01 14:54:03 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Cannot run program "hive": java.io.IOException: error=2, No such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
        at java.lang.Runtime.exec(Runtime.java:593)
        at java.lang.Runtime.exec(Runtime.java:509)
        at org.apache.sqoop.util.Executor.exec(Executor.java:76)
        at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:344)
        at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:297)
        at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:239)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:392)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
        ... 15 more

 

This error occurs mainly because the Hive environment variable has not been configured, so the hive program cannot be located.

Configure it in /etc/profile:

HIVE_HOME=/download/hive-0.8.1

export HIVE_HOME
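Depending on how the Sqoop launcher resolves the hive executable, it may also help to put $HIVE_HOME/bin on the PATH; a sketch of the extra /etc/profile line and the reload step:

# optionally, also in /etc/profile, so the hive command itself is resolvable:
export PATH=$PATH:$HIVE_HOME/bin

# then reload the profile in the current shell:
source /etc/profile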

 

The command above seems to have imported the database data only into Hadoop's HDFS (after the import I could not find the corresponding table in Hive, yet the data was present in Hadoop; however, when I used the command below to import another table, I found that the earlier T_ROLE table was already in Hive, with data, while T_USER had only been created as a table with no data records. Any explanation is welcome.)

[root@nn0001 bin]# ./sqoop create-hive-table --connect jdbc:mysql://10.0.30.67/cncloud --username root --table T_USER --hive-table T_USER

12/03/01 16:04:43 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
12/03/01 16:04:43 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
12/03/01 16:04:43 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/03/01 16:04:44 INFO hive.HiveImport: Removing temporary files from import process: T_USER/_logs
12/03/01 16:04:44 INFO hive.HiveImport: Loading uploaded data into Hive
12/03/01 16:04:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `T_USER` AS t LIMIT 1
12/03/01 16:04:45 WARN hive.TableDefWriter: Column LOGIN_FAILED_DATE had to be cast to a less precise type in Hive
12/03/01 16:04:45 WARN hive.TableDefWriter: Column LAST_LOGIN_DATE had to be cast to a less precise type in Hive
12/03/01 16:04:45 WARN hive.TableDefWriter: Column UPDATED_DATE had to be cast to a less precise type in Hive
12/03/01 16:04:45 WARN hive.TableDefWriter: Column CREATED_DATE had to be cast to a less precise type in Hive
12/03/01 16:04:49 INFO hive.HiveImport: WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
12/03/01 16:04:51 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/download/hive-0.8.1/lib/hive-common-0.8.1.jar!/hive-log4j.properties
12/03/01 16:04:51 INFO hive.HiveImport: Hive history file=/tmp/root/hive_job_log_root_201203011604_1032827888.txt
12/03/01 16:05:17 INFO hive.HiveImport: OK
12/03/01 16:05:17 INFO hive.HiveImport: Time taken: 24.125 seconds
12/03/01 16:05:17 INFO hive.HiveImport: Hive import complete.

 

[root@nn0001 bin]# ./sqoop import --connect jdbc:mysql://10.0.30.67/cncloud --username root --table T_USER --hive-import --hive-overwrite

This command, oddly enough, does import the data into Hive; presumably something went wrong in the steps above.

 

 

Export data to MySQL; naturally, the database table must already exist, otherwise the job fails:

[root@nn0001 bin]# ./sqoop export --connect jdbc:mysql://10.0.30.235/cncloud --username root --table T_USER --export-dir /user/hive/warehouse/t_user

 

java.util.NoSuchElementException
        at java.util.AbstractList$Itr.next(AbstractList.java:350)
        at T_USER.__loadFromFields(T_USER.java:553)
        at T_USER.parse(T_USER.java:502)
        at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:77)
        at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:183)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.mapred.Child.main(Child.java:253)

 

The cause of this error is that the fields Sqoop parses from the files do not line up with the columns of the MySQL table. You therefore need to pass Sqoop an extra argument telling it the field delimiter of the files so that it can parse the fields correctly. Hive's default field delimiter is '\001'.

[root@nn0001 bin]# ./sqoop export --connect jdbc:mysql://10.0.30.235/cncloud --username root --table T_USER --export-dir /user/hive/warehouse/t_user --input-fields-terminated-by '\001'

 

Detailed official documentation (in English):

http://incubator.apache.org/sqoop/docs/1.4.1-incubating/SqoopUserGuide.html
