Importing text data into HBase: class com/google/common/collect/Multimap not found
While trying to import a text file into HBase, running the import command failed because the class com/google/common/collect/Multimap could not be found:
[hadoop@hadoop1 lib]$ hadoop jar /home/hadoop/hbase-0.94.6/hbase-0.94.6.jar importtsv
Warning: $HADOOP_HOME is deprecated.
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/collect/Multimap
at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Multimap
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 6 more
Copying guava-11.0.2.jar from $HBASE_HOME/lib into $HADOOP_HOME/lib fixed the problem:
[hadoop@hadoop1 lib]$ pwd
/home/hadoop/hadoop-1.0.4/lib
[hadoop@hadoop1 lib]$ cp /home/hadoop/hbase-0.94.6/lib/guava-11.0.2.jar .
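Copying the jar works, but it modifies the Hadoop installation itself. An alternative sketch, assuming the same paths as above, is to put the Guava jar on the classpath via the HADOOP_CLASSPATH environment variable instead:

```shell
# Add Guava (and any other HBase dependencies) to Hadoop's classpath
# without copying files into $HADOOP_HOME/lib.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/home/hadoop/hbase-0.94.6/lib/guava-11.0.2.jar
hadoop jar /home/hadoop/hbase-0.94.6/hbase-0.94.6.jar importtsv
```

This keeps the fix local to the current shell session rather than changing the cluster-wide lib directory.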
[hadoop@hadoop1 lib]$ hadoop jar /home/hadoop/hbase-0.94.6/hbase-0.94.6.jar importtsv
Warning: $HADOOP_HOME is deprecated.
ERROR: Wrong number of arguments: 0
Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>
Imports the given input directory of TSV data into the specified table.
The column names of the TSV data must be specified using the -Dimporttsv.columns
option. This option takes the form of comma-separated column names, where each
column name is either a simple column family, or a columnfamily:qualifier. The special
column name HBASE_ROW_KEY is used to designate that this column should be used
as the row key for each imported record. You must specify exactly one column
to be the row key, and you must specify a column name for every column that exists in the
input data. Another special column HBASE_TS_KEY designates that this column should be
used as timestamp for each record. Unlike HBASE_ROW_KEY, HBASE_TS_KEY is optional.
You must specify atmost one column as timestamp key for each imported record.
Record with invalid timestamps (blank, non-numeric) will be treated as bad record.
Note: if you use this option, then 'importtsv.timestamp' option will be ignored.
By default importtsv will load data directly into HBase. To instead generate
HFiles of data to prepare for a bulk data load, pass the option:
-Dimporttsv.bulk.output=/path/for/output
Note: if you do not use this option, then the target table must already exist in HBase
Other options that may be specified with -D include:
-Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
'-Dimporttsv.separator=|' - eg separate on pipes instead of tabs
-Dimporttsv.timestamp=currentTimeAsLong - use the specified timestamp for the import
-Dimporttsv.mapper.class=my.Mapper - A user-defined Mapper to use instead of org.apache.hadoop.hbase.mapreduce.TsvImporterMapper
For performance consider the following options:
-Dmapred.map.tasks.speculative.execution=false
-Dmapred.reduce.tasks.speculative.execution=false
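Putting the usage text above into practice, a hypothetical end-to-end run might look like the following. The table name `mytable`, column family `cf`, qualifiers `name`/`age`, and HDFS input path are placeholders for illustration, not from the original post:

```shell
# The target table must exist first (the post's usage text notes this is
# required when -Dimporttsv.bulk.output is not used); 'cf' is a placeholder
# column family.
echo "create 'mytable', 'cf'" | hbase shell

# Upload the tab-separated file to HDFS.
hadoop fs -mkdir /user/hadoop/tsv_input
hadoop fs -put data.tsv /user/hadoop/tsv_input

# Map column 1 to the row key and columns 2-3 to cf:name and cf:age,
# per the -Dimporttsv.columns syntax shown in the usage output.
hadoop jar /home/hadoop/hbase-0.94.6/hbase-0.94.6.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:name,cf:age \
  mytable /user/hadoop/tsv_input
```

Exactly one HBASE_ROW_KEY column is required, and every column in the TSV file must be named in the `-Dimporttsv.columns` list.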