Configuring Hadoop 2.7.2 and HBase 1.1.5 to Support the Snappy Compression Library

Posted by hackeruncle on 2016-06-26

I. Hadoop Snappy Support

1. Recompile the Hadoop 2.7.2 source so that it supports the Snappy compression library: http://blog.itpub.net/30089851/viewspace-2120631/
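The recompile itself is covered in the linked post. As a rough sketch (assuming a working native toolchain, protobuf 2.5.0, and the snappy development headers are already installed), the native build is driven by Maven from the source tree:

# -Drequire.snappy makes the build fail fast if the snappy headers/libraries are missing
cd hadoop-2.7.2-src
mvn clean package -Pdist,native -DskipTests -Dtar -Drequire.snappy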

2. Check for libsnappy.so.1.2.0

[root@sht-sgmhadoopnn-01 ~]# ll $HADOOP_HOME/lib/native/
total 4880
-rw-r--r-- 1 root root 1211196 Jun 21 19:11 libhadoop.a
-rw-r--r-- 1 root root 1485756 Jun 21 19:12 libhadooppipes.a
lrwxrwxrwx 1 root root      18 Jun 21 19:45 libhadoop.so -> libhadoop.so.1.0.0
-rwxr-xr-x 1 root root  717060 Jun 21 19:11 libhadoop.so.1.0.0
-rw-r--r-- 1 root root  582128 Jun 21 19:12 libhadooputils.a
-rw-r--r-- 1 root root  365052 Jun 21 19:11 libhdfs.a
lrwxrwxrwx 1 root root      16 Jun 21 19:45 libhdfs.so -> libhdfs.so.0.0.0
-rwxr-xr-x 1 root root  229289 Jun 21 19:11 libhdfs.so.0.0.0
-rw-r--r-- 1 root root  233538 Jun 21 19:11 libsnappy.a
-rwxr-xr-x 1 root root     953 Jun 21 19:11 libsnappy.la
lrwxrwxrwx 1 root root      18 Jun 21 19:45 libsnappy.so -> libsnappy.so.1.2.0
lrwxrwxrwx 1 root root      18 Jun 21 19:45 libsnappy.so.1 -> libsnappy.so.1.2.0
-rwxr-xr-x 1 root root  147726 Jun 21 19:11 libsnappy.so.1.2.0
[root@sht-sgmhadoopnn-01 ~]#
### The steps below assume the cluster is already installed
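Before touching any configuration, a quick sanity check (a sketch; a 32/64-bit mismatch here is a classic cause of the native-library warning shown under step 3):

# Both files should report the same architecture (e.g. 64-bit x86-64) as the JVM
file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0
file $HADOOP_HOME/lib/native/libsnappy.so.1.2.0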

3. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:

export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"

 

### The setting above fixes this warning ("Unable to load native-hadoop library"):

[root@sht-sgmhadoopnn-01 ~]# hadoop fs -ls /
16/06/21 15:08:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

4. Edit $HADOOP_HOME/etc/hadoop/core-site.xml and add:


<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec
  </value>
</property>
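After the cluster restart in step 8, the property can be read back from the client configuration; a minimal check (assuming the client uses the same core-site.xml), which should list org.apache.hadoop.io.compress.SnappyCodec among the codecs:

[root@sht-sgmhadoopnn-01 ~]# hdfs getconf -confKey io.compression.codecs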

5. Edit the compression properties in $HADOOP_HOME/etc/hadoop/mapred-site.xml to test snappy:


<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>

<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
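The same two properties can also be set per job on the command line instead of cluster-wide; a sketch using the examples jar from step 10 (/output2 is just a placeholder output directory):

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  /input /output2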

6. Add the following to $HADOOP_HOME/etc/hadoop/yarn-site.xml (enable log aggregation, set the YARN log server URL, and configure YARN memory and CPU):


<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log.server.url</name>
    <value>http://sht-sgmhadoopnn-01:19888/jobhistory/logs</value>
</property>
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1500</value>
    <description>Minimum memory a single task can request; default 1024 MB</description>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2500</value>
    <description>Maximum memory a single task can request; default 8192 MB</description>
</property>
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
</property>
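With yarn.log-aggregation-enable set to true, the logs of finished applications can be pulled back from the shell; a sketch, where the application id is a placeholder for one printed when you submit a job:

yarn logs -applicationId application_1466000000000_0001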

7. Sync hadoop-env.sh, core-site.xml, mapred-site.xml and yarn-site.xml to the other cluster nodes, e.g. with the loop sketched below.
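A minimal sync sketch (the hostnames below are placeholders; substitute your own node list):

for host in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02; do
  scp $HADOOP_HOME/etc/hadoop/{hadoop-env.sh,core-site.xml,mapred-site.xml,yarn-site.xml} \
      $host:$HADOOP_HOME/etc/hadoop/
done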

8. Restart the Hadoop cluster, for example:
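A minimal restart sketch, assuming the standard sbin scripts manage the whole cluster:

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh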

9. Verification 1: hadoop checknative

[root@sht-sgmhadoopnn-01 ~]# hadoop checknative
16/06/25 12:58:13 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
16/06/25 12:58:13 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /hadoop/hadoop-2.7.2/lib/native/libhadoop.so.1.0.0
zlib:    true /usr/local/lib/libz.so.1
snappy:  true /hadoop/hadoop-2.7.2/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   false
openssl: true /usr/lib64/libcrypto.so
[root@sht-sgmhadoopnn-01 ~]#

### Native libraries load and snappy is supported

10. Verification 2

[root@sht-sgmhadoopnn-01 ~]# vi test.log
a
c d
c d d d a
1 2
a

[root@sht-sgmhadoopnn-01 ~]# hadoop fs -mkdir /input
[root@sht-sgmhadoopnn-01 ~]# hadoop fs -put test.log /input/
[root@sht-sgmhadoopnn-01 ~]# hadoop jar /hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output1

To verify that it works, upload a text file with a few words to HDFS and run the wordcount program, as above. If the map phase reaches 100%, the Hadoop snappy installation succeeded.

Because Hadoop does not ship a util.CompressionTest the way HBase does (or at least I have not found one), this is the only way to test it here. Next, the HBase snappy configuration is laid out in detail.
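Before moving on, you can also eyeball the wordcount output directly (this is the same part file that the HBase CompressionTest in section II reads back):

[root@sht-sgmhadoopnn-01 ~]# hadoop fs -cat /output1/part-r-00000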


II. HBase Snappy Support

1. Copy Hadoop's native folder into HBase's lib directory

[root@sht-sgmhadoopnn-01 ~]# cp -r $HADOOP_HOME/lib/native $HBASE_HOME/lib/

2. Edit $HBASE_HOME/conf/hbase-env.sh and add:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/

export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/

3. Sync hbase-env.sh to the other cluster nodes

4. Restart the HBase cluster

5. Verification 1: hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://mycluster/output1/part-r-00000 snappy

[root@sht-sgmhadoopnn-01 ~]# hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://mycluster/output1/part-r-00000 snappy

# CompressionTest checks whether snappy is enabled and can be loaded successfully; a successful run should end with SUCCESS.
# /output1/part-r-00000 is the wordcount output produced while verifying Hadoop snappy above.

6. Verification 2: create a table and insert data with put


hbase(main):004:0> create 'test_snappy', { NAME => 'f', COMPRESSION => 'snappy'}
0 row(s) in 2.2570 seconds

=> Hbase::Table - test_snappy

hbase(main):005:0> put 'test_snappy', 'row1', 'f:col1', 'value'
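To confirm the row landed and the column family really carries snappy, two follow-up shell commands (a sketch; output elided). describe should show COMPRESSION => 'SNAPPY' on family f:

scan 'test_snappy'
describe 'test_snappy'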

 

From the ITPUB blog: http://blog.itpub.net/30089851/viewspace-2121010/. Please credit the source when reposting.
