Hadoop in Action 2: starting Hadoop with a specified hostname, jps explained, deploying YARN, running a job on YARN

Posted by shaozi74108 on 2019-04-03

Hadoop exercise (starting Hadoop with a specified hostname, jps explained, deploying YARN, running a job on YARN)

Goal: configure the three HDFS daemons to start with the hostname hadoop002

In the earlier setup, the three HDFS daemons started on localhost and 0.0.0.0. In production, daemons are normally bound to a fixed hostname, so we now reconfigure them to start with the new hostname.


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/start-dfs.sh  


19/02/17 14:33:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop.out
19/02/17 14:34:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Check the listening ports of the three daemons

Check the IP each of the three daemons is bound to

Start configuring

1. Stop the daemons

[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/stop-dfs.sh  

2. Go to the configuration directory

[hadoop@hadoop hadoop]$   cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop  

[hadoop@hadoop hadoop]$ ll

total 152


-rw-r--r-- 1 hadoop hadoop  4436 Mar 24  2016 capacity-scheduler.xml


-rw-r--r-- 1 hadoop hadoop  1335 Mar 24  2016 configuration.xsl
-rw-r--r-- 1 hadoop hadoop   318 Mar 24  2016 container-executor.cfg
-rw-r--r-- 1 hadoop hadoop   884 Feb 14 22:18 core-site.xml      # core settings shared by HDFS, MapReduce and YARN; details go in the specific config files below
-rw-r--r-- 1 hadoop hadoop  3670 Mar 24  2016 hadoop-env.cmd     # files ending in .cmd are the Windows configs
-rw-r--r-- 1 hadoop hadoop  4335 Feb 14 23:36 hadoop-env.sh      # files ending in .sh are the Linux configs
-rw-r--r-- 1 hadoop hadoop  2598 Mar 24  2016 hadoop-metrics2.properties
-rw-r--r-- 1 hadoop hadoop  2490 Mar 24  2016 hadoop-metrics.properties
-rw-r--r-- 1 hadoop hadoop  9683 Mar 24  2016 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop   867 Feb 14 22:20 hdfs-site.xml
-rw-r--r-- 1 hadoop hadoop  1449 Mar 24  2016 httpfs-env.sh
-rw-r--r-- 1 hadoop hadoop  1657 Mar 24  2016 httpfs-log4j.properties
-rw-r--r-- 1 hadoop hadoop    21 Mar 24  2016 httpfs-signature.secret
-rw-r--r-- 1 hadoop hadoop   620 Mar 24  2016 httpfs-site.xml
-rw-r--r-- 1 hadoop hadoop  3523 Mar 24  2016 kms-acls.xml
-rw-r--r-- 1 hadoop hadoop  1611 Mar 24  2016 kms-env.sh
-rw-r--r-- 1 hadoop hadoop  1631 Mar 24  2016 kms-log4j.properties
-rw-r--r-- 1 hadoop hadoop  5511 Mar 24  2016 kms-site.xml
-rw-r--r-- 1 hadoop hadoop 11291 Mar 24  2016 log4j.properties
-rw-r--r-- 1 hadoop hadoop   938 Mar 24  2016 mapred-env.cmd
-rw-r--r-- 1 hadoop hadoop  1383 Mar 24  2016 mapred-env.sh
-rw-r--r-- 1 hadoop hadoop  4113 Mar 24  2016 mapred-queues.xml.template
-rw-r--r-- 1 hadoop hadoop   758 Mar 24  2016 mapred-site.xml.template
-rw-r--r-- 1 hadoop hadoop    10 Mar 24  2016 slaves
-rw-r--r-- 1 hadoop hadoop  2316 Mar 24  2016 ssl-client.xml.example
-rw-r--r-- 1 hadoop hadoop  2268 Mar 24  2016 ssl-server.xml.example
-rw-r--r-- 1 hadoop hadoop  2237 Mar 24  2016 yarn-env.cmd
-rw-r--r-- 1 hadoop hadoop  4567 Mar 24  2016 yarn-env.sh
-rw-r--r-- 1 hadoop hadoop   690 Mar 24  2016 yarn-site.xml

3. Map the IP to the hostname

vi /etc/hosts

192.168.1.100 hadoop      # do not delete the first two lines of /etc/hosts (the 127.0.0.1 and ::1 entries); just append this one

4. Configure core-site.xml    # this file controls the namenode process

cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop


vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>   <!-- change localhost to hadoop -->
    </property>
</configuration>

Tip: in both production and learning environments, do not deploy with raw IPs; deploy uniformly with hostnames.
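Before starting the daemons it is worth confirming that the hostname actually maps to the intended IP in the hosts file. A minimal sketch, assuming the `192.168.1.100 hadoop` entry from above; `check_mapping` is a hypothetical helper, not a Hadoop tool:

```shell
#!/bin/sh
# check_mapping FILE IP HOSTNAME
# Succeed only if FILE contains a line mapping IP to HOSTNAME.
check_mapping() {
    # lines starting with the IP, containing the hostname as a whole word
    grep "^$2[[:space:]]" "$1" | grep -qw "$3"
}

# Usage against the real hosts file:
#   check_mapping /etc/hosts 192.168.1.100 hadoop && echo "mapping ok"
```

If the check fails, `start-dfs.sh` will fall back to whatever the name resolves to (often 127.0.0.1), which is exactly the situation this section is fixing.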

5. Configure slaves    # this file controls the datanode processes

[hadoop@hadoop hadoop]$ vi slaves

hadoop                # for a multi-node cluster, list one hostname per line: hadoop1, hadoop2, hadoop3 each on its own line

6. Configure the secondarynamenode    # hdfs-site.xml controls the secondarynamenode process

Refer to the official default configuration documentation: press Ctrl+F, search for "secondary", and copy the relevant parameters and values into hdfs-site.xml.


vi hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>hadoop:50091</value>
    </property>
</configuration>

7. Restart Hadoop


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/start-dfs.sh  


19/02/17 15:37:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
The authenticity of host 'hadoop (127.0.0.1)' can't be established.
RSA key fingerprint is f5:3a:0b:3d:c6:ce:a2:e2:87:1c:e6:55:71:b1:aa:31.
Are you sure you want to continue connecting (yes/no)? ^Chadoop: Host key verification failed.

The cause: passwordless SSH trust has not been configured for the hadoop user.

8. Reconfigure SSH trust for the hadoop user


cd /home/hadoop/

rm -rf .ssh
ssh-keygen                      # press Enter at every prompt
cd .ssh
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys       # sshd refuses keys whose file permissions are too open
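The chmod 600 step matters because sshd silently ignores an authorized_keys file that is readable or writable by group or other. A small sanity check, assuming GNU stat; `key_perms_ok` is a hypothetical helper name:

```shell
#!/bin/sh
# key_perms_ok FILE
# Succeed only if FILE's mode is 600 or stricter, as sshd requires
# for authorized_keys.
key_perms_ok() {
    mode=$(stat -c '%a' "$1") || return 1
    case "$mode" in
        600|400) return 0 ;;
        *)       return 1 ;;
    esac
}

# Usage:
#   key_perms_ok ~/.ssh/authorized_keys || echo "run: chmod 600 ~/.ssh/authorized_keys"
```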

9. Start Hadoop again


  [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh  


19/02/17 15:53:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]       # the daemons now start with the hostname hadoop
hadoop: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop.out
hadoop: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [hadoop]
hadoop: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop.out
19/02/17 15:53:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

10. Use jps to verify that the daemons are up


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ jps 


7288 NameNode
7532 SecondaryNameNode
7389 DataNode
7742 Jps

Configuration complete.


The truth about the jps command

2.1 Where does jps live?


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   which jps  


 /usr/java/jdk1.8.0_45/bin/jps  
[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   jps  
7926 Jps
7288 NameNode
7532 SecondaryNameNode
7389 DataNode
[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   ps -ef | grep hadoop  
root      4889  4751  0 14:32 pts/0    00:00:00 su - hadoop
hadoop    4890  4889  0 14:32 pts/0    00:00:00 -bash
root      7121  6047  0 15:51 pts/1    00:00:00 su - hadoop
hadoop    7122  7121  0 15:51 pts/1    00:00:00 -bash
hadoop    7288     1  1 15:53 ?        00:00:09 /usr/java/jdk1

Conclusion: the processes jps reports are the same ones ps -ef | grep hadoop shows; ps -ef just gives more detail.

2.2 Where are the per-process marker files? /tmp/hsperfdata_<username of the process owner>


 [hadoop@hadoop hsperfdata_hadoop]$ pwd 


/tmp/hsperfdata_hadoop   # directory name format: hsperfdata_<username>
[hadoop@hadoop hsperfdata_hadoop]$ ll
total 96
-rw------- 1 hadoop hadoop 32768 Feb 17 16:11 7288    # the PIDs match the jps output below
-rw------- 1 hadoop hadoop 32768 Feb 17 16:11 7389
-rw------- 1 hadoop hadoop 32768 Feb 17 16:11 7532
[hadoop@hadoop hsperfdata_hadoop]$ jps
8072 Jps
7288 NameNode                 # all three files have content
7532 SecondaryNameNode
7389 DataNode
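The mapping above can be reproduced with plain shell: every running JVM owned by a user leaves one file, named after its PID, under /tmp/hsperfdata_<username>. A sketch that lists those PID files for a given user (`list_jvm_pids` is a hypothetical helper; the directory may simply be absent if that user has no JVMs running):

```shell
#!/bin/sh
# list_jvm_pids USERNAME
# Print the PID marker files that USERNAME's running JVMs left behind.
list_jvm_pids() {
    dir="/tmp/hsperfdata_$1"
    [ -d "$dir" ] || return 0    # no JVMs running for this user
    ls "$dir"
}

# Usage:
#   list_jvm_pids "$(whoami)"    # should match the PIDs printed by jps
```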

2.3 Viewing jps results as another user

The root user can see every user's jps results; an ordinary user can only see its own.

2.4 Does "process information unavailable" in jps output mean Hadoop is not running?

Simulate a kill, as root:

kill -9 1378 1210 1086      # kill takes space-separated PIDs

After the kill, jps still lists the PIDs as "process information unavailable" even though the processes are already gone; switching to the hadoop user in another session, jps shows nothing.

Tip: if stale entries remain after killing the processes, you can delete the hsperfdata directory directly; it is recreated on the next start.


 [root@hadoop002 tmp]# rm -rf hsperfdata_hadoop 


[root@hadoop002 tmp]#
[root@hadoop002 tmp]# jps
1906 Jps

Summary: there is a trap here. If the HDFS components were installed under the hdfs user and some script collects information as root, do not blindly treat "process information unavailable" as a broken process; verify with ps -ef instead.

Real check: ps -ef | grep namenode is how to truly judge whether the process is alive. Do not rely on the PID shown by jps, since it may only reflect the contents of a leftover marker file; grep for the process name (namenode) instead.



 [root@hadoop002 ~]# jps 


1520 Jps
1378 -- process information unavailable
1210 -- process information unavailable
1086 -- process information unavailable
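The "real check" above can be wrapped in a tiny helper that greps the process table for the daemon name instead of trusting jps; `proc_alive` is a hypothetical name for illustration:

```shell
#!/bin/sh
# proc_alive PATTERN
# Succeed if some process command line matches PATTERN. Grepping the
# name avoids trusting jps, whose output may only reflect a stale
# hsperfdata marker file.
proc_alive() {
    # grep -v grep drops the grep processes of this very pipeline
    ps -ef | grep -v grep | grep -iq "$1"
}

# Usage:
#   proc_alive namenode && echo "NameNode is really running"
```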

Why does a kill happen?

1. A human did it.

2. The OOM killer: Linux automatically kills the process it considers the biggest memory consumer.

3. PID files deleted by mistake, preventing Hadoop from stopping cleanly


 [hadoop@hadoop tmp]$ pwd 


/tmp
-rw-rw-r-- 1 hadoop hadoop    5 Feb 16 20:56 hadoop-hadoop-datanode.pid
-rw-rw-r-- 1 hadoop hadoop    5 Feb 16 20:56 hadoop-hadoop-namenode.pid
-rw-rw-r-- 1 hadoop hadoop    5 Feb 16 20:57 hadoop-hadoop-secondarynamenode.pid

There are three PID files under /tmp, but Linux periodically cleans the /tmp directory (typically on a 30-day cycle). The Hadoop processes keep running normally, yet when you try to stop Hadoop, the stop scripts cannot find the PID files and fail to shut the daemons down.


Starting, however, still works normally.


The problem: although start-dfs.sh can be re-run, the namenode is still the old process rather than a new one, which is wrong.

Solution: change the PID file directory in the configuration.


mkdir -p /data/tmp        # create a new directory

chmod -R 777 /data/tmp
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
vi hadoop-env.sh
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=/data/tmp

Another approach: change the /tmp cleanup rules so the files that must survive are excluded.
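On systemd-based distributions (e.g. CentOS 7 and later), the /tmp cleanup rules live in tmpfiles.d, and an `x` line excludes matching paths from aging. A sketch of such an exclusion file (the name hadoop.conf is an arbitrary choice):

```
# /etc/tmpfiles.d/hadoop.conf
# 'x' lines tell systemd-tmpfiles to ignore matching paths during cleanup
x /tmp/hsperfdata_*
x /tmp/hadoop-*.pid
```

Older systems that clean /tmp with a tmpwatch cron job have a similar exclude option for that tool.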

Why are PID files needed?

When starting and stopping, Hadoop uses hadoop-daemon.sh in the sbin directory.

Looking at cat /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin/stop-dfs.sh shows that stop-dfs.sh calls the hadoop-daemon.sh script, which reads the PID file to find the process to stop.


Restart with start-dfs.sh.

On start, daemons that are already running are not started again; only the ones that are down get started.


4. Deploy single-node YARN

MapReduce: runs on YARN and does the computation. Jobs are submitted to YARN as jar files, so MapReduce itself needs no deployment.

YARN: handles resource management and job scheduling; it does need to be deployed.


 [hadoop@hadoop hadoop]$  cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop  


[hadoop@hadoop hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop hadoop]$ vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
[hadoop@hadoop hadoop]$ vi yarn-site.xml
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Deployment complete.

YARN daemons:

ResourceManager daemon: the master, manages cluster resources

NodeManager daemon: the workers, each manages one node

Start YARN


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0  


[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/start-yarn.sh  
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop.out   # YARN log directory
hadoop: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop.out
[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ jps     # check that the YARN daemons started
 9520 ResourceManager  
 9617 NodeManager  
7288 NameNode
9996 Jps
7532 SecondaryNameNode
7389 DataNode

Open the YARN web UI in a browser:

http://192.168.1.100:8088/

Tip: tracking down errors in the logs

Run tail -200f hadoop-hadoop-datanode-hadoop002.log while restarting the process in another window to reproduce the error.

Ignore the .out file (hadoop-hadoop-datanode-hadoop002.out).

Alternatively, upload the log to Windows with rz and inspect or back it up in EditPlus.

Log file naming: hadoop-<user>-<process name>-<machine name>

5. Run a MapReduce job


cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0


[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ find ./ -name '*example*.jar'   # find the runnable example jars
./share/hadoop/mapreduce2/sources/hadoop-mapreduce-examples-2.6.0-cdh5.7.0-sources.jar
./share/hadoop/mapreduce2/sources/hadoop-mapreduce-examples-2.6.0-cdh5.7.0-test-sources.jar
 ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar  
./share/hadoop/mapreduce1/hadoop-examples-2.6.0-mr1-cdh5.7.0.jar
Run MR:
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0
hadoop   # running hadoop with no arguments prints the available subcommands, like a help message

List the classic example programs (run the jar without arguments):

hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 5 10     # runs the pi example, similar to calling a procedure in Oracle

From the outside it looks like map finishes before reduce starts, but in fact reduce work already begins while the maps are still running.

map: transform each input record

reduce: aggregate the mapped results
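The map, shuffle, reduce flow of the wordcount job below has a classic single-machine analogy in shell: tr plays the mapper (split lines into words), sort plays the shuffle (group identical keys together), and uniq -c plays the reducer (count each group). A sketch of the idea, not an actual MapReduce implementation:

```shell
#!/bin/sh
# wordcount FILE...
# map (tr splits words onto lines), shuffle (sort groups keys),
# reduce (uniq -c counts each group)
wordcount() {
    cat "$@" | tr -s ' ' '\n' | sort | uniq -c
}

# Usage:
#   wordcount a.log b.txt
```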

An error came up during this run; the fix was a configuration change (the error and the corrected setting were only captured in screenshots).

Word-count example


[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ vi a.log   # create text file a.log


ruoze
jepson
dashu
adai
fanren
1
a
b
c
a b c ruoze jepon
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ vi b.txt   # create text file b.txt
a b d e f ruoze
1 1 3 5
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -mkdir /wordcount           # create the top-level directory
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -mkdir /wordcount/input     # create the input directory
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -put a.log /wordcount/input       # upload a.log
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -put b.txt /wordcount/input       # upload b.txt
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -ls /wordcount/input/             # list the uploaded files
Found 2 items
-rw-r--r--   1 hadoop supergroup         76 2019-02-16 21:59 /wordcount/input/a.log
-rw-r--r--   1 hadoop supergroup         24 2019-02-16 21:59 /wordcount/input/b.txt

# Run the job; a trailing \ continues the command on the next line

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hadoop jar \
./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar \
wordcount /wordcount/input /wordcount/output1        # output1 is the chosen output directory; any unused name (e.g. output2) works

Tip: if you are unsure which arguments a program takes, run it without arguments, e.g.:

hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar wordcount

# Check the results


 [hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$   hdfs dfs -cat /wordcount/output1/part-r-00000  


19/02/16 22:05:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1       3
3       1
5       1
a       3
adai    1
b       3
c       2
d       1
dashu   1
e       1
f       1
fanren  1
jepon   1
jepson  1
ruoze   3
      1
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -get /wordcount/output1/part-r-00000 ./   # download the file for easier viewing
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ cat part-r-00000   # the original lives on HDFS, not the local filesystem; this is the downloaded copy
1       3
3       1
5       1
a       3
adai    1
b       3
c       2
d       1
dashu   1
e       1
f       1
fanren  1
jepon   1
jepson  1
ruoze   3
      1
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$



From the ITPUB blog. Link: http://blog.itpub.net/28339956/viewspace-2640192/. Please credit the source when republishing; unauthorized reproduction may carry legal liability.