[Lab] Pseudo-Distributed Installation of Hadoop 2.6.0

Posted by hackeruncle on 2016-02-11

Packages used:
  hadoop-2.6.0.tar.gz
  jdk-7u79-linux-x64.gz
1 Set the IP address

[root@test1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:0c:29:51:cc:37
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=192.168.23.131
GATEWAY=192.168.23.1
USERCTL=no
IPV6INIT=no
PEERDNS=yes
Run: service network restart
Verify: ifconfig
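
A quick connectivity check after the restart can catch typos in the config; a minimal sketch, assuming the address and gateway from the file above:

[root@test1 ~]# ip addr show eth0        # confirm 192.168.23.131 is assigned
[root@test1 ~]# ping -c 3 192.168.23.1   # confirm the gateway is reachable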

2 Disable the firewall
Run: service iptables stop
Verify: service iptables status

3 Disable the firewall at boot
Run: chkconfig iptables off
Verify: chkconfig --list | grep iptables

4 Set the hostname
Run:
(1) hostname hadoop1
(2) vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=hadoop1
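
The hostname command takes effect for the current session, while /etc/sysconfig/network makes it permanent after a reboot; a minimal check:

[root@test1 ~]# hostname                              # should print hadoop1
[root@test1 ~]# grep HOSTNAME /etc/sysconfig/network  # should show HOSTNAME=hadoop1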

5 Bind the IP to the hostname
Run: (1) vi /etc/hosts
192.168.23.131    hadoop1.localdomain hadoop1

Verify: ping hadoop1

6 Set up passwordless SSH login
Run:
(1) ssh-keygen -t rsa
(2) cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Verify:
[root@test1 ~]# ssh hadoop1
The authenticity of host 'hadoop1 (192.168.23.131)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop1,192.168.23.131' (RSA) to the list of known hosts.
Last login: Thu Feb 11 20:54:11 2016 from 192.168.23.1
[root@hadoop1 ~]# ssh hadoop1
Last login: Thu Feb 11 20:57:56 2016 from hadoop1.localdomain
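
If SSH still prompts for a password after this, the usual cause is the permissions on ~/.ssh, since OpenSSH ignores keys in a group- or world-writable directory. A minimal fix sketch, assuming everything runs as root as in the transcript above:

[root@hadoop1 ~]# chmod 700 ~/.ssh
[root@hadoop1 ~]# chmod 600 ~/.ssh/authorized_keys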

7 Install the JDK (reference: http://my.oschina.net/gaowm/blog/275184)
(1) Run:

[root@hadoop1 ~]# cd /usr/share/java
[root@hadoop1 java]# cp /tmp/jdk-7u79-linux-x64.gz ./
[root@hadoop1 java]# tar -xzvf jdk-7u79-linux-x64.gz
(2) vi /etc/profile, append the following:
export JAVA_HOME=/usr/share/java/jdk1.7.0_79
export PATH=.:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
Verify: java -version
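
It is also worth confirming that the shell resolves java from the new JDK rather than a system-installed one; a minimal sketch:

[root@hadoop1 ~]# which java        # should resolve under /usr/share/java/jdk1.7.0_79/bin
[root@hadoop1 ~]# echo $JAVA_HOME   # should print /usr/share/java/jdk1.7.0_79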

8 Install Hadoop
(1) Run:

[root@hadoop1 ~]# cd /usr/local/
[root@hadoop1 local]# cp /tmp/hadoop-2.6.0.tar.gz ./
[root@hadoop1 local]# tar -zxvf hadoop-2.6.0.tar.gz
[root@hadoop1 local]# mv hadoop-2.6.0 hadoop
(2) vi /etc/profile, append the following:
export JAVA_HOME=/usr/share/java/jdk1.7.0_79
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
(4) Edit the configuration files hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml under /usr/local/hadoop/etc/hadoop

[root@hadoop1 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/share/java/jdk1.7.0_79

[root@hadoop1 hadoop]# vi core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>

[root@hadoop1 hadoop]# vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

[root@hadoop1 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop1 hadoop]# vi mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop1:9001</value>
    </property>
</configuration>
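
Note that fs.default.name and mapred.job.tracker are Hadoop 1.x property names. The setup still works (fs.default.name is accepted as a deprecated alias of fs.defaultFS, and HDFS/YARN daemons start fine), but with only the settings above MapReduce jobs run in local mode rather than on YARN. A hedged sketch of the extra settings usually added on 2.6.0 to route jobs through YARN (not part of the original walkthrough):

[root@hadoop1 hadoop]# vi mapred-site.xml     # add inside <configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

[root@hadoop1 hadoop]# vi yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>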

(5)hadoop namenode -format
(6)start-all.sh

[root@hadoop1 hadoop]# cd sbin
[root@hadoop1 sbin]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/02/11 21:40:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.out
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-hadoop1.out
16/02/11 21:41:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-hadoop1.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop1.out
[root@hadoop1 sbin]# jps
7192 SecondaryNameNode
7432 NodeManager
7468 Jps
6913 NameNode
7333 ResourceManager
7036 DataNode



Verify: (1) Run jps; you should see five new Java processes: NameNode, SecondaryNameNode, DataNode, ResourceManager, and NodeManager.
(2) Check the web consoles in a browser. Hadoop web console ports:
50070: HDFS file management   http://192.168.23.131:50070
8088:  ResourceManager        http://192.168.23.131:8088
8042:  NodeManager            http://192.168.23.131:8042
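
As a further smoke test, you can write a file into HDFS and run one of the bundled example jobs; a minimal sketch, assuming the install path /usr/local/hadoop used above (the examples jar name may vary slightly between releases):

[root@hadoop1 ~]# hdfs dfs -mkdir -p /input
[root@hadoop1 ~]# hdfs dfs -put /etc/profile /input
[root@hadoop1 ~]# hdfs dfs -ls /input
[root@hadoop1 ~]# hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 5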
   
9 Possible reasons the NameNode is missing after startup:
(1) The NameNode was never formatted (see the recovery sketch below)
(2) Environment variables are set incorrectly
(3) The IP-to-hostname binding failed
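
If formatting is the issue, a common recovery is to stop everything, clear the hadoop.tmp.dir configured above, and re-format; a hedged sketch (note that this wipes any data already stored in HDFS):

[root@hadoop1 ~]# cd /usr/local/hadoop/sbin
[root@hadoop1 sbin]# ./stop-all.sh
[root@hadoop1 sbin]# rm -rf /usr/local/hadoop/tmp/*   # clears NameNode/DataNode state
[root@hadoop1 sbin]# hdfs namenode -format
[root@hadoop1 sbin]# ./start-all.sh
[root@hadoop1 sbin]# jps                               # NameNode should now be listed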

Reference:
         http://stark-summer.iteye.com/blog/2184123
