CentOS 6.5: HDFS Pseudo-Distributed Deployment as the hadoop User (a Single Node Cluster)

Posted by 13545163656 on 2018-05-18
Foreword:

The previous post deployed HDFS in pseudo-distributed mode successfully as the root user, but in real production Hadoop should be operated and configured as a dedicated hadoop user, which this post sets up.
Official reference: common/SingleCluster.html

1. Check whether the hadoop user already exists (if id reports "no such user", create it as in step 2)

    [root@hadoop001 ~]# id hadoop
    uid=516(hadoop) gid=516(hadoop) groups=516(hadoop)

2. Create the hadoop user

    [root@hadoop001 ~]# useradd hadoop
    [root@hadoop001 ~]# passwd hadoop
    Changing password for user hadoop.
    New password:
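
When scripting this, the check and the creation can be combined; a minimal sketch (the password below is a placeholder, not from the original post):

    # create the hadoop user only if the id lookup fails
    id hadoop >/dev/null 2>&1 || useradd hadoop
    # set a password non-interactively; "ChangeMe" is a placeholder
    echo "hadoop:ChangeMe" | chpasswd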

3. Stop the Hadoop services started by root and hand ownership to the hadoop user

    [root@hadoop001 ~]# kill -9 $(pgrep -f hadoop)
    [root@hadoop001 ~]# jps
    15803 Jps
    [root@hadoop001 software]# pwd
    /opt/software
    [root@hadoop001 software]# chown -R hadoop:hadoop hadoop-2.8.1
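
A quick sanity check before switching users (a sketch, assuming the install directory /opt/software/hadoop-2.8.1 from above):

    # pgrep exits non-zero when nothing matches, so the echo fires only when clean
    pgrep -f hadoop || echo "no hadoop processes left"
    # the owner and group should now both be hadoop
    ls -ld /opt/software/hadoop-2.8.1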


4. Configure passwordless SSH for the hadoop user

# Set up SSH trust for the hadoop user (a new user needs its own SSH setup)
# When Hadoop is started by a non-root user, authorized_keys must be mode 0600
    [root@hadoop001 .ssh]# su - hadoop
    [hadoop@hadoop001 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Your identification has been saved in /home/hadoop/.ssh/id_rsa.
    Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
    The key fingerprint is:
    68:5e:33:dd:2a:d2:01:e7:4e:4e:7b:4a:72:7d:67:83 hadoop@hadoop001
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |      . .        |
    |       = . .     |
    |      o S . .    |
    |     o B * . .   |
    |      + O + E +  |
    |       = + . o . |
    |        .        |
    +-----------------+
    [hadoop@hadoop001 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    [hadoop@hadoop001 ~]$ chmod 0600 ~/.ssh/authorized_keys

    # Verify passwordless SSH as the hadoop user
    [hadoop@hadoop001 ~]$ ssh -p2222 localhost date      -- sshd listens on port 2222 here
    Fri May 18 17:32:59 EDT 2018
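
Because sshd listens on 2222 rather than the default 22, start-dfs.sh (which reaches each node over ssh) must be told the port as well; one way, assuming hadoop-env.sh under /opt/software/hadoop-2.8.1/etc/hadoop, is:

    # append to etc/hadoop/hadoop-env.sh: make the start/stop scripts ssh on port 2222
    export HADOOP_SSH_OPTS="-p 2222"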



5. Change the service IPs of the three HDFS daemons (NameNode, DataNode, SecondaryNameNode)

# Reads and writes all go through the NameNode service IP
    [root@hadoop001 hadoop]# pwd
    /opt/software/hadoop-2.8.1/etc/hadoop

    [root@hadoop001 hadoop]# vim core-site.xml
    -- add the <property> below, changing localhost to the host IP
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>
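
To confirm which value Hadoop actually picked up, hdfs getconf can read a single key; a quick check (the output assumes localhost was replaced with 192.168.0.129, the IP used in the rest of this post):

    [hadoop@hadoop001 ~]$ hdfs getconf -confKey fs.defaultFS
    hdfs://192.168.0.129:9000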

# Change the DataNode service IP

    [root@hadoop001 hadoop]# pwd
    /opt/software/hadoop-2.8.1/etc/hadoop

    [root@hadoop001 hadoop]# vim slaves
    localhost    -- change localhost to the host IP; with multiple DataNodes, list one IP per line (see the example below)
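
For illustration, a slaves file for three DataNodes would look like the listing below (only 192.168.0.129 comes from this post; the other two IPs are made-up examples):

    [hadoop@hadoop001 hadoop]$ cat slaves
    192.168.0.129
    192.168.0.130
    192.168.0.131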

# Change the SecondaryNameNode service IP
    [root@hadoop001 hadoop]# vim hdfs-site.xml
    -- add the <property> entries below
    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>192.168.0.129:50090</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.https-address</name>
            <value>192.168.0.129:50091</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>
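
The same getconf trick verifies the SecondaryNameNode address (output assumes the values above):

    [hadoop@hadoop001 ~]$ hdfs getconf -confKey dfs.namenode.secondary.http-address
    192.168.0.129:50090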


6. Remove root's DFS files and format the DFS

The earlier root-based deployment left DFS metadata under /tmp owned by root, which the hadoop user cannot reuse, so clear it and format again:

    [root@hadoop001 tmp]# rm -rf /tmp/hadoop-* /tmp/hsperfdata-*
    [root@hadoop001 tmp]# su - hadoop
    [hadoop@hadoop001 hadoop-2.8.1]$ hdfs namenode -format
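
A successful format prints "... has been successfully formatted" and creates the name directory; assuming the default hadoop.tmp.dir (/tmp/hadoop-${user.name}), a quick check would look like:

    [hadoop@hadoop001 ~]$ ls /tmp/hadoop-hadoop/dfs/name/current
    fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION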


7. Start HDFS as the hadoop user

    [hadoop@hadoop001 sbin]$ pwd
    /opt/software/hadoop-2.8.1/sbin
    [hadoop@hadoop001 sbin]$ ./start-dfs.sh       -- the first start prompts for a password / host-key confirmation
    Starting namenodes on [hadoop001]
    hadoop001: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop001.out
    192.168.0.129: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop001.out
    Starting secondary namenodes [hadoop001]
    hadoop001: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
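
To confirm all three daemons came up, jps plus a dfsadmin report give a quick check (the PIDs below are illustrative); the NameNode web UI should also respond at the default port, http://192.168.0.129:50070:

    [hadoop@hadoop001 sbin]$ jps
    16210 NameNode
    16330 DataNode
    16521 SecondaryNameNode
    16642 Jps
    [hadoop@hadoop001 sbin]$ hdfs dfsadmin -report    -- should report one live DataNode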

Source: ITPUB Blog, http://blog.itpub.net/31441024/viewspace-2154712/
