Compiling Hadoop 2.7.1 on RedHat 6.4 x64 and Installing It in Distributed Mode

Published by 梓沐 on 2016-02-15
The software used in this article is available on Baidu Netdisk for convenient download.

1. System Environment

```
[root@master ~]# uname -a
Linux master 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux

-- or

[root@master /]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
```
2. Pre-installation Preparation
2.1 Disable the Firewall

```
-- Check the firewall status
[root@master ~]# service iptables status

-- Stop the firewall
[root@master ~]# service iptables stop

-- Permanently disable the firewall (survives reboots)
[root@master ~]# chkconfig iptables off

```
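To confirm the firewall will stay disabled across runlevels, the chkconfig state can be listed (an optional check, not in the original steps); all runlevels should show off:

```
[root@master ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
```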
2.2 Verify That SSH Is Installed

```
-- Method 1:
[root@master ~]# which ssh
/usr/bin/ssh

-- Method 2:
[root@master ~]# service sshd status
openssh-daemon (pid  1983) is running...

-- Method 3:
[root@master ~]# rpm -qa openssh openssl

-- Method 4 (check whether the daemon is running):
[root@master ~]# ps -ef|grep ssh

```
2.3 Configure the Network Interface

```
[root@master ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=08:00:27:C0:18:81
TYPE=Ethernet
UUID=8f07e2a4-702d-4e4e-a7c9-a0d4d1c9a880
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.8.205
NETMASK=255.255.255.0
GATEWAY=192.168.8.1
DNS1=221.228.255.1
```
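If the interface settings were changed, the network service has to be restarted for them to take effect (a standard RHEL 6 step the original omits):

```
[root@master ~]# service network restart
```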
2.4 Configure /etc/hosts

```
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

-- Add the following entries
192.168.8.205 master
192.168.8.223 slave
```
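Name resolution can then be verified from each machine (an optional quick check):

```
[root@master ~]# ping -c 1 slave
[root@slave ~]# ping -c 1 master
```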
2.5 Set the Hostname to master

```
[root@master ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
```
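The setting in /etc/sysconfig/network only takes effect after a reboot; to rename the running system immediately:

```
[root@master ~]# hostname master
```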
2.6 Create the hadoop User

```
-- Create the user
[root@master ~]# useradd hadoop

-- Set the user's password
[root@master ~]# passwd hadoop
```
2.7 Configure Passwordless SSH Login

On the master machine:

```
-- Switch to the hadoop user
[root@master ~]# su - hadoop

-- Generate an SSH key pair; press Enter three times to accept the defaults,
-- creating the public key (id_rsa.pub) and private key (id_rsa)
[hadoop@master ~]$ ssh-keygen -t rsa

-- Append id_rsa.pub to the authorized_keys file
[hadoop@master ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

-- Fix permissions
[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys

-- Switch back to root and edit the sshd configuration
[hadoop@master ~]$ exit
[root@master ~]# vim /etc/ssh/sshd_config
 RSAAuthentication yes #enable RSA authentication
 PubkeyAuthentication yes #enable public/private key authentication
 AuthorizedKeysFile .ssh/authorized_keys #path to the authorized public keys file

-- Restart sshd so the changes take effect
[root@master ~]# service sshd restart

```
On the slave machine:

```
-- Switch to the hadoop user
[root@slave ~]# su - hadoop

-- Create the .ssh directory
[hadoop@slave ~]$ mkdir ~/.ssh

-- Back on the master host, switch to the hadoop user
[root@master ~]# su - hadoop

-- Copy the public key to the slave machine with scp
[hadoop@master ~]$ scp ~/.ssh/id_rsa.pub hadoop@slave:~/.ssh

-- On the slave machine, as the hadoop user, append the key
[hadoop@slave ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

-- Set permissions (required on both master and slave)
chmod 700 ~/.ssh/
chmod 700 /home/hadoop
chmod 600 ~/.ssh/authorized_keys
```
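Passwordless login can now be tested from the master; the first connection will ask once to accept the slave's host key:

```
[hadoop@master ~]$ ssh slave date
```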
3. Install the Required Software
3.1 Install the JDK

```
-- Create the JDK directory
mkdir /usr/java

-- Extract the downloaded jdk-8u60-linux-x64.tar.gz into /usr/java
tar xf jdk-8u60-linux-x64.tar.gz -C /usr/java

-- Configure the JDK environment variables
[root@master ~]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_60
export PATH=$PATH:$JAVA_HOME/bin

-- Apply the changes
[root@master ~]# source /etc/profile

-- Verify the installation
[hadoop@master ~]$ java -version
java version "1.5.0"
gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-3)

Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

-- Problem: the output above is the system's bundled GNU Java (gij), not the
-- JDK just installed. To replace it with your own installation, see:
-- http://blog.csdn.net/u011364306/article/details/48375653
```
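One way to make the newly installed JDK take precedence over the bundled gij is RHEL's alternatives mechanism; a minimal sketch, assuming the JDK sits in /usr/java/jdk1.8.0_60 (the linked article describes the fix in detail):

```
-- Register the new JDK with a high priority, then select it
[root@master ~]# alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_60/bin/java 200
[root@master ~]# alternatives --config java
```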
3.2 Install the Other Components

```
[root@master ~]# yum install svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
```
3.3 Install Maven

```
-- Download apache-maven and copy the archive to /usr/local/:
apache-maven-3.3.3-bin.tar.gz

-- Extract the archive
[root@master local]# tar xf apache-maven-3.3.3-bin.tar.gz

-- Create a symlink for maven
[root@master local]# ln -s apache-maven-3.3.3 apache-maven

-- Configure the Maven environment variables
[root@master ~]# vim /etc/profile
export M2_HOME=/usr/local/apache-maven

-- Append $M2_HOME/bin to PATH after the Java entry
export PATH=$PATH:$JAVA_HOME/bin:$M2_HOME/bin

-- Apply the changes
[root@master ~]# source /etc/profile

-- Verify the installation
[root@master local]# mvn -v
Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T19:57:37+08:00)
Maven home: /usr/local/apache-maven
Java version: 1.8.0_60, vendor: Oracle Corporation
Java home: /usr/java/jdk1.8.0_60/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"
```
3.4 Install Ant

```
-- Download apache-ant and copy the archive to /usr/local/:
apache-ant-1.9.6-bin.tar.gz

-- Extract the archive
[root@master local]# tar xf apache-ant-1.9.6-bin.tar.gz

-- Configure the Ant environment variables
[root@master ~]# vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.6
export PATH=$PATH:$ANT_HOME/bin

-- Apply the changes
[root@master ~]# source /etc/profile
```
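As with Maven, the installation can be verified; the output should report version 1.9.6:

```
[root@master local]# ant -version
```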
3.5 Install FindBugs

```
-- Download findbugs and copy the archive to /usr/local/:
findbugs-3.0.1.tar.gz

-- Extract the archive
[root@master local]# tar xf findbugs-3.0.1.tar.gz

-- Configure the FindBugs environment variables
[root@master ~]# vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.1
export PATH=$PATH:$FINDBUGS_HOME/bin

-- Apply the changes
[root@master ~]# source /etc/profile
```
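If the PATH was set correctly, FindBugs should likewise report its version (3.0.1) when asked:

```
[root@master local]# findbugs -version
```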
3.6 Install protobuf

```
-- Download protobuf and copy the archive to /usr/local/:
protobuf-2.5.0.tar.gz

-- Extract the archive
[root@master local]# tar xf protobuf-2.5.0.tar.gz

-- Build and install
[root@master local]# cd protobuf-2.5.0
[root@master protobuf-2.5.0]# ./configure --prefix=/usr/local
[root@master protobuf-2.5.0]# make && make install
```
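Hadoop 2.7.1 requires exactly protoc 2.5.0, so it is worth verifying the version before compiling:

```
[root@master protobuf-2.5.0]# protoc --version
libprotoc 2.5.0
```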
4. Compile the Hadoop Source Code

```
-- Download the Hadoop source and copy the archive to /usr/local/:
hadoop-2.7.1-src.tar.gz

-- Extract the archive
[root@master local]# tar xf hadoop-2.7.1-src.tar.gz

-- Change into the source directory
[root@master local]# cd hadoop-2.7.1-src/

-- This step takes quite a while; on my VM it took close to an hour
[root@master hadoop-2.7.1-src]# mvn package -Pdist,native,docs -DskipTests -Dtar
```
4.1 Change the Maven Mirror (best done before running mvn package above)

```
[root@master local]# cd /usr/local/apache-maven/conf
[root@master conf]# mv settings.xml settings.xml.bak
[root@master conf]# touch settings.xml

-- Maven central mirror configuration (switched to the oschina Nexus for faster access)
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>nexus-osc</id>
      <mirrorOf>*</mirrorOf>
      <name>Nexus osc</name>
      <url>http://maven.oschina.net/content/groups/public/</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>jdk17</id>
      <activation>
        <activeByDefault>true</activeByDefault>
        <jdk>1.7</jdk>
      </activation>
      <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <maven.compiler.compilerVersion>1.7</maven.compiler.compilerVersion>
      </properties>
      <repositories>
        <repository>
          <id>nexus</id>
          <name>local private nexus</name>
          <url>http://maven.oschina.net/content/groups/public/</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>nexus</id>
          <name>local private nexus</name>
          <url>http://maven.oschina.net/content/groups/public/</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
</settings>
```
4.2 After compilation finishes, hadoop-2.7.1 and the compiled 64-bit hadoop-2.7.1.tar.gz are generated under /usr/local/hadoop-2.7.1-src/hadoop-dist/target

```
[root@master conf]# cd /usr/local/hadoop-2.7.1-src/hadoop-dist/target
[root@master target]# ll
total 844744
drwxr-xr-x. 2 root root      4096 Sep 11 11:25 antrun
-rw-r--r--. 1 root root      1867 Sep 11 11:25 dist-layout-stitching.sh
-rw-r--r--. 1 root root       640 Sep 11 11:26 dist-tar-stitching.sh
drwxr-xr-x. 9 root root      4096 Sep 11 11:25 hadoop-2.7.1
-rw-r--r--. 1 root root 285910336 Sep 11 11:27 hadoop-2.7.1.tar.gz
-rw-r--r--. 1 root root      2823 Sep 11 11:26 hadoop-dist-2.7.1.jar
-rw-r--r--. 1 root root 579067602 Sep 11 11:29 hadoop-dist-2.7.1-javadoc.jar
drwxr-xr-x. 2 root root      4096 Sep 11 11:28 javadoc-bundle-options
drwxr-xr-x. 2 root root      4096 Sep 11 11:26 maven-archiver
drwxr-xr-x. 2 root root      4096 Sep 11 11:25 test-dir
```
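To confirm that the native library really was built for 64-bit, the file command can be used (an optional check; it should report an ELF 64-bit x86-64 shared object):

```
[root@master target]# file hadoop-2.7.1/lib/native/libhadoop.so.1.0.0
```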
5. Configure Hadoop
5.1 Basic Setup

```
-- Copy the build output into the hadoop user's home directory
[root@master /]# cp -r /usr/local/hadoop-2.7.1-src/hadoop-dist/target/hadoop-2.7.1 /home/hadoop/

-- Grant ownership
[root@master /]# chown -R hadoop.hadoop /home/hadoop/hadoop-2.7.1/

-- Configure the Hadoop environment variables
[root@master ~]# vim /etc/profile
export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin

-- Apply the changes
[root@master ~]# source /etc/profile

-- Switch to the hadoop user and create the base directories
[root@master ~]# su - hadoop
[hadoop@master ~]$ cd hadoop-2.7.1/
[hadoop@master hadoop-2.7.1]$ mkdir -p dfs/name
[hadoop@master hadoop-2.7.1]$ mkdir -p dfs/data
[hadoop@master hadoop-2.7.1]$ mkdir -p tmp
```
5.2 Configure All Slave Nodes

```
[hadoop@master hadoop-2.7.1]$ cd etc/hadoop/
[hadoop@master hadoop]$ cat slaves
slave
#slave1
#slave2
#add one line for each slave node
```
5.3 Edit hadoop-env.sh and yarn-env.sh

```
[hadoop@master hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_60

[hadoop@master hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_60
```
5.4 Edit core-site.xml

```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.8.205:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-2.7.1/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
```
5.5 Edit hdfs-site.xml

Note: with only one slave (a single DataNode), the replication factor of 3 below cannot actually be met; for this two-node layout, setting dfs.replication to 1 would be more appropriate.

```
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.7.1/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.7.1/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.8.205:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
```
5.6 Edit mapred-site.xml

```
[hadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml
```

```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.8.205:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.8.205:19888</value>
  </property>
</configuration>
```
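Note that the JobHistory server configured above is not launched by the start-dfs.sh/start-yarn.sh scripts used later; once the cluster is running, it can be started separately with the stock Hadoop 2.x helper:

```
[hadoop@master hadoop-2.7.1]$ ./sbin/mr-jobhistory-daemon.sh start historyserver
```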
5.7 Edit yarn-site.xml

```
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.8.205:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.8.205:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.8.205:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.8.205:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.8.205:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1024</value>
  </property>
</configuration>
```
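The steps above configure only the master; for the distributed setup, the same Hadoop directory, including these configuration files, must also exist on the slave node. A minimal sketch, assuming the hadoop user was created on the slave as in 2.6:

```
-- Copy the configured Hadoop directory to the slave node
[hadoop@master ~]$ scp -r /home/hadoop/hadoop-2.7.1 hadoop@slave:/home/hadoop/
```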
5.8 Format the NameNode

```
-- Change to the Hadoop home directory
[hadoop@master hadoop]$ cd ../..
[hadoop@master hadoop-2.7.1]$ ls
bin  dfs  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp

-- Format the NameNode
[hadoop@master hadoop-2.7.1]$ ./bin/hdfs namenode -format
```
5.9 Start the Services

```
[hadoop@master hadoop-2.7.1]$ ./sbin/start-dfs.sh
[hadoop@master hadoop-2.7.1]$ ./sbin/start-yarn.sh
```

5.10 Check That the Services Are Up

```
-- YARN ResourceManager web UI
http://192.168.8.205:8088

-- HDFS NameNode web UI
http://192.168.8.205:50070
```
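The running daemons can also be listed with jps on each node; with this layout, the master should show NameNode, SecondaryNameNode and ResourceManager, and the slave should show DataNode and NodeManager:

```
[hadoop@master hadoop-2.7.1]$ jps
[hadoop@slave ~]$ jps
```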

From the "ITPUB Blog"; link: http://blog.itpub.net/29812844/viewspace-1988775/. If you repost this article, please credit the source; otherwise legal liability may be pursued.
