Suppose we have three servers, with roles assigned as follows:
10.96.21.120 master
10.96.21.119 slave1
10.96.21.121 slave2
Next we deploy the Hadoop cluster according to this layout.
1: Install the JDK
Download and extract it.
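A minimal sketch of that step, assuming a tar.gz download extracted under /usr/java (the archive name here is just an example; adjust it to the file you actually downloaded):
mkdir -p /usr/java
tar -xzf jdk-6u29-linux-x64.tar.gz -C /usr/java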
vi /etc/profile

JAVA_HOME=/usr/java/jdk1.6.0_29
CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar:$CLASSPATH
PATH=$JAVA_HOME/bin:$PATH
if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
    INPUTRC=/etc/inputrc
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC
export CLASSPATH JAVA_HOME
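Reload the profile so the new variables take effect in the current shell:
source /etc/profile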
Check whether the installation succeeded.
java -version
javac -version
2: Install SSH
Command:
yum -y install openssh-server openssh-clients
Enable and start the sshd service
chkconfig sshd on
service sshd start
Open the port
/sbin/iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
service iptables save
Of course, you can also restrict port 22 to accept connections only from a specific IP range
/sbin/iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
service iptables save
The configuration file is /etc/ssh/sshd_config
English reference: http://www.cyberciti.biz/faq/how-to-installing-and-using-ssh-client-server-in-linux/
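Note that the later steps assume sshd listens on port 60022 rather than 22 (the scp/ssh commands and HADOOP_SSH_OPTS below all use port 60022). A sketch of how that can be set up, if your environment needs it (remember to open 60022 in iptables as well):
vi /etc/ssh/sshd_config
Port 60022
service sshd restart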
3: Set up the hosts file (on all three machines)
vi /etc/hosts
10.96.21.120 master
10.96.21.119 slave1
10.96.21.121 slave2
4: Create the hadoop account and set up passwordless login to the local machine (on all three machines)
Create the user
useradd hadoop
Set an empty password
passwd -d hadoop
Switch to the account
su - hadoop
Generate a public/private key pair
ssh-keygen -t rsa
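ssh-keygen will prompt for a file location and a passphrase; press Enter to accept the default location and leave the passphrase empty. If you prefer to skip the prompts, the key can also be generated non-interactively:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa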
Go to the folder containing the public key
cd ~/.ssh
Append the public key to the trusted keys
If the authorized_keys file already exists:
cat id_rsa.pub >> authorized_keys
Otherwise:
cp id_rsa.pub authorized_keys
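Passwordless login also requires restrictive permissions on the key files; if the test below still asks for a password, tightening them is a common fix:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys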
Test passwordless login to the local machine
ssh -p 22 localhost
5: Set up passwordless login from master to the slaves
Go to the hadoop user's .ssh folder (on master)
su - hadoop
cd ~/.ssh
Copy the public key to slave1 and slave2 (on master)
scp -P 60022 id_rsa.pub root@10.96.21.121:/home/hadoop/.ssh/10.96.21.120
scp -P 60022 id_rsa.pub root@10.96.21.119:/home/hadoop/.ssh/10.96.21.120
Add master's public key to the trusted keys on slave1 and slave2 (on slave1 and slave2)
su - hadoop
cd ~/.ssh
cat 10.96.21.120 >> authorized_keys
Start ssh-agent (on master)
eval `ssh-agent`
Add id_rsa to ssh-agent (load the private key into the agent) (on master)
ssh-add id_rsa
Verify
ssh -p 60022 slave1
ssh -p 60022 slave2
6: Configure Hadoop
Configure hadoop-env.sh (on all three machines)
vi hadoop-env.sh
export JAVA_HOME=/soft/jdk1.7.0_21
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_SSH_OPTS="-p 60022"
Configure core-site.xml (on all three machines); the property blocks below go inside the <configuration> element
vi core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop</value>
</property>
Configure hdfs-site.xml (on all three machines)
vi hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
</property>
Configure mapred-site.xml (on all three machines)
vi mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
</property>
Configure the masters file (on master)
vi masters
master
Configure the slaves file (on master)
vi slaves
slave1
slave2
Set permissions on the directories Hadoop uses
mkdir /app/hadoop
chmod 777 /app/hadoop
chmod 777 /soft/hadoop (the Hadoop installation directory; logs are created here by default)
7: Start the cluster
Format the NameNode (on master)
bin/hadoop namenode -format
Start HDFS (on master)
bin/start-dfs.sh
Start MapReduce
bin/start-mapred.sh
Verify with jps
On master:
26463 Jps
24660 NameNode
25417 JobTracker
24842 SecondaryNameNode
On the slaves:
23823 TaskTracker
4636 DataNode
23964 Jps
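You can also ask the NameNode for a cluster summary from the master; with a healthy cluster it should report both DataNodes as live:
bin/hadoop dfsadmin -report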
Commands used when running MapReduce jobs
bin/hadoop fs -rmr output [delete a folder]
bin/hadoop fs -mkdir input [create a folder]
bin/hadoop fs -put /soft/hadoop/file.txt input
bin/hadoop fs -get /user/hadoop/output/part-r-00000
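As a concrete end-to-end run, the examples jar shipped with Hadoop includes a wordcount job (the exact jar name depends on your Hadoop version; this is only a sketch):
bin/hadoop fs -mkdir input
bin/hadoop fs -put /soft/hadoop/file.txt input
bin/hadoop jar hadoop-examples-*.jar wordcount input output
bin/hadoop fs -cat output/part-r-00000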
If a job needs third-party libraries (for example, to write results to redis), put those jars in the lib folder on every server.
Make sure the hostnames are consistent (each machine's hostname must match the names in /etc/hosts).
SSH only needs to work from master to the slaves; the slaves do not need passwordless access to each other.
Turn off the firewall (or open the required ports).
Error: name node is in safe mode
Fix: bin/hadoop dfsadmin -safemode leave
The HDFS web UI is at http://10.96.21.120:50070/dfshealth.jsp