Deploying a Hadoop 2.7.2 Cluster
This article walks through deploying a Hadoop 2.7.2 cluster development environment on four virtual machines.
1. Introduction
This guide assumes your environment already meets the prerequisites for running Hadoop. If it does not, see the earlier post on building and installing Hadoop 2.7.2 from source on CentOS.
2. Preparation
I prepared four CentOS 7 virtual machines; let's configure them now.
# Edit the hosts file
vi /etc/hosts
# Add the IP address and hostname of every node (do this on all four machines)
192.168.10.162 hmaster
192.168.10.163 hslave1
192.168.10.164 hslave2
192.168.10.166 hslave3
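The same hosts file must be present on all four machines. A quick way to confirm that name resolution works, sketched here with the hostnames above:
# Each host should answer a single ping by name
for h in hmaster hslave1 hslave2 hslave3; do ping -c 1 "$h"; done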
# Create the hadoop user
useradd hadoop
# Set its password
passwd hadoop
# Give the hadoop user ownership of the installation directory
chown -R hadoop:hadoop /app/hadoop-2.7.2/
# Set up passwordless SSH login
su hadoop
# Press Enter through all the prompts
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hmaster
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hslave1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hslave2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hslave3
chmod 0600 ~/.ssh/authorized_keys
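Before moving on, it is worth confirming that passwordless login actually works from the master to every node. A minimal check, assuming the same four hostnames as above:
# Each command should print the remote hostname without asking for a password
for h in hmaster hslave1 hslave2 hslave3; do ssh "$h" hostname; done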
# Stop the firewall on every node (disable it as well so it stays off after a reboot)
systemctl stop firewalld
systemctl disable firewalld
# Optionally disable IPv6, which Hadoop does not support well, by editing sysctl.conf
vi /etc/sysctl.conf
# Add the following lines
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
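These settings only take effect once they are loaded. To apply and verify them without a reboot:
sysctl -p
# Should print 1 if IPv6 was disabled successfully
cat /proc/sys/net/ipv6/conf/all/disable_ipv6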
3. Deployment
3.1. Set the Hadoop environment variables
# As the hadoop user, add the Hadoop environment variables
vi ~/.bashrc
# Add the following lines
export HADOOP_PREFIX=/app/hadoop-2.7.2
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin
# Save and exit, then reload so the variables take effect
source ~/.bashrc
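A quick way to confirm the variables are in place is to ask Hadoop itself:
# Should print the version banner for Hadoop 2.7.2
hadoop version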
3.2. Configure the master
# 1. Edit core-site.xml
vi /app/hadoop-2.7.2/etc/hadoop/core-site.xml
# Add the following
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hmaster:9000/</value>
    </property>
</configuration>
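Once the environment variables from section 3.1 are in effect, you can confirm that Hadoop picks the value up:
# Should print hdfs://hmaster:9000/
hdfs getconf -confKey fs.defaultFS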
# 2. Create the NameNode directory
mkdir /home/hadoop/namenode
chown hadoop:hadoop /home/hadoop/namenode/
# 3. Edit hdfs-site.xml
vi /app/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
# Add the following
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/datanode</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/namenode</value>
    </property>
</configuration>
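With dfs.replication set to 3, every HDFS block will be stored on three of the four DataNodes. The effective value can be checked the same way as before:
# Should print 3
hdfs getconf -confKey dfs.replication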
# 4. Edit mapred-site.xml (if it does not exist yet, copy it from the bundled template first:
#    cp /app/hadoop-2.7.2/etc/hadoop/mapred-site.xml.template /app/hadoop-2.7.2/etc/hadoop/mapred-site.xml)
vi /app/hadoop-2.7.2/etc/hadoop/mapred-site.xml
# Add the following
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
# 5. Edit yarn-site.xml to configure the ResourceManager and NodeManagers
vi /app/hadoop-2.7.2/etc/hadoop/yarn-site.xml
# Add the following
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hmaster</value>
    </property>
    <property>
        <name>yarn.nodemanager.hostname</name>
        <value>hmaster</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
# 6. Edit the slaves file to list the DataNodes (the master doubles as a DataNode here)
vi /app/hadoop-2.7.2/etc/hadoop/slaves
# Add the following
hmaster
hslave1
hslave2
hslave3
3.3. Configure the DataNodes
# 1. Edit core-site.xml (repeat these steps on every slave node)
vi /app/hadoop-2.7.2/etc/hadoop/core-site.xml
# Add the following
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hmaster:9000/</value>
    </property>
</configuration>
# 2. Create the DataNode directory (the master needs one too, since it also runs a DataNode)
mkdir /home/hadoop/datanode
chown hadoop:hadoop /home/hadoop/datanode/
# 3. Edit hdfs-site.xml
vi /app/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
# Add the following
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/datanode</value>
    </property>
</configuration>
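Editing each slave by hand gets tedious as the cluster grows. One alternative, sketched here under the assumption that rsync is installed everywhere and the directory layout is identical on every node, is to push the master's configuration directory out to the slaves (the extra master-only entries such as dfs.namenode.name.dir are harmless on DataNodes):
# Run as the hadoop user on the master
for h in hslave1 hslave2 hslave3; do
  rsync -a /app/hadoop-2.7.2/etc/hadoop/ "$h":/app/hadoop-2.7.2/etc/hadoop/
done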
3.4. Startup
# On the master, switch to the hadoop user
su hadoop
# Format the NameNode (needed only on the first start)
hdfs namenode -format
# After the start command below, use jps to check the daemons. Slave nodes: DataNode. Master node: NameNode, DataNode, SecondaryNameNode
cd /app/hadoop-2.7.2
sbin/start-dfs.sh
# Start YARN, then verify that the ResourceManager and NodeManager on the master node are running
sbin/start-yarn.sh
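Once both scripts return, a quick sanity check helps before going further. jps ships with the JDK, and yarn node -list queries the ResourceManager, so it only works after YARN is up:
# List the Java daemons running on this node
jps
# List the NodeManagers registered with the ResourceManager
yarn node -list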
# Check the cluster status
hdfs dfsadmin -report
Configured Capacity: 74985766912 (69.84 GB)
Present Capacity: 21625159680 (20.14 GB)
DFS Remaining: 21625114624 (20.14 GB)
DFS Used: 45056 (44 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (4):

Name: 192.168.10.163:50010 (hslave1)
Hostname: hslave1
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 13337149440 (12.42 GB)
DFS Remaining: 5409280000 (5.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 28.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Apr 09 02:18:49 CST 2016

Name: 192.168.10.164:50010 (hslave2)
Hostname: hslave2
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 13339152384 (12.42 GB)
DFS Remaining: 5407277056 (5.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 28.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Apr 09 02:18:50 CST 2016

Name: 192.168.10.162:50010 (hmaster)
Hostname: hmaster
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 13345161216 (12.43 GB)
DFS Remaining: 5401268224 (5.03 GB)
DFS Used%: 0.00%
DFS Remaining%: 28.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Apr 09 02:18:50 CST 2016

Name: 192.168.10.166:50010 (hslave3)
Hostname: hslave3
Decommission Status : Normal
Configured Capacity: 18746441728 (17.46 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 13339144192 (12.42 GB)
DFS Remaining: 5407289344 (5.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 28.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Apr 09 02:18:51 CST 2016
You can also check the cluster status through the NameNode web UI (served on port 50070 by default in Hadoop 2.x).
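Finally, to exercise HDFS, YARN, and MapReduce end to end, you can submit the bundled example job; the jar path below assumes the standard Hadoop 2.7.2 distribution layout:
# Estimate pi with 4 map tasks and 100 samples per task; the job should also show up in the ResourceManager UI
yarn jar /app/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 4 100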