Setting Up Ganglia Monitoring for Hadoop and HBase
With the Hadoop cluster basically deployed, the next requirement is a monitoring system that can spot performance bottlenecks early and provide solid evidence for troubleshooting. Few Hadoop cluster monitoring tools are convenient to use; Ambari works well, but it cannot monitor an already-running cluster, which is unfortunate. Ganglia is reported to support Hadoop and HBase performance monitoring natively, and after trying it for a while it holds up: the metrics are comprehensive, configuration is simple, and the packages are in the EPEL repository, so installation with yum is quick.
Ganglia is an open-source cluster monitoring system, mainly used to monitor system performance: CPU, memory, disk utilization, I/O load, network traffic, and so on.
Ganglia involves three components:
gmond: the agent on each monitored node; collects metrics and sends them to gmetad.
gmetad: periodically polls gmond for data and stores it in the RRD storage engine.
ganglia-web: the web front end; renders the RRD graphs through PHP.
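The components communicate over simple channels: each gmond answers on TCP port 8649 with its current metrics as plain XML, and gmetad polls that stream and writes RRD files. A minimal sketch of what that XML looks like and how to pull one metric out of it (the sample XML below is illustrative, not captured from a real node):

```shell
# Sample of the XML a gmond answers with on TCP 8649 (illustrative).
xml='<GANGLIA_XML VERSION="3.1.7">
 <CLUSTER NAME="hadoop_hbase_cluster" OWNER="root">
  <HOST NAME="sht-sgmhadoopdn-01" IP="172.16.101.58">
   <METRIC NAME="load_one" VAL="0.42" TYPE="float" UNITS=""/>
  </HOST>
 </CLUSTER>
</GANGLIA_XML>'

# On a live node you would fetch the real stream with: nc 172.16.101.58 8649
# Extract the load_one value for the host:
load=$(printf '%s\n' "$xml" | sed -n 's/.*METRIC NAME="load_one" VAL="\([^"]*\)".*/\1/p')
echo "load_one=$load"
```

This is also a handy smoke test later: if `nc <node> 8649` prints no XML, gmond on that node is not collecting.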
1. CentOS 6 can reach the EPEL online repository via yum; install directly
[root@sht-sgmhadoopnn-01 ~]# yum install epel-release
[root@sht-sgmhadoopnn-01 ~]# yum install ganglia-web ganglia-gmetad ganglia-gmond
2. Configure the monitoring server (gmetad)
[root@sht-sgmhadoopnn-01 ~]# vi /etc/ganglia/gmetad.conf
data_source "hadoop_hbase_cluster" 172.16.101.55:8649 172.16.101.56:8649 172.16.101.58:8649 172.16.101.59:8649 172.16.101.60:8649
case_sensitive_hostnames 1
setuid_username "root"
gridname "MyGrid"
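The data_source line has three parts: a quoted cluster name, an optional polling interval in seconds, and a space-separated list of gmond endpoints; gmetad polls the first endpoint that answers and treats the rest as fallbacks. A quick sanity check of the line, run here against a copy of the string:

```shell
# Parse a data_source line: quoted cluster name, then endpoint list.
ds='data_source "hadoop_hbase_cluster" 172.16.101.55:8649 172.16.101.56:8649 172.16.101.58:8649 172.16.101.59:8649 172.16.101.60:8649'

name=$(printf '%s\n' "$ds" | sed 's/[^"]*"\([^"]*\)".*/\1/')   # the quoted field
endpoints=${ds##*\"}                                           # everything after the closing quote
set -- $endpoints
echo "cluster=$name endpoints=$#"
```

The cluster name here must match the `name` you set in every node's gmond.conf cluster section below, or the hosts will not be grouped.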
3. Link with Apache. The ganglia.conf that the package creates is problematic (by default it denies access from remote hosts), so delete it and symlink the web files into the Apache document root instead
[root@sht-sgmhadoopnn-01 ~]# rm /etc/httpd/conf.d/ganglia.conf
rm: remove regular file `/etc/httpd/conf.d/ganglia.conf'? y
[root@sht-sgmhadoopnn-01 ~]# ln -s /usr/share/ganglia /var/www/html/ganglia
[root@sht-sgmhadoopnn-01 ~]#
4. Start Apache and Ganglia, and enable them at boot
[root@sht-sgmhadoopnn-01 ~]# chown -R root:root /var/lib/ganglia
[root@sht-sgmhadoopnn-01 ~]# service httpd start
Starting httpd: [ OK ]
[root@sht-sgmhadoopnn-01 ~]# service gmetad start
Starting GANGLIA gmetad: [ OK ]
[root@sht-sgmhadoopnn-01 ~]# chkconfig httpd on
[root@sht-sgmhadoopnn-01 ~]# chkconfig gmetad on
[root@sht-sgmhadoopnn-01 ~]#
5. Install and configure the monitored nodes (same configuration on every node)
# yum install ganglia-gmond
# vi /etc/ganglia/gmond.conf
globals {
daemonize = yes
setuid = yes
user = root                     # run as root, matching setuid_username in gmetad.conf
debug_level = 0
max_udp_msg_len = 1472
mute = no
deaf = no
allow_extra_data = yes
host_dmax = 86400 /* secs. Expires (removes from web interface) hosts in 1 day */
host_tmax = 20 /* secs */
cleanup_threshold = 300 /* secs */
gexec = no
# By default gmond will use reverse DNS resolution when displaying your hostname
# Uncommenting the following value will override that behavior.
# override_hostname = "mywebserver.domain.com"
# If you are not using multicast this value should be set to something other than 0.
# Otherwise if you restart the aggregator gmond you will get empty graphs. 60 seconds is reasonable
send_metadata_interval = 30 /* secs; non-zero because unicast is used below */
}
cluster {
name = "hadoop_hbase_cluster"   # cluster name; must match the data_source name in gmetad.conf
owner = "root"
latlong = "unspecified"
url = "unspecified"
}
/* The host section describes attributes of the host, like the location */
host {
location = "unspecified"
}
/* Feel free to specify as many udp_send_channels as you like. Gmond
   used to only support having a single channel */
udp_send_channel {
#bind_hostname = yes # Highly recommended, soon to be default.
                     # This option tells gmond to use a source address
                     # that resolves to the machine's hostname. Without
                     # this, the metrics may appear to come from any
                     # interface and the DNS names associated with
                     # those IPs will be used to create the RRDs.
#mcast_join = 239.2.11.71       # multicast disabled; unicast is used instead
host = 172.16.101.55            # destination: the gmetad server IP/hostname
port = 8649                     # default port
ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
#mcast_join = 239.2.11.71
port = 8649
bind = 172.16.101.55            # this node's own IP/hostname (receive address)
retry_bind = true
# Size of the UDP buffer. If you are handling lots of metrics you really
# should bump it up to e.g. 10MB or even higher.
# buffer = 10485760
}
……
6. Sync the configuration to the other nodes
[root@sht-sgmhadoopnn-01 ~]# scp /etc/ganglia/gmond.conf
gmond.conf 100% 8769 8.6KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]# scp /etc/ganglia/gmond.conf
gmond.conf 100% 8769 8.6KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]# scp /etc/ganglia/gmond.conf
gmond.conf 100% 8769 8.6KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]# scp /etc/ganglia/gmond.conf
gmond.conf 100% 8769 8.6KB/s 00:00
#### On each node, edit gmond.conf so that bind is that node's own IP
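The distribution and per-node edit can be scripted in one loop. A sketch under stated assumptions: the node list is hypothetical, the real push would use scp/ssh (shown only as comments), and the sed rewrite is demonstrated on local copies so the edit itself is visible and checkable:

```shell
# Generate a per-node gmond.conf with bind set to that node's own IP.
# The scp step is commented out; here we only exercise the rewrite locally.
nodes="172.16.101.55 172.16.101.56 172.16.101.58 172.16.101.59 172.16.101.60"

tmpl=$(mktemp)
printf 'udp_recv_channel {\n  port = 8649\n  bind = 172.16.101.55\n  retry_bind = true\n}\n' > "$tmpl"

for ip in $nodes; do
  cp "$tmpl" "gmond.conf.$ip"
  sed -i "s/^\(  bind = \).*/\1$ip/" "gmond.conf.$ip"
  # on the real cluster: scp "gmond.conf.$ip" "$ip:/etc/ganglia/gmond.conf"
done

result=$(grep '^  bind' gmond.conf.172.16.101.60)   # confirm one node's rewrite
echo "$result"
rm -f "$tmpl" gmond.conf.172.16.101.*
```

Forgetting this per-node edit is the most common reason a node is missing from the web UI: the gmond cannot bind 172.16.101.55 on another machine and receives nothing.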
7. Start gmond (on every node)
[root@sht-sgmhadoopnn-01 ~]# service gmond start
[root@sht-sgmhadoopnn-02 ~]# service gmond start
[root@sht-sgmhadoopdn-01 ~]# service gmond start
[root@sht-sgmhadoopdn-02 ~]# service gmond start
[root@sht-sgmhadoopdn-03 ~]# service gmond start
8. Add Hadoop to Ganglia monitoring: uncomment and edit the following lines in the file (same configuration on every node)
[root@sht-sgmhadoopnn-01 ~]# vi $HADOOP_HOME/etc/hadoop/hadoop-metrics2.properties
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
*.sink.ganglia.supportsparse=true
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
# With multiple ganglia collectors, list the addresses comma-separated;
# every daemon points at the gmetad server.
namenode.sink.ganglia.servers=172.16.101.55:8649
datanode.sink.ganglia.servers=172.16.101.55:8649
resourcemanager.sink.ganglia.servers=172.16.101.55:8649
nodemanager.sink.ganglia.servers=172.16.101.55:8649
Restart the Hadoop daemons after distributing this file: hadoop-metrics2.properties is only read at daemon startup.
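A typo here fails silently (the daemon simply sends nothing to Ganglia), so it is worth grepping for the sink lines before restarting. A minimal check, run here against an inline copy of the four server lines; on the cluster you would point the grep at $HADOOP_HOME/etc/hadoop/hadoop-metrics2.properties instead:

```shell
# Verify every daemon prefix has a ganglia sink target configured.
props='namenode.sink.ganglia.servers=172.16.101.55:8649
datanode.sink.ganglia.servers=172.16.101.55:8649
resourcemanager.sink.ganglia.servers=172.16.101.55:8649
nodemanager.sink.ganglia.servers=172.16.101.55:8649'

missing=0
for d in namenode datanode resourcemanager nodemanager; do
  printf '%s\n' "$props" | grep -q "^$d\.sink\.ganglia\.servers=" \
    || { echo "missing sink for: $d"; missing=1; }
done
echo "missing=$missing"
```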
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties
hadoop-metrics2.properties 100% 3183 3.1KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties sht-sgmhadoopdn-03:/hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties
hadoop-metrics2.properties 100% 3183 3.1KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties sht-sgmhadoopdn-02:/hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties
hadoop-metrics2.properties 100% 3183 3.1KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties sht-sgmhadoopdn-01:/hadoop/hadoop-2.7.2/etc/hadoop/hadoop-metrics2.properties
hadoop-metrics2.properties 100% 3183 3.1KB/s 00:00
[root@sht-sgmhadoopnn-01 ~]#
9. Add HBase to Ganglia monitoring by appending the following (same configuration on every node)
[root@sht-sgmhadoopnn-01 ~]# vi /hadoop/hbase/conf/hadoop-metrics2-hbase.properties
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
hbase.sink.ganglia.period=10
hbase.sink.ganglia.servers=172.16.101.55:8649
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hbase/conf/hadoop-metrics2-hbase.properties sht-sgmhadoopnn-02:/hadoop/hbase/conf/hadoop-metrics2-hbase.properties
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hbase/conf/hadoop-metrics2-hbase.properties sht-sgmhadoopdn-01:/hadoop/hbase/conf/hadoop-metrics2-hbase.properties
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hbase/conf/hadoop-metrics2-hbase.properties sht-sgmhadoopdn-02:/hadoop/hbase/conf/hadoop-metrics2-hbase.properties
[root@sht-sgmhadoopnn-01 ~]# scp /hadoop/hbase/conf/hadoop-metrics2-hbase.properties sht-sgmhadoopdn-03:/hadoop/hbase/conf/hadoop-metrics2-hbase.properties
10. View the results
Browse to http://<gmetad-host>/ganglia (the symlink created in step 3). Graphs for each host, plus the Hadoop and HBase metric groups, should appear within a few minutes of the daemons restarting.
From the ITPUB blog. Link: http://blog.itpub.net/30089851/viewspace-2126299/. Please credit the source when republishing.