apache-storm-1.0.2.tar.gz cluster setup (3 nodes) (illustrated, non-HA and HA)

Posted by weixin_30924079 on 2020-04-04

 No long preamble; straight to the useful part.

 

Choosing a Storm version

  Here I am using apache-storm-1.0.2.tar.gz.

(For the older release, see the companion post: apache-storm-0.9.6.tar.gz cluster setup (3 nodes, illustrated).)

Why move from storm-0.9.6 to storm-1.0.2?

  A Storm cluster, too, consists of master nodes and worker nodes.


Storm version history:
  storm 0.9.x
  storm 0.10.x
  storm 1.x
    In these versions Storm's core source is a mix of Java and Clojure.
  storm 2.x
    From this version on, the core was rewritten entirely in Java.
    (Alibaba rewrote Storm in Java early on and released it as JStorm; JStorm was later
    contributed back to Apache Storm and drove the Java rewrite, which is how the
    storm 2.x line came about.)
Note:
  In storm 0.9.x, a cluster supports only a single nimbus node, so the master is a single point of failure.
  From storm 0.10.x onward, a cluster can run multiple nimbus nodes: one is elected leader and does the actual work, while the others stand by offline.
  Master node (control node) [one or more]
    Role: distributes code and monitors its execution.
    nimbus
    ui: lets you inspect cluster information and the state of running topologies
    logviewer: with several master nodes, you sometimes need to inspect a master's logs too
  Worker node [one or more]
    Role: spawns worker processes that execute the tasks.
    supervisor
    logviewer: lets you read a topology's run logs through the web UI

Installing Storm in local mode

  Local mode simulates all the functionality of a Storm cluster inside a single process, which is very convenient for development and testing. Running a topology in local mode is much like running it on a real cluster.

  To create an in-process "cluster", instantiate a LocalCluster object (note that in Storm 1.x the packages moved from backtype.storm to org.apache.storm):

import org.apache.storm.LocalCluster;   // backtype.storm.LocalCluster in Storm 0.9.x
LocalCluster cluster = new LocalCluster();

  You can then submit a topology through the LocalCluster's submitTopology method, which behaves exactly like the corresponding StormSubmitter method. submitTopology takes three arguments: the topology name, the topology configuration, and the topology object itself. A running topology can be terminated with killTopology, which takes the topology name as its only argument.

  To shut down the local cluster, simply call:

cluster.shutdown();

  and you are done.

Installing Storm in distributed mode (this post)

Official installation guide:

http://storm.apache.org/releases/current/Setting-up-a-Storm-cluster.html

 Machine layout: download the Storm package into the /home/hadoop/app directory on each of master, slave1 and slave2.

  The layout in this post is:

  master       nimbus 

  slave1        nimbus    supervisor 

  slave2        supervisor 

  1. Download apache-storm-1.0.2.tar.gz

http://archive.apache.org/dist/storm/apache-storm-1.0.2/

 

 

Alternatively, download it directly into the installation directory:

wget http://apache.fayea.com/storm/apache-storm-1.0.2/apache-storm-1.0.2.tar.gz

 

  Here I chose to download the package first and then upload it to the servers.

2. Upload the tarball

 

 

[hadoop@master app]$ ll
total 64
drwxrwxr-x  10 hadoop hadoop 4096 May 21 14:23 apache-storm-0.9.6
drwxrwxr-x   5 hadoop hadoop 4096 May  1 15:21 azkaban
drwxrwxr-x   7 hadoop hadoop 4096 Apr 21 15:43 elasticsearch-2.4.0
drwxrwxr-x   6 hadoop hadoop 4096 Apr 21 12:12 elasticsearch-2.4.3
lrwxrwxrwx   1 hadoop hadoop   20 Apr 21 15:00 es -> elasticsearch-2.4.0/
lrwxrwxrwx   1 hadoop hadoop   11 Apr 20 12:19 flume -> flume-1.6.0
drwxrwxr-x   7 hadoop hadoop 4096 Apr 20 12:17 flume-1.6.0
drwxrwxr-x   7 hadoop hadoop 4096 Apr 20 12:00 flume-1.7.0
lrwxrwxrwx.  1 hadoop hadoop   12 Apr 12 11:27 hadoop -> hadoop-2.6.0
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 12 16:33 hadoop-2.6.0
lrwxrwxrwx.  1 hadoop hadoop   13 Apr 12 11:28 hbase -> hbase-0.98.19
drwxrwxr-x.  8 hadoop hadoop 4096 Apr 12 17:27 hbase-0.98.19
lrwxrwxrwx.  1 hadoop hadoop   10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop 4096 May 14 14:08 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop   11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop 4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop   18 May  3 21:41 kafka -> kafka_2.11-0.8.2.2
drwxr-xr-x   6 hadoop hadoop 4096 May  3 22:01 kafka_2.11-0.8.2.2
lrwxrwxrwx   1 hadoop hadoop   26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop 4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop   12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop 4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx.  1 hadoop hadoop   11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop 4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx   1 hadoop hadoop   19 May 21 13:17 storm -> apache-storm-0.9.6/
lrwxrwxrwx.  1 hadoop hadoop   15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ rz

[hadoop@master app]$ ll
total 175032
drwxrwxr-x  10 hadoop hadoop      4096 May 21 14:23 apache-storm-0.9.6
-rw-r--r--   1 hadoop hadoop 179161400 May 21 15:31 apache-storm-1.0.2.tar.gz
drwxrwxr-x   5 hadoop hadoop      4096 May  1 15:21 azkaban
drwxrwxr-x   7 hadoop hadoop      4096 Apr 21 15:43 elasticsearch-2.4.0
drwxrwxr-x   6 hadoop hadoop      4096 Apr 21 12:12 elasticsearch-2.4.3
lrwxrwxrwx   1 hadoop hadoop        20 Apr 21 15:00 es -> elasticsearch-2.4.0/
lrwxrwxrwx   1 hadoop hadoop        11 Apr 20 12:19 flume -> flume-1.6.0
drwxrwxr-x   7 hadoop hadoop      4096 Apr 20 12:17 flume-1.6.0
drwxrwxr-x   7 hadoop hadoop      4096 Apr 20 12:00 flume-1.7.0
lrwxrwxrwx.  1 hadoop hadoop        12 Apr 12 11:27 hadoop -> hadoop-2.6.0
drwxr-xr-x. 10 hadoop hadoop      4096 Apr 12 16:33 hadoop-2.6.0
lrwxrwxrwx.  1 hadoop hadoop        13 Apr 12 11:28 hbase -> hbase-0.98.19
drwxrwxr-x.  8 hadoop hadoop      4096 Apr 12 17:27 hbase-0.98.19
lrwxrwxrwx.  1 hadoop hadoop        10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop      4096 May 14 14:08 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop        11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop      4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop      4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop        18 May  3 21:41 kafka -> kafka_2.11-0.8.2.2
drwxr-xr-x   6 hadoop hadoop      4096 May  3 22:01 kafka_2.11-0.8.2.2
lrwxrwxrwx   1 hadoop hadoop        26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop      4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop        12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop      4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx.  1 hadoop hadoop        11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop      4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx   1 hadoop hadoop        19 May 21 13:17 storm -> apache-storm-0.9.6/
lrwxrwxrwx.  1 hadoop hadoop        15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop      4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ 

  slave1 and slave2 are done the same way; not repeated here.

 3. Extract the tarball and set the user/group ownership and permissions

[hadoop@master app]$ ll
total 175032
drwxrwxr-x  10 hadoop hadoop      4096 May 21 14:23 apache-storm-0.9.6
-rw-r--r--   1 hadoop hadoop 179161400 May 21 15:31 apache-storm-1.0.2.tar.gz
drwxrwxr-x   5 hadoop hadoop      4096 May  1 15:21 azkaban
drwxrwxr-x   7 hadoop hadoop      4096 Apr 21 15:43 elasticsearch-2.4.0
drwxrwxr-x   6 hadoop hadoop      4096 Apr 21 12:12 elasticsearch-2.4.3
lrwxrwxrwx   1 hadoop hadoop        20 Apr 21 15:00 es -> elasticsearch-2.4.0/
lrwxrwxrwx   1 hadoop hadoop        11 Apr 20 12:19 flume -> flume-1.6.0
drwxrwxr-x   7 hadoop hadoop      4096 Apr 20 12:17 flume-1.6.0
drwxrwxr-x   7 hadoop hadoop      4096 Apr 20 12:00 flume-1.7.0
lrwxrwxrwx.  1 hadoop hadoop        12 Apr 12 11:27 hadoop -> hadoop-2.6.0
drwxr-xr-x. 10 hadoop hadoop      4096 Apr 12 16:33 hadoop-2.6.0
lrwxrwxrwx.  1 hadoop hadoop        13 Apr 12 11:28 hbase -> hbase-0.98.19
drwxrwxr-x.  8 hadoop hadoop      4096 Apr 12 17:27 hbase-0.98.19
lrwxrwxrwx.  1 hadoop hadoop        10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop      4096 May 14 14:08 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop        11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop      4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop      4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop        18 May  3 21:41 kafka -> kafka_2.11-0.8.2.2
drwxr-xr-x   6 hadoop hadoop      4096 May  3 22:01 kafka_2.11-0.8.2.2
lrwxrwxrwx   1 hadoop hadoop        26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop      4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop        12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop      4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx.  1 hadoop hadoop        11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop      4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx   1 hadoop hadoop        19 May 21 13:17 storm -> apache-storm-0.9.6/
lrwxrwxrwx.  1 hadoop hadoop        15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop      4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ tar -zxvf apache-storm-1.0.2.tar.gz 

  slave1 and slave2 are done the same way; not repeated here.

 

 

 

 4. Delete the tarball and, so that multiple versions can coexist, create a symlink

(See the companion post on creating and removing symlinks when setting up big-data subprojects.)

 

[hadoop@master app]$ ll
total 68
drwxrwxr-x   2 hadoop hadoop 4096 May 21 17:20 apache-storm-0.9.6
drwxrwxr-x  11 hadoop hadoop 4096 May 21 17:18 apache-storm-1.0.2
drwxrwxr-x   5 hadoop hadoop 4096 May  1 15:21 azkaban
drwxrwxr-x   7 hadoop hadoop 4096 Apr 21 15:43 elasticsearch-2.4.0
drwxrwxr-x   6 hadoop hadoop 4096 Apr 21 12:12 elasticsearch-2.4.3
lrwxrwxrwx   1 hadoop hadoop   20 Apr 21 15:00 es -> elasticsearch-2.4.0/
lrwxrwxrwx   1 hadoop hadoop   11 Apr 20 12:19 flume -> flume-1.6.0
drwxrwxr-x   7 hadoop hadoop 4096 Apr 20 12:17 flume-1.6.0
drwxrwxr-x   7 hadoop hadoop 4096 Apr 20 12:00 flume-1.7.0
lrwxrwxrwx.  1 hadoop hadoop   12 Apr 12 11:27 hadoop -> hadoop-2.6.0
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 12 16:33 hadoop-2.6.0
lrwxrwxrwx.  1 hadoop hadoop   13 Apr 12 11:28 hbase -> hbase-0.98.19
drwxrwxr-x.  8 hadoop hadoop 4096 Apr 12 17:27 hbase-0.98.19
lrwxrwxrwx.  1 hadoop hadoop   10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop 4096 May 14 14:08 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop   11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop 4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop   18 May  3 21:41 kafka -> kafka_2.11-0.8.2.2
drwxr-xr-x   6 hadoop hadoop 4096 May  3 22:01 kafka_2.11-0.8.2.2
lrwxrwxrwx   1 hadoop hadoop   26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop 4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop   12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop 4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx.  1 hadoop hadoop   11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop 4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx.  1 hadoop hadoop   15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ ln -s apache-storm-1.0.2/ storm
[hadoop@master app]$ ll
total 68
drwxrwxr-x   2 hadoop hadoop 4096 May 21 17:20 apache-storm-0.9.6
drwxrwxr-x  11 hadoop hadoop 4096 May 21 17:18 apache-storm-1.0.2
drwxrwxr-x   5 hadoop hadoop 4096 May  1 15:21 azkaban
drwxrwxr-x   7 hadoop hadoop 4096 Apr 21 15:43 elasticsearch-2.4.0
drwxrwxr-x   6 hadoop hadoop 4096 Apr 21 12:12 elasticsearch-2.4.3
lrwxrwxrwx   1 hadoop hadoop   20 Apr 21 15:00 es -> elasticsearch-2.4.0/
lrwxrwxrwx   1 hadoop hadoop   11 Apr 20 12:19 flume -> flume-1.6.0
drwxrwxr-x   7 hadoop hadoop 4096 Apr 20 12:17 flume-1.6.0
drwxrwxr-x   7 hadoop hadoop 4096 Apr 20 12:00 flume-1.7.0
lrwxrwxrwx.  1 hadoop hadoop   12 Apr 12 11:27 hadoop -> hadoop-2.6.0
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 12 16:33 hadoop-2.6.0
lrwxrwxrwx.  1 hadoop hadoop   13 Apr 12 11:28 hbase -> hbase-0.98.19
drwxrwxr-x.  8 hadoop hadoop 4096 Apr 12 17:27 hbase-0.98.19
lrwxrwxrwx.  1 hadoop hadoop   10 Apr 12 11:28 hive -> hive-1.0.0
drwxrwxr-x.  8 hadoop hadoop 4096 May 14 14:08 hive-1.0.0
lrwxrwxrwx.  1 hadoop hadoop   11 Apr 12 10:18 jdk -> jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x.  8 hadoop hadoop 4096 Aug  5  2015 jdk1.8.0_60
lrwxrwxrwx   1 hadoop hadoop   18 May  3 21:41 kafka -> kafka_2.11-0.8.2.2
drwxr-xr-x   6 hadoop hadoop 4096 May  3 22:01 kafka_2.11-0.8.2.2
lrwxrwxrwx   1 hadoop hadoop   26 Apr 21 22:18 kibana -> kibana-4.6.3-linux-x86_64/
drwxrwxr-x  11 hadoop hadoop 4096 Nov  4  2016 kibana-4.6.3-linux-x86_64
lrwxrwxrwx   1 hadoop hadoop   12 May  1 19:35 snappy -> snappy-1.1.3
drwxr-xr-x   6 hadoop hadoop 4096 May  1 19:40 snappy-1.1.3
lrwxrwxrwx.  1 hadoop hadoop   11 Apr 12 11:28 sqoop -> sqoop-1.4.6
drwxr-xr-x.  9 hadoop hadoop 4096 May 19 10:31 sqoop-1.4.6
lrwxrwxrwx   1 hadoop hadoop   19 May 21 17:21 storm -> apache-storm-1.0.2/
lrwxrwxrwx.  1 hadoop hadoop   15 Apr 12 11:28 zookeeper -> zookeeper-3.4.6
drwxr-xr-x. 10 hadoop hadoop 4096 Apr 12 17:13 zookeeper-3.4.6
[hadoop@master app]$ 

  slave1 and slave2 are done the same way; not repeated here.
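The symlink is what makes running several versions side by side cheap: repointing storm from apache-storm-0.9.6 to apache-storm-1.0.2 is a single ln command, and everything that references /home/hadoop/app/storm follows automatically. A minimal sketch of the pattern, using a throwaway temp directory as a stand-in for /home/hadoop/app:

```shell
# Demonstrate the version-switch pattern in a throwaway directory tree.
APP=$(mktemp -d)                             # stand-in for /home/hadoop/app
mkdir -p "$APP/apache-storm-0.9.6" "$APP/apache-storm-1.0.2"

ln -s "apache-storm-0.9.6" "$APP/storm"      # link initially points at the old version
readlink "$APP/storm"                        # prints: apache-storm-0.9.6
ln -sfn "apache-storm-1.0.2" "$APP/storm"    # repoint to the new version in one step
readlink "$APP/storm"                        # prints: apache-storm-1.0.2
```

`ln -sfn` replaces the existing link in place, so you never need to delete the old link by hand before switching versions.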

 5. Configure the environment variables

[hadoop@master app]$ su root
Password: 
[root@master app]# vim /etc/profile

   slave1 and slave2 are configured the same way; not repeated here.

#storm
export STORM_HOME=/home/hadoop/app/storm
export PATH=$PATH:$STORM_HOME/bin

   slave1 and slave2 are configured the same way; not repeated here.

 

[hadoop@master app]$ su root
Password: 
[root@master app]# vim /etc/profile
[root@master app]# source /etc/profile
[root@master app]# 

   slave1 and slave2 are configured the same way. (Note that sourcing /etc/profile as root only updates root's shell; run `source /etc/profile` in the hadoop user's shell as well so the new PATH takes effect there.)
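A quick sanity check that the two /etc/profile lines above did what we wanted in the current shell:

```shell
# Simulate the two lines added to /etc/profile, then verify them.
export STORM_HOME=/home/hadoop/app/storm
export PATH=$PATH:$STORM_HOME/bin

echo "STORM_HOME=$STORM_HOME"
# Confirm Storm's bin directory really is on PATH now.
case ":$PATH:" in
  *":$STORM_HOME/bin:"*) echo "storm bin is on PATH" ;;
  *)                     echo "storm bin is MISSING from PATH" ;;
esac
```

Once Storm is unpacked at that path, `storm version` should resolve from any directory.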

6. Install the other prerequisites a Storm cluster needs

   Besides the JDK and ZooKeeper already set up above, Storm's daemons need Python (2.6.6 or later). My machines run CentOS 6.5, which already ships with it:

 

[hadoop@master ~]$ python
Python 2.6.6 (r266:84292, Nov 22 2013, 12:16:22) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 

7. Edit Storm's configuration file

[hadoop@master storm]$ pwd
/home/hadoop/app/storm
[hadoop@master storm]$ ll
total 200
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 bin
-rw-r--r--  1 hadoop hadoop 82317 Jul 27  2016 CHANGELOG.md
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 conf
drwxrwxr-x  3 hadoop hadoop  4096 Jul 27  2016 examples
drwxrwxr-x 17 hadoop hadoop  4096 May 21 17:18 external
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27  2016 extlib
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27  2016 extlib-daemon
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 lib
-rw-r--r--  1 hadoop hadoop 32101 Jul 27  2016 LICENSE
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 log4j2
-rw-r--r--  1 hadoop hadoop   981 Jul 27  2016 NOTICE
drwxrwxr-x  6 hadoop hadoop  4096 May 21 17:18 public
-rw-r--r--  1 hadoop hadoop 15287 Jul 27  2016 README.markdown
-rw-r--r--  1 hadoop hadoop     6 Jul 27  2016 RELEASE
-rw-r--r--  1 hadoop hadoop 23774 Jul 27  2016 SECURITY.md
[hadoop@master storm]$ 
 Go into Storm's conf directory and edit storm.yaml:

[hadoop@master conf]$ pwd
/home/hadoop/app/storm/conf
[hadoop@master conf]$ ll
total 12
-rw-r--r-- 1 hadoop hadoop 1128 Jul 27  2016 storm_env.ini
-rwxr-xr-x 1 hadoop hadoop  947 Jul 27  2016 storm-env.sh
-rw-r--r-- 1 hadoop hadoop 1635 Jul 27  2016 storm.yaml
[hadoop@master conf]$ vim storm.yaml 

  slave1 and slave2 are configured the same way; not repeated here.

   Here is a very handy tip:

(See the companion post on configuration-file tricks for big-data subprojects, for both CentOS and Ubuntu.)

 Note: every top-level line in storm.yaml must begin with a single leading space.

 HA configuration (two nimbus seeds); again, every line needs the leading space:

 

 storm.zookeeper.servers:
     - "master"
     - "slave1"
     - "slave2"

 nimbus.seeds: ["master", "slave1"]
 ui.port: 9999

 storm.local.dir: "/home/hadoop/data/storm"

 supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703

  Note: I set ui.port to 9999, a custom value, to avoid the clash between Storm's and Spark's default port 8080.

  slave1 and slave2 are configured the same way; not repeated here.
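If you prefer not to hand-edit the file on every node, the HA configuration above can be written out with a heredoc and then pushed to the slaves with scp. A sketch (the hostnames, the 9999 UI port and the data directory are the values used in this post; the file is written to the current directory here rather than to conf/):

```shell
# Write the HA storm.yaml shown above.
# The single-quoted 'EOF' prevents variable expansion, and the single
# leading space before each top-level key is preserved verbatim.
STORM_CONF=storm.yaml   # real target: /home/hadoop/app/storm/conf/storm.yaml
cat > "$STORM_CONF" <<'EOF'
 storm.zookeeper.servers:
     - "master"
     - "slave1"
     - "slave2"

 nimbus.seeds: ["master", "slave1"]
 ui.port: 9999

 storm.local.dir: "/home/hadoop/data/storm"

 supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
EOF
grep -q 'nimbus.seeds' "$STORM_CONF" && echo "storm.yaml written"
```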

Non-HA configuration (a single nimbus seed); again, every line needs the leading space:

 

 storm.zookeeper.servers:
     - "master"
     - "slave1"
     - "slave2"

 nimbus.seeds: ["master"]
 ui.port: 9999

 storm.local.dir: "/home/hadoop/data/storm"

 supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703

  Note: I set ui.port to 9999, a custom value, to avoid the clash between Storm's and Spark's default port 8080!

  slave1 and slave2 are configured the same way; not repeated here.

8. Create Storm's local data directory

[hadoop@master conf]$ mkdir -p /home/hadoop/data/storm

  slave1 and slave2 are done the same way; not repeated here.

 9. Start the Storm cluster (HA)

 The layout in this post:

  master (master)             nimbus 

  slave1 (master and worker)  nimbus    supervisor 

  slave2 (worker)             supervisor 

 

 

1. First, on master, start nimbus: 

nohup bin/storm nimbus >/dev/null 2>&1 & 

 

[hadoop@master storm]$ jps
2374 QuorumPeerMain
7862 Jps
3343 AzkabanWebServer
2813 ResourceManager
3401 AzkabanExecutorServer
2515 NameNode
2671 SecondaryNameNode
[hadoop@master storm]$ nohup bin/storm nimbus >/dev/null 2>&1 & 
[1] 7876
[hadoop@master storm]$ jps
2374 QuorumPeerMain
7905 Jps
7910 config_value
3343 AzkabanWebServer
2813 ResourceManager
3401 AzkabanExecutorServer
2515 NameNode
2671 SecondaryNameNode
[hadoop@master storm]$

2. Then, on slave1, start nimbus:

nohup bin/storm nimbus >/dev/null 2>&1 & 

 

[hadoop@slave1 storm]$ jps
2421 NodeManager
2342 DataNode
4892 Jps
2274 QuorumPeerMain
[hadoop@slave1 storm]$ nohup bin/storm nimbus >/dev/null 2>&1 & 
[1] 4904

[hadoop@slave1 storm]$ jps
2421 NodeManager
5244 Jps
2342 DataNode
5135 nimbus
5234 config_value
2274 QuorumPeerMain

3. Next, on slave1 and slave2, start the supervisor:

nohup bin/storm supervisor >/dev/null 2>&1 & 

[hadoop@slave2 storm]$ jps
4868 Jps
4089 supervisor
2365 NodeManager
2291 DataNode
2229 QuorumPeerMain
[hadoop@slave2 storm]$ nohup bin/storm supervisor >/dev/null 2>&1 & 
[1] 4903
[hadoop@slave2 storm]$ jps
4918 Jps
4089 supervisor
2365 NodeManager
2291 DataNode
2229 QuorumPeerMain
[hadoop@slave2 storm]$ 

4. On master, start the UI:

nohup bin/storm ui >/dev/null 2>&1 & 

 

[hadoop@master storm]$ jps
8550 config_value
2374 QuorumPeerMain
8113 supervisor
3343 AzkabanWebServer
2813 ResourceManager
8560 Jps
3401 AzkabanExecutorServer
8524 config_value
8372 core
2515 NameNode
2671 SecondaryNameNode
[hadoop@master storm]$ nohup bin/storm ui>/dev/null 2>&1 & 
[7] 8582
[hadoop@master storm]$ jps
2374 QuorumPeerMain
8113 supervisor
8623 Jps
3343 AzkabanWebServer
2813 ResourceManager
3401 AzkabanExecutorServer
8372 core
2515 NameNode
8597 config_value
2671 SecondaryNameNode
8613 config_value
[hadoop@master storm]$ 

5. On master, slave1 and slave2, start the logviewer:

nohup bin/storm logviewer >/dev/null 2>&1 & 
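Starting the right daemons on each node by hand is error-prone. The five manual steps above can be wrapped in a small per-host script so each node starts exactly the daemons its role requires; a sketch for the HA layout used in this post (the hostnames master/slave1/slave2 and the install path are assumptions from this setup):

```shell
#!/bin/sh
# start-storm.sh - start the Storm daemons this host is responsible for (HA layout).
# master: nimbus + ui + logviewer; slave1: nimbus + supervisor + logviewer;
# slave2: supervisor + logviewer.
STORM_HOME=${STORM_HOME:-/home/hadoop/app/storm}
HOST=$(hostname)

start() {
  # Detach the daemon and discard its console output, as in the manual steps.
  nohup "$STORM_HOME/bin/storm" "$1" >/dev/null 2>&1 &
  echo "started storm $1 on $HOST (pid $!)"
}

case "$HOST" in
  master) start nimbus;     start ui;         start logviewer ;;
  slave1) start nimbus;     start supervisor; start logviewer ;;
  slave2) start supervisor; start logviewer ;;
  *)      echo "unknown host $HOST; nothing started" ;;
esac
```

Run it on each of the three nodes, then confirm with jps as shown above.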

 9. Start the Storm cluster (non-HA)

 The layout in this post:

  master (master)   nimbus 

  slave1 (worker)   supervisor 

  slave2 (worker)   supervisor 

 

 

1. First, on master, start nimbus: 

nohup bin/storm nimbus >/dev/null 2>&1 & 

 

[hadoop@master storm]$ jps
2374 QuorumPeerMain
7862 Jps
3343 AzkabanWebServer
2813 ResourceManager
3401 AzkabanExecutorServer
2515 NameNode
2671 SecondaryNameNode
[hadoop@master storm]$ nohup bin/storm nimbus >/dev/null 2>&1 & 
[1] 7876
[hadoop@master storm]$ jps
2374 QuorumPeerMain
7905 Jps
7910 config_value
3343 AzkabanWebServer
2813 ResourceManager
3401 AzkabanExecutorServer
2515 NameNode
2671 SecondaryNameNode
9743 nimbus
[hadoop@master storm]$

2. Then, on slave1 and slave2, start the supervisor:

nohup bin/storm supervisor >/dev/null 2>&1 & 

[hadoop@slave2 storm]$ jps
4868 Jps
4089 supervisor
2365 NodeManager
2291 DataNode
2229 QuorumPeerMain
[hadoop@slave2 storm]$ nohup bin/storm supervisor >/dev/null 2>&1 & 
[1] 4903
[hadoop@slave2 storm]$ jps
4918 Jps
4089 supervisor
2365 NodeManager
2291 DataNode
2229 QuorumPeerMain
[hadoop@slave2 storm]$ 

3. On master, start the UI:

nohup bin/storm ui >/dev/null 2>&1 & 

 

[hadoop@master storm]$ jps
8550 config_value
2374 QuorumPeerMain
8113 supervisor
3343 AzkabanWebServer
2813 ResourceManager
8560 Jps
3401 AzkabanExecutorServer
8524 config_value
8372 core
2515 NameNode
2671 SecondaryNameNode
[hadoop@master storm]$ nohup bin/storm ui>/dev/null 2>&1 & 
[7] 8582
[hadoop@master storm]$ jps
2374 QuorumPeerMain
8113 supervisor
8623 Jps
3343 AzkabanWebServer
2813 ResourceManager
3401 AzkabanExecutorServer
8372 core
2515 NameNode
8597 config_value
2671 SecondaryNameNode
8613 config_value
[hadoop@master storm]$ 

 

 

 

4. On master, slave1 and slave2, start the logviewer:

nohup bin/storm logviewer >/dev/null 2>&1 & 

   Success!

Reposted from: https://www.cnblogs.com/zlslch/p/6885145.html
