Integrating Storm with Kafka
Environment: Storm-1.2.2, Kafka_2.10-0.10.2.1, zookeeper-3.4.10, IntelliJ IDEA (Linux)
All of the test cases below were run on Linux.
1. Bolt implementation
package com.strorm.kafka;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;

import java.util.Map;

/**
 * @Author zhang
 * @Date 2018-06-11 9:12 PM
 */
public class SplitBolt implements IRichBolt {
    private TopologyContext context;
    private OutputCollector collector;

    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.context = context;
        this.collector = collector;
    }

    public void execute(Tuple input) {
        // Print the line received from the Kafka spout
        String line = input.getString(0);
        System.out.println(line);
        // Ack the tuple so the spout does not replay it
        collector.ack(input);
    }

    public void cleanup() {
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }

    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
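The bolt above declares `word` and `count` output fields but only prints each incoming line. The tokenizing step it would need before emitting per-word tuples is plain Java; a minimal sketch with no Storm dependency (the sample sentence and class name are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class SplitSketch {
    // Split one line into words, the way execute() could before
    // emitting a (word, 1) tuple for each token.
    static List<String> splitLine(String line) {
        return Arrays.asList(line.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        for (String word : splitLine("hello storm kafka")) {
            System.out.println(word); // each word would become one emitted tuple
        }
    }
}
```

Inside `execute`, each element of the returned list would be passed to `collector.emit(...)` before acking.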
2. Topology submission
package com.strorm.kafka;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.kafka.*;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

import java.util.UUID;

/**
 * @Author zhang
 * @Date 2018-06-11 9:15 PM
 */
public class kafkaApp {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        // ZooKeeper connection string the Kafka spout uses to discover brokers
        String zkConnString = "Server1:2181";
        BrokerHosts hosts = new ZkHosts(zkConnString);
        // Topic "stormkafka", ZK offset root "/stormkafka", random consumer id
        SpoutConfig spoutConfig = new SpoutConfig(hosts, "stormkafka", "/stormkafka", UUID.randomUUID().toString());
        // Decode Kafka message bytes as UTF-8 strings
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        KafkaSpout kafkaSpout = new KafkaSpout(spoutConfig);
        builder.setSpout("kafkaspout", kafkaSpout);
        builder.setBolt("split-bolt", new SplitBolt()).shuffleGrouping("kafkaspout");

        Config config = new Config();
        config.setDebug(true);
        // Run in-process for local testing
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("wc", config, builder.createTopology());
    }
}
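`LocalCluster` runs everything inside one JVM, which is only suitable for testing. To run on the nimbus/supervisor cluster started in step 5, the same topology would be handed to `StormSubmitter` instead; a sketch (the class name and worker count are illustrative, and the `provided` scope in the pom must be enabled before packaging the jar):

```java
package com.strorm.kafka;

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class kafkaAppCluster {
    public static void main(String[] args) throws Exception {
        // Build the same spout/bolt wiring as in kafkaApp above
        TopologyBuilder builder = new TopologyBuilder();
        // ... setSpout("kafkaspout", ...) and setBolt("split-bolt", ...) as before ...

        Config config = new Config();
        config.setNumWorkers(2); // hypothetical worker count

        // Ships the topology jar to the Nimbus configured in storm.yaml;
        // submit with: storm jar <your-jar> com.strorm.kafka.kafkaAppCluster
        StormSubmitter.submitTopology("wc", config, builder.createTopology());
    }
}
```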
3. pom.xml dependencies
<dependencies>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.2.2</version>
<!-- keep commented out for local testing; enable on the cluster -->
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.10.2.1</version>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.10.2.1</version>
</dependency>
</dependencies>
4. Start a message producer
kafka-console-producer.sh --broker-list Desktop:9092,Server1:9092,Server2:9092,Server3:9092 --topic stormkafka
Here, Kafka acts as the producer and Storm as the consumer.
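The spout and producer both use the `stormkafka` topic, which has to exist before messages are sent. A sketch of creating it with the Kafka 0.10 CLI (the partition and replication counts here are illustrative, not from the original setup):

```shell
kafka-topics.sh --create \
  --zookeeper Server1:2181 \
  --replication-factor 1 \
  --partitions 1 \
  --topic stormkafka
```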
5. Testing
Start the ZooKeeper servers first, then Kafka, then the Storm nimbus and supervisors.
Send messages from the Kafka producer console, as shown below:
Check the received output in IDEA:
The messages from Kafka have been received.
After extracting the archive, simply import the pom.xml in the root directory as a Maven project.