Kafka Java producer and consumer demo
Kafka is a high-throughput messaging system written in Scala. Producing and consuming messages with it differs somewhat from ordinary message queues, so here is a demo program for reference. For installing Kafka, see the official documentation.
First, create a Maven project and add the Kafka jar to the pom with the following dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.0</version>
</dependency>
We are using version 0.8. Below is the producer code:
package cn.outofmemory.kafka;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

/**
 * Simple Kafka producer demo.
 */
public class KafkaProducer {

    private final Producer<String, String> producer;
    public final static String TOPIC = "TEST-TOPIC";

    private KafkaProducer() {
        Properties props = new Properties();
        // Kafka broker host and port
        props.put("metadata.broker.list", "192.168.193.148:9092");
        // serializer class for message values
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // serializer class for message keys
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // request.required.acks:
        //  0: the producer never waits for an acknowledgement from the broker (the same behavior as 0.7). This option provides the lowest latency but the weakest durability guarantees (some data will be lost when a server fails).
        //  1: the producer gets an acknowledgement after the leader replica has received the data. This option provides better durability, as the client waits until the server acknowledges the request as successful (only messages that were written to the now-dead leader but not yet replicated will be lost).
        // -1: the producer gets an acknowledgement after all in-sync replicas have received the data. This option provides the best durability: no messages will be lost as long as at least one in-sync replica remains.
        props.put("request.required.acks", "-1");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    void produce() {
        int messageNo = 1000;
        final int COUNT = 10000;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "hello kafka message " + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            System.out.println(data);
            messageNo++;
        }
    }

    public static void main(String[] args) {
        new KafkaProducer().produce();
    }
}
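The produce() loop above sends 9,000 keyed messages (keys 1000 through 9999). Its key/payload construction can be checked in isolation without a running broker; a minimal sketch using only standard Java, where collecting the payloads into a list stands in for producer.send():

```java
import java.util.ArrayList;
import java.util.List;

public class ProduceLoopSketch {
    // mirrors the key/payload construction in produce(), without a broker
    static List<String> buildMessages() {
        List<String> messages = new ArrayList<String>();
        int messageNo = 1000;
        final int COUNT = 10000;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            // the payload that produce() would send as a KeyedMessage
            messages.add("hello kafka message " + key);
            messageNo++;
        }
        return messages;
    }

    public static void main(String[] args) {
        List<String> messages = buildMessages();
        System.out.println(messages.size());   // 9000
        System.out.println(messages.get(0));   // hello kafka message 1000
    }
}
```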
Below is the consumer-side implementation:
package cn.outofmemory.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class KafkaConsumer {

    private final ConsumerConnector consumer;

    private KafkaConsumer() {
        Properties props = new Properties();
        // ZooKeeper connection string
        props.put("zookeeper.connect", "192.168.193.148:2181");
        // group.id identifies a consumer group
        props.put("group.id", "jd-group");
        // ZooKeeper session timeout
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "smallest");
        // serializer class (not strictly required here; the decoders are passed explicitly below)
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ConsumerConfig config = new ConsumerConfig(props);
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    void consume() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(KafkaProducer.TOPIC, new Integer(1));

        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());

        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(KafkaProducer.TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println(it.next().message());
        }
    }

    public static void main(String[] args) {
        new KafkaConsumer().consume();
    }
}
Note that the consumer is configured with the ZooKeeper address, while the producer is configured with the Kafka broker's IP and port.
Reposted from: http://outofmemory.cn/code-snippet/33051/java-kafka-producer-consumer-example
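To make that distinction concrete, here is a minimal sketch using only java.util.Properties (with the same placeholder addresses as above): the producer points straight at a broker, while the 0.8 high-level consumer points at ZooKeeper and discovers brokers from there.

```java
import java.util.Properties;

public class EndpointSketch {
    // producer config: talks directly to a Kafka broker
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.193.148:9092"); // broker host:port
        return props;
    }

    // 0.8 high-level consumer config: connects to ZooKeeper instead
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "192.168.193.148:2181");    // ZooKeeper host:port
        props.put("group.id", "jd-group");                         // consumer group
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("metadata.broker.list"));
        System.out.println(consumerProps().getProperty("zookeeper.connect"));
    }
}
```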