Download version 0.8 from the Kafka download page and unpack it.
1. In the config directory, edit server.properties and set host.name to the machine's IP. If Kafka and the example code run on the same machine, no change is needed; the default localhost works.
2. In the config directory, edit zookeeper.properties and set the dataDir property to a directory of your choice.
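For reference, the two edits might look like the following (the IP and path are placeholders; substitute your own):

```properties
# config/server.properties
host.name=10.103.22.47

# config/zookeeper.properties
dataDir=/var/kafka/zookeeper-data
```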
3. If you want a cluster setup, create a zoo_data directory under the unpacked Kafka directory (only needed the first time), create a myid file inside zoo_data, and set its content to 1. Also edit zookeeper.properties accordingly; for details see: solrcloud installation under Tomcat (part 3).
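A minimal sketch of the cluster-mode edits, assuming three ZooKeeper nodes (the hostnames and path below are placeholders, not from the original article):

```properties
# config/zookeeper.properties (cluster mode)
dataDir=/path/to/kafka/zoo_data
initLimit=5
syncLimit=2
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

The number in each node's myid file must match its server.N entry, so on the first node: `echo 1 > zoo_data/myid`.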
4. Start Kafka.
//Start the ZooKeeper server (the trailing & backgrounds the process so the shell stays usable):
bin/zookeeper-server-start.sh config/zookeeper.properties &
//Start the Kafka server:
bin/kafka-server-start.sh config/server.properties &
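The examples below use a topic named "test". In 0.8 the broker auto-creates topics by default, but you can also create it explicitly; the script name changed between 0.8 minor versions, so one of the following should apply (partition and replication counts here are assumptions matching the consumer example):

```shell
# Kafka 0.8.0:
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 2 --topic test
# Kafka 0.8.1 and later:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic test
```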
5. Create a producer example.
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The 0.8 producer discovers the cluster from the broker list;
        // zk.connect is not a producer setting, so it is omitted here.
        props.put("metadata.broker.list", "10.103.22.47:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Wait for the partition leader to acknowledge each write.
        props.put("request.required.acks", "1");
        // props.put("partitioner.class", "com.xq.SimplePartitioner");

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);

        String ip = "192.168.2.3";
        String msg = "this is a messageuuu!";
        // Topic "test", key ip (used for partitioning), payload msg.
        KeyedMessage<String, String> data = new KeyedMessage<String, String>("test", ip, msg);
        producer.send(data);
        producer.close();
    }
}
Create a consumer example.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ConsumerSample {
    public static void main(String[] args) {
        // Consumer properties: ZooKeeper address and consumer group id.
        Properties props = new Properties();
        props.put("zookeeper.connect", "10.103.22.47:2181");
        props.put("zookeeper.connection.timeout.ms", "1000000");
        props.put("group.id", "test_group");

        // Create the connection to the cluster.
        ConsumerConfig consumerConfig = new ConsumerConfig(props);
        ConsumerConnector connector = Consumer.createJavaConsumerConnector(consumerConfig);

        // Ask for two streams for topic "test" and drain each on its own thread.
        Map<String, Integer> topics = new HashMap<String, Integer>();
        topics.put("test", 2);
        Map<String, List<KafkaStream<byte[], byte[]>>> topicMessageStreams =
                connector.createMessageStreams(topics);
        List<KafkaStream<byte[], byte[]>> streams = topicMessageStreams.get("test");

        ExecutorService threadPool = Executors.newFixedThreadPool(2);
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            threadPool.submit(new Runnable() {
                public void run() {
                    // With the default decoder, message() returns the raw byte[] payload.
                    for (MessageAndMetadata<byte[], byte[]> msgAndMetadata : stream) {
                        System.out.println("topic: " + msgAndMetadata.topic());
                        System.out.println("message content: " + new String(msgAndMetadata.message()));
                    }
                }
            });
        }
    }
}
Start the consumer example first, then start the producer example; you will see the output immediately.
Permalink: http://www.chepoo.com/kafka-single-development-environment-example.html | IT技術精華網