Real-Time Processing of a Kafka Data Source

Published by weixin_33976072 on 2017-09-15

Time: 2017.9.15

Targets: real-time processing of Kafka data

Owner: C. L. Wang

[Figure: Kafka]

Code

Import the Kafka JAR dependency

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.12</artifactId>
    <version>0.11.0.0</version>
</dependency>

Kafka server addresses

cat /etc/hosts

10.215.33.xx md3 m3 hive_server hue_server hive_server.chunyu.me zk_share_3 zk_kafka_3 log_kafka_1
10.215.33.xx md6 log_kafka_2
10.215.33.xx md11 log_kafka_3

The Kafka data format, i.e., a ConsumerRecord:

record: ConsumerRecord(
topic = elapsed.log, 
partition = 1, 
offset = 42418829, 
CreateTime = 1505455758331, 
serialized key size = -1, 
serialized value size = 906, 
headers = RecordHeaders(headers = [], isReadOnly = false), 
key = null, 
value={
  "@timestamp": "2017-09-15T06:09:18.331Z",
  "beat": {
    "hostname": "db06",
    "name": "db06",
    "version": "5.5.2"
  },
  "input_type": "log",
  "log_name": "elapsed",
  "log_type": "project",
  "message": "2017-09-15 14:09:17,328 INFO log_utils.log_elapsed_info Line:134  Time Elapsed: 0.073685s, Path: /api/problem/detail/user_view/, Code: 200, Get: [u'installId=1497448830616', u'vendor=xiaomi', u'app=0', u'secureId=c0e6aa0a403c760d', u'platform=android', u'mac=02:00:00:00:00:00', u'version=8.4.0', u'limit=120', u'phoneType=MI NOTE LTE_by_Xiaomi', u'imei=867993021875040', u'app_ver=8.4.0', u'systemVer=6.0.1', u'problem_id=576822674', u'device_id=867993021875040', 'uid=3636454'], Post: [], 112.67.96.208, Chunyuyisheng/8.4.0 (Android 6.0.1;MI+NOTE+LTE_by_Xiaomi), view_name: ask.view.problem_views.problem_detail_for_user_view, ",
  "offset": 5708029368,
  "source": "/home/chunyu/backup/django_log/elapsed_logger.log-20170915",
  "type": "log"
}
)
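The code below deserializes `record.value()` into a `KafkaValueEntity` with Gson, but that class is not shown in the post. A minimal sketch of what it might look like, with field names guessed from the JSON above (Gson maps JSON keys to fields of the same name; the `@timestamp` key would need Gson's `@SerializedName("@timestamp")` annotation, since `@` is not legal in a Java identifier, so it is omitted here):

```java
// Sketch of the JSON value entity; field names mirror the keys in the
// record value above. The real class in the project is not shown in the post.
class KafkaValueEntity {
    public String input_type; // "log"
    public String log_name;   // "elapsed"
    public String log_type;   // "project"
    public String message;    // the raw log line handed to KafkaManager.process()
    public long offset;       // byte offset inside the source log file
    public String source;     // path of the log file on the producer host
    public String type;       // "log"

    // nested "beat" object emitted by Filebeat
    public static class Beat {
        public String hostname;
        public String name;
        public String version;
    }
    public Beat beat;
}
```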

The Kafka data-reading class

public class KafkaMain implements ILaMain {
    // Kafka的伺服器地址
    private static final String KAFKA_SERVERS = "log_kafka_1:9092, log_kafka_2:9092, log_kafka_3:9092";
    private static final String DEF_GROUP_ID = "test"; // group ID for testing

    private final String[] mTopics;
    private final KafkaConsumer<String, String> mConsumer;
    private ILaManager mKafkaManager;

    /**
     * Constructor; the topics are the data sources.
     * Log-processing topics: {@link me.chunyu.log_analysis.utils.LaValues.Topics}
     *
     * @param topics topics to subscribe to
     */
    public KafkaMain(String[] topics) {
        mConsumer = new KafkaConsumer<>(createProperties(KAFKA_SERVERS, DEF_GROUP_ID));
        mTopics = topics;
        mKafkaManager = KafkaManager.getInstance();
    }

    private Properties createProperties(String servers, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("group.id", groupId);
        props.put("auto.commit.interval.ms", "1000"); // auto-commit is enabled by default
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        return props;
    }

    private void shutdown() {
        if (mConsumer != null)
            mConsumer.close();
    }

    @Override public void run() {
        List<String> list = new ArrayList<>(Arrays.asList(mTopics));
        mConsumer.subscribe(list);

        System.out.println("++++++++++++++++++++ Kafka receiving data ++++++++++++++++++++");
        try {
            while (true) {
                // a single poll may return multiple records
                ConsumerRecords<String, String> records = mConsumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.toString());
                    KafkaValueEntity entity = new Gson().fromJson(record.value(), KafkaValueEntity.class);

                    mKafkaManager.process(entity.message);
                }
                if (!records.isEmpty()) {  // for testing: stop after the first non-empty batch
                    break;
                }
            }
        } finally {
            shutdown();
        }
        System.out.println("++++++++++++++++++++ Kafka stopped receiving ++++++++++++++++++++");
    }
}
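Each consumed record hands its raw `message` line to `KafkaManager.process()`. The manager's implementation is not shown in the post; as a minimal sketch under that assumption, here is what extracting the elapsed time, request path, and status code from an elapsed-log line might look like (the class and field choices are illustrative, not the project's actual code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of parsing an elapsed-log message line; the real
// KafkaManager.process() is not shown in the post.
class ElapsedLineParser {
    private static final Pattern LINE = Pattern.compile(
            "Time Elapsed: ([0-9.]+)s, Path: (\\S+), Code: (\\d+)");

    // Returns {elapsed, path, code}, or null when the line does not match.
    static String[] parse(String message) {
        Matcher m = LINE.matcher(message);
        if (!m.find()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }

    public static void main(String[] args) {
        String line = "2017-09-15 14:09:17,328 INFO log_utils.log_elapsed_info Line:134  "
                + "Time Elapsed: 0.073685s, Path: /api/problem/detail/user_view/, Code: 200, ...";
        String[] fields = parse(line);
        System.out.println(fields[0] + " " + fields[1] + " " + fields[2]);
        // prints: 0.073685 /api/problem/detail/user_view/ 200
    }
}
```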

Configuration

Kafka console port: 9000 (the brokers themselves listen on 9092, as in the code above)
Kafka configuration: 5 partitions; retention time of 1 day

Homepage:

  • Zookeepers are the servers that coordinate the Kafka cluster (metadata and leader election); like the Brokers, there are 3 by default;
  • Topics are the data sources; there are 15 in total, and the log data source is elapsed.log;
  • Version shows 0.10.1.0, which is wrong; the actual version is 0.11.0.0, matching the pom configuration;
[Figure: Homepage]

Consumers:

  • The Consumer name is the GroupId;
  • Topics are the data sources the current consumer is consuming;
[Figure: Consumer]

Topic:

  • Partitions are Kafka's shards, 5 by default; within the same consumer group (GroupId), do not run more than 5 consumers (processes or threads);
  • LogSize is the current end position of the data and Consumer Offset is the consumed position; by default, consumption starts only from the point of registration; consuming from the beginning requires specifying a parameter (auto.offset.reset), see the reference
[Figure: Topic]
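For a new consumer group with no committed offset, consuming from the earliest retained message is done with the `auto.offset.reset` setting. A minimal sketch of the properties, reusing the server list from the code above (the group id here is a made-up example; a group that has already committed offsets would need `seekToBeginning()` instead, since `auto.offset.reset` only applies when no committed offset exists):

```java
import java.util.Properties;

// Sketch: consumer properties for reading a topic from the earliest
// retained offset rather than from the registration point.
class FromBeginningProps {
    static Properties create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "log_kafka_1:9092,log_kafka_2:9092,log_kafka_3:9092");
        props.put("group.id", "test-from-start");   // a fresh group id (example name)
        props.put("auto.offset.reset", "earliest"); // default is "latest"
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```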

OK, that's all!
