0x00 Overview
This test builds an ELK environment that uses Kafka as the message queue. The data collection and transformation pipeline is structured as follows:
F5 HSL –> logstash (stream processing) –> kafka –> elasticsearch
The ELK version under test is 6.3; the Confluent version is 4.1.1.
The goal: logs sent via HSL are stream-processed by logstash and emitted as JSON, and that JSON content is stored in Kafka verbatim, with Kafka doing no further format processing.
0x01 Testing
192.168.214.138: logstash and the Confluent platform
192.168.214.137: the ELK stack (logstash disabled; only ES and Kibana running)
Notes on installing and debugging Confluent:
- As with the ELK setup, install the Java environment first.
- First, leave Kafka out of the picture and get F5 HSL –> Logstash –> ES running end to end, with a simple working Kibana view. When switching to Kafka later, just change the output section here to the kafka plugin configuration.
The relevant logstash configuration at this stage:
input {
  udp {
    port => 8514
    type => 'f5-dns'
  }
}
filter {
  if [type] == 'f5-dns' {
    grok {
      match => { "message" => "%{HOSTNAME:F5hostname} %{IP:clientip} %{POSINT:clientport} %{IP:svrip} %{NUMBER:qid} %{HOSTNAME:qname} %{GREEDYDATA:qtype} %{GREEDYDATA:status} %{GREEDYDATA:origin}" }
    }
    geoip {
      source => "clientip"
      target => "geoip"
    }
  }
}
output {
  #stdout { codec => rubydebug }
  #elasticsearch {
  #  hosts => ["192.168.214.137:9200"]
  #  index => "f5-dns-%{+YYYY.MM.dd}"
  #  template_name => "f5-dns"
  #}
  kafka {
    codec => json
    bootstrap_servers => "localhost:9092"
    topic_id => "f5-dns-kafka"
  }
}
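The udp input and grok pattern can be smoke-tested by replaying a sample HSL line over UDP. A minimal sketch using nc (assumes nc is installed; the sample line is taken from the captured message shown later in this write-up):

# replay one fake HSL record to the logstash udp input on 8514
echo "localhost.lan 202.202.102.100 53777 172.16.199.136 42487 www.test.com A NOERROR GTM_REWRITE " | nc -u -w1 192.168.214.138 8514

With the stdout output in the config uncommented, the parsed event should appear on the logstash console in rubydebug form.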
Send some test traffic, confirm that ES receives the data normally, and check the cluster state shown in cerebro. (The screenshot was taken after debugging was complete.)
# cd /usr/share/cerebro/cerebro-0.8.1/
# ./bin/cerebro -Dhttp.port=9110 -Dhttp.address=0.0.0.0
Install Confluent. Since this is a test environment, just download the tarball from the Confluent website and extract it; it lives under /root/confluent-4.1.1/.
Again because this is a test environment, the confluent CLI was used to start all related services, and kafka failed to start:
[root@kafka-logstash bin]# ./confluent start
Using CONFLUENT_CURRENT: /tmp/confluent.dA0KYIWj
Starting zookeeper
zookeeper is [UP]
Starting kafka
Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment
Investigation showed the VM had been given too little memory, so the JVM could not allocate enough memory for Kafka:
[root@kafka-logstash bin]# ./kafka-server-start ../etc/kafka/server.properties
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
Increase the VM's memory, and reduce the heap configured in logstash's JVM settings.
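The logstash heap lives in logstash's config/jvm.options; a minimal sketch of the change (256m is an illustrative value, not taken from the test):

# logstash config/jvm.options: shrink the heap so kafka has room
-Xms256m
-Xmx256m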
The Kafka server configuration file:
[root@kafka-logstash kafka]# pwd
/root/confluent-4.1.1/etc/kafka
[root@kafka-logstash kafka]# egrep -v "^#|^$" server.properties
broker.id=0
listeners=PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
confluent.support.metrics.enable=true
confluent.support.customer.id=anonymous
group.initial.rebalance.delay.ms=0
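With the broker up, a quick sanity check that the topic exists and that logstash's JSON lands intact (run from /root/confluent-4.1.1; topic name taken from the logstash output config above):

# list topics known to the broker
./bin/kafka-topics --zookeeper localhost:2181 --list
# watch the raw JSON events produced by logstash
./bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic f5-dns-kafka --from-beginning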
The Connect configuration file. Here the original Avro converters are replaced with JSON ones, and schema recognition is disabled for both keys and values: the input is raw JSON content with no associated schema, and we just want Connect to pass logstash's JSON output through to ES as-is.
[root@kafka-logstash kafka]# pwd
/root/confluent-4.1.1/etc/kafka
[root@kafka-logstash kafka]# egrep -v "^#|^$" connect-standalone.properties
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=share/java
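These properties take effect when the Connect worker starts. When not going through the confluent CLI, an equivalent manual launch in standalone mode would look roughly like this (paths assume the tarball layout above, and the connector config described below):

# start a standalone Connect worker with the ES sink connector
./bin/connect-standalone etc/kafka/connect-standalone.properties etc/kafka-connect-elasticsearch/quickstart-elasticsearch.properties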
Without these changes, Connect kept failing to sink logs to ES, complaining that it could not deserialize the data ("unknown magic byte" errors and the like). Checking with the confluent status command then shows connect flipping from up to down:
[root@kafka-logstash confluent-4.1.1]# ./bin/confluent status
ksql-server is [DOWN]
connect is [DOWN]
kafka-rest is [UP]
schema-registry is [UP]
kafka is [UP]
zookeeper is [UP]
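The cause of the DOWN state can be pulled from the service log with the CLI's log subcommand:

# read the Connect worker log managed by the confluent CLI
./bin/confluent log connect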
Schema Registry related configuration:
[root@kafka-logstash schema-registry]# pwd
/root/confluent-4.1.1/etc/schema-registry
[root@kafka-logstash schema-registry]# egrep -v "^#|^$" connect-avro-distributed.properties connect-avro-standalone.properties log4j.properties schema-registry.properties
[root@kafka-logstash schema-registry]# egrep -v "^#|^$" connect-avro-standalone.properties
bootstrap.servers=localhost:9092
key.converter.schema.registry.url=http://localhost:8081
value.converter.schema.registry.url=http://localhost:8081
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=share/java
[root@kafka-logstash schema-registry]# egrep -v "^#|^$" schema-registry.properties
listeners=http://0.0.0.0:8081
kafkastore.connection.url=localhost:2181
kafkastore.topic=_schemas
debug=false
The ES connector configuration file:
[root@kafka-logstash kafka-connect-elasticsearch]# pwd
/root/confluent-4.1.1/etc/kafka-connect-elasticsearch
[root@kafka-logstash kafka-connect-elasticsearch]# egrep -v "^#|^$" quickstart-elasticsearch.properties
name=f5-dns
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=f5-dns-kafka
key.ignore=true
value.ignore=true
schema.ignore=true
connection.url=http://192.168.214.137:9200
type.name=doc
transforms=MyRouter
transforms.MyRouter.type=org.apache.kafka.connect.transforms.TimestampRouter
transforms.MyRouter.topic.format=${topic}-${timestamp}
transforms.MyRouter.timestamp.format=yyyyMMdd
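After editing this file the connector has to be (re)registered with the Connect worker. A sketch using the confluent CLI (the name elasticsearch-sink matches the REST output shown below; unload first if it is already loaded):

# reload the ES sink connector with the edited config
./bin/confluent unload elasticsearch-sink
./bin/confluent load elasticsearch-sink -d etc/kafka-connect-elasticsearch/quickstart-elasticsearch.properties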
In the configuration above, topics names the topic to ship to ES. The TimestampRouter transform dynamically maps the topic to a per-day ES index, so ES gets one new index each day. Note that schema.ignore=true must be set; otherwise Connect cannot send the received data to ES, and Connect's connect.stdout log shows:
[root@kafka-logstash connect]# pwd
/tmp/confluent.dA0KYIWj/connect
Caused by: org.apache.kafka.connect.errors.DataException: Cannot infer mapping without schema.
    at io.confluent.connect.elasticsearch.Mapping.inferMapping(Mapping.java:84)
    at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.createMapping(JestElasticsearchClient.java:221)
    at io.confluent.connect.elasticsearch.Mapping.createMapping(Mapping.java:66)
    at io.confluent.connect.elasticsearch.ElasticsearchWriter.write(ElasticsearchWriter.java:260)
    at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.put(ElasticsearchSinkTask.java:162)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
Once the configuration was fixed, sending data through logstash shows the logs arriving in ES normally, in the same format as without Kafka.
Without Kafka:
{ "_index": "f5-dns-2018.06.26", "_type": "doc", "_id": "KrddO2QBXB-i0ay0g5G9", "_version": 1, "_score": 1, "_source": { "message": "localhost.lan 202.202.102.100 53777 172.16.199.136 42487 www.test.com A NOERROR GTM_REWRITE ", "F5hostname": "localhost.lan", "qid": "42487", "clientip": "202.202.102.100", "geoip": { "region_name": "Chongqing", "location": { "lon": 106.5528, "lat": 29.5628 }, "country_code2": "CN", "timezone": "Asia/Shanghai", "country_name": "China", "region_code": "50", "continent_code": "AS", "city_name": "Chongqing", "country_code3": "CN", "ip": "202.202.102.100", "latitude": 29.5628, "longitude": 106.5528 }, "status": "NOERROR", "qname": "www.test.com", "clientport": "53777", "@version": "1", "@timestamp": "2018-06-26T09:12:21.585Z", "host": "192.168.214.1", "type": "f5-dns", "qtype": "A", "origin": "GTM_REWRITE ", "svrip": "172.16.199.136" } }
With Kafka:
{ "_index": "f5-dns-kafka-20180628", "_type": "doc", "_id": "f5-dns-kafka-20180628+0+23", "_version": 1, "_score": 1, "_source": { "F5hostname": "localhost.lan", "geoip": { "city_name": "Chongqing", "timezone": "Asia/Shanghai", "ip": "202.202.100.100", "latitude": 29.5628, "country_name": "China", "country_code2": "CN", "continent_code": "AS", "country_code3": "CN", "region_name": "Chongqing", "location": { "lon": 106.5528, "lat": 29.5628 }, "region_code": "50", "longitude": 106.5528 }, "qtype": "A", "origin": "DNSX ", "type": "f5-dns", "message": "localhost.lan 202.202.100.100 53777 172.16.199.136 42487 www.myf5.net A NOERROR DNSX ", "qid": "42487", "clientport": "53777", "@timestamp": "2018-06-28T09:05:20.594Z", "clientip": "202.202.100.100", "qname": "www.myf5.net", "host": "192.168.214.1", "@version": "1", "svrip": "172.16.199.136", "status": "NOERROR" } }
Relevant REST API output:
http://192.168.214.138:8083/connectors/elasticsearch-sink/tasks

[
  {
    "id": {
      "connector": "elasticsearch-sink",
      "task": 0
    },
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "type.name": "doc",
      "value.ignore": "true",
      "tasks.max": "1",
      "topics": "f5-dns-kafka",
      "transforms.MyRouter.topic.format": "${topic}-${timestamp}",
      "transforms": "MyRouter",
      "key.ignore": "true",
      "schema.ignore": "true",
      "transforms.MyRouter.timestamp.format": "yyyyMMdd",
      "task.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkTask",
      "name": "elasticsearch-sink",
      "connection.url": "http://192.168.214.137:9200",
      "transforms.MyRouter.type": "org.apache.kafka.connect.transforms.TimestampRouter"
    }
  }
]

http://192.168.214.138:8083/connectors/elasticsearch-sink/

{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "type.name": "doc",
    "value.ignore": "true",
    "tasks.max": "1",
    "topics": "f5-dns-kafka",
    "transforms.MyRouter.topic.format": "${topic}-${timestamp}",
    "transforms": "MyRouter",
    "key.ignore": "true",
    "schema.ignore": "true",
    "transforms.MyRouter.timestamp.format": "yyyyMMdd",
    "name": "elasticsearch-sink",
    "connection.url": "http://192.168.214.137:9200",
    "transforms.MyRouter.type": "org.apache.kafka.connect.transforms.TimestampRouter"
  },
  "tasks": [
    {
      "connector": "elasticsearch-sink",
      "task": 0
    }
  ],
  "type": "sink"
}

http://192.168.214.138:8083/connectors/elasticsearch-sink/status

{
  "name": "elasticsearch-sink",
  "connector": {
    "state": "RUNNING",
    "worker_id": "172.16.150.179:8083"
  },
  "tasks": [
    {
      "state": "RUNNING",
      "id": 0,
      "worker_id": "172.16.150.179:8083"
    }
  ],
  "type": "sink"
}
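The same Connect REST interface on port 8083 can also manage the connector's lifecycle, which is handy when re-testing config changes; these are standard Kafka Connect endpoints:

# pause, resume, or restart the sink connector
curl -X PUT  http://192.168.214.138:8083/connectors/elasticsearch-sink/pause
curl -X PUT  http://192.168.214.138:8083/connectors/elasticsearch-sink/resume
curl -X POST http://192.168.214.138:8083/connectors/elasticsearch-sink/restart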
http://192.168.214.138:8082/brokers

{ "brokers": [ 0 ] }

http://192.168.214.138:8082/topics

[
  "__confluent.support.metrics",
  "_confluent-ksql-default__command_topic",
  "_schemas",
  "connect-configs",
  "connect-offsets",
  "connect-statuses",
  "f5-dns-2018.06",
  "f5-dns-2018.06.27",
  "f5-dns-kafka",
  "test-elasticsearch-sink"
]

http://192.168.214.138:8082/topics/f5-dns-kafka

{
  "name": "f5-dns-kafka",
  "configs": {
    "file.delete.delay.ms": "60000",
    "segment.ms": "604800000",
    "min.compaction.lag.ms": "0",
    "retention.bytes": "-1",
    "segment.index.bytes": "10485760",
    "cleanup.policy": "delete",
    "follower.replication.throttled.replicas": "",
    "message.timestamp.difference.max.ms": "9223372036854775807",
    "segment.jitter.ms": "0",
    "preallocate": "false",
    "segment.bytes": "1073741824",
    "message.timestamp.type": "CreateTime",
    "message.format.version": "1.1-IV0",
    "max.message.bytes": "1000012",
    "unclean.leader.election.enable": "false",
    "retention.ms": "604800000",
    "flush.ms": "9223372036854775807",
    "delete.retention.ms": "86400000",
    "leader.replication.throttled.replicas": "",
    "min.insync.replicas": "1",
    "flush.messages": "9223372036854775807",
    "compression.type": "producer",
    "min.cleanable.dirty.ratio": "0.5",
    "index.interval.bytes": "4096"
  },
  "partitions": [
    {
      "partition": 0,
      "leader": 0,
      "replicas": [
        {
          "broker": 0,
          "leader": true,
          "in_sync": true
        }
      ]
    }
  ]
}
In this test Kafka was essentially left at its default configuration; no memory tuning was done, and no thought was given to sizing Kafka's disk usage, etc.
Test references:
https://docs.confluent.io/current/installation/installing_cp.html
https://docs.confluent.io/current/connect/connect-elasticsearch/docs/elasticsearch_connector.html
https://docs.confluent.io/current/connect/connect-elasticsearch/docs/configuration_options.html
Storage mechanism reference: https://blog.csdn.net/opensure/article/details/46048589
Kafka configuration parameters reference: https://blog.csdn.net/lizhitao/article/details/25667831
More on Kafka internals: https://blog.csdn.net/ychenfeng/article/details/74980531
confluent CLI:
confluent: A command line interface to manage Confluent services

Usage: confluent <command> [<subcommand>] [<parameters>]

These are the available commands:
    acl      Specify acl for a service.
    config   Configure a connector.
    current  Get the path of the data and logs of the services managed by the current confluent run.
    destroy  Delete the data and logs of the current confluent run.
    list     List available services.
    load     Load a connector.
    log      Read or tail the log of a service.
    start    Start all services or a specific service along with its dependencies
    status   Get the status of all services or the status of a specific service along with its dependencies.
    stop     Stop all services or a specific service along with the services depending on it.
    top      Track resource usage of a service.
    unload   Unload a connector.

'confluent help' lists available commands. See 'confluent help <command>' to read about a specific command.
Confluent Platform service port table:
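Standard defaults for the Confluent Platform services (consistent with the endpoints used above):

ZooKeeper                2181
Kafka brokers            9092
Schema Registry (REST)   8081
REST Proxy               8082
Kafka Connect (REST)     8083
KSQL server              8088
Control Center           9021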