Filebeat keyword and multiline log collection (multiline and include_lines)
Many colleagues believe Filebeat cannot handle multi-line logs during collection, so this post looks at Filebeat's multiline and include_lines options.
First, a simple case. Given the following logs, we only want to collect the lines containing "error":
2017/06/22 11:26:30 [error] 26067#0: *17918 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.32.17, server: localhost, request: "GET /wss/ HTTP/1.1", upstream: "http://192.168.12.106:8010/", host: "192.168.12.106"
2017/06/22 11:26:30 [info] 26067#0:
2017/06/22 12:05:10 [error] 26067#0: *17922 open() "/data/programs/nginx/html/ws" failed (2: No such file or directory), client: 192.168.32.17, server: localhost, request: "GET /ws HTTP/1.1", host: "192.168.12.106"
filebeat.yml is configured as follows:
filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  include_lines: ['error']
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log
Now check the Kafka queue.
Sure enough, only the logs containing the "error" keyword were collected:
{ "@timestamp" : "2017-06-23T08:57:25.227Z" , "beat" :{ "name" : "192.168.12.106" }, "input_type" : "log" , "message" : "2017/06/22 12:05:10 [error] 26067#0: *17922 open() /data/programs/nginx/html/ws failed (2: No such file or directory), client: 192.168.32.17, server: localhost, request: GET /ws HTTP/1.1, host: 192.168.12.106" , "offset" :30926, "source" : "/tmp/test.log" , "type" : "log" }
{ "@timestamp" : "2017-06-23T08:57:32.228Z" , "beat" :{ "name" : "192.168.12.106" }, "input_type" : "log" , "message" : "2017/06/22 12:05:10 [error] 26067#0: *17922 open() /data/programs/nginx/html/ws failed (2: No such file or directory), client: 192.168.32.17, server: localhost, request: GET /ws HTTP/1.1, host: 192.168.12.106" , "offset" :31342, "source" : "/tmp/test.log" , "type" : "log" }
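As a side note, include_lines takes a list of regular expressions rather than a single keyword, and it can be combined with exclude_lines, which drops lines matching its patterns. Below is a minimal sketch of a widened filter; the extra patterns are illustrative additions, not part of the original setup:

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  # keep error and warning entries (illustrative regex patterns)
  include_lines: ['\[error\]', '\[warn\]']
  # drop lines matching these patterns (illustrative pattern)
  exclude_lines: ['favicon\.ico']
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log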
Now for a multiline case:
[2016-05-25 12:39:04,744][DEBUG][action.bulk ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source [{***}}
MapperParsingException[Field name [events.created] cannot contain '.' ]
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
filebeat.yml is configured as follows:
filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  multiline:
    pattern: '^\['
    negate: true
    match: after
  fields:
    beat.name: 192.168.12.106
  fields_under_root: true
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log
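With pattern: '^\[', negate: true and match: after, every line that does not start with '[' is treated as a continuation and appended to the preceding line that does start with '[', which is exactly what the stack trace above needs. Filebeat's multiline settings also include max_lines and timeout for capping and flushing merged events; the fragment below is only a sketch of how they might be tuned inside the prospector above, and the values are illustrative rather than settings from the original post:

  multiline:
    pattern: '^\['   # a new event begins at a line starting with '['
    negate: true     # lines that do NOT match the pattern ...
    match: after     # ... are appended to the previous matching line
    max_lines: 200   # illustrative cap on how many lines are merged into one event
    timeout: 5s      # illustrative idle time after which a pending event is sent as-is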
The Kafka queue now contains:
{"@timestamp":"2017-06-23T09:09:02.887Z","beat":{"name":"192.168.12.106"},"input_type":"log",
"message":"[2016-05-25 12:39:04,744][DEBUG][action.bulk ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source [{***}}\n
MapperParsingException[Field name [events.created] cannot contain '.' ]\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)\n
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)\n at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)\n at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)\n at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)\n at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)\n at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)\n at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n\n\n\n","offset":35737,"source":"/tmp/test.log","type":"log"}
As you can see, multiline merges the multi-line log into a single event.
Next, let's use multiline and include_lines together.
filebeat.yml is configured as follows:
filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  include_lines: ['error']
  multiline:
    pattern: '^\['
    negate: true
    match: after
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log
That is, log entries containing the "error" keyword are merged into multi-line events and sent to Kafka.
In my tests, when logs are written continuously, lines without "error" may also get merged in; when the input arrives with gaps, the filtering works well. Apply it according to your own workload.
In short: Filebeat can both merge multi-line logs and filter collection by keyword.
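For what it's worth, my reading of the Filebeat reference is that when multiline is configured, each multiline message is assembled into a single event before the include_lines filter is applied, and multiline.timeout decides how long an unfinished event is held open, which would explain why the result depends on how the input arrives. If stricter filtering is needed, one option is to make the include_lines pattern more specific than the bare word, for example matching the bracketed level; the following is only a sketch under that assumption, not the configuration from the original post:

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  # match the bracketed log level rather than any occurrence of the word (illustrative pattern)
  include_lines: ['\[error\]']
  multiline:
    pattern: '^\['
    negate: true
    match: after
    timeout: 5s   # illustrative: how long an unfinished multiline event is held before being flushed
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log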
This article was reposted from jackjiaxiong's 51CTO blog. Original link: http://blog.51cto.com/xiangcun168/1941401