ELK 5.0.1 + Filebeat 5.0.1: Monitoring MongoDB Logs in Real Time and Parsing Them with Grok in Logstash
For installing and deploying ELK 5.0.1, see the earlier post (ELK 5.0.1 + Filebeat 5.0.1 for Linux RHEL 6.6: Monitoring MongoDB Logs).
This post focuses on using Filebeat to monitor MongoDB log files in real time and parsing those logs with grok patterns in Logstash.
After deploying ELK 5.0.1, install Filebeat on each database server whose MongoDB log you want to monitor so it can ship the log entries.
First, edit the Filebeat configuration file:
[root@se122 filebeat-5.0.1]# pwd
/opt/filebeat-5.0.1
[root@se122 filebeat-5.0.1]#
[root@se122 filebeat-5.0.1]# ls
data filebeat filebeat.full.yml filebeat.template-es2x.json filebeat.template.json filebeat.yml scripts
[root@se122 filebeat-5.0.1]# cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /root/rs0-0.log            # MongoDB log file that Filebeat monitors in real time
      document_type: mongodblog      # document type for events sent to Logstash; must be set (Logstash matches on it)
      input_type: log
  registry_file: /opt/filebeat-5.0.1/data/registry
output.logstash:
  hosts: ["10.117.194.228:5044"]     # IP address and port of the host running the Logstash beats input
[root@se122 filebeat-5.0.1]#
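For reference, the registry file configured above is where Filebeat records how far it has read each file. The following minimal Python 3 sketch inspects it; the 'source' and 'offset' field names assume the Filebeat 5.x JSON registry layout, so adjust if your version differs:

import json

# Print which files Filebeat is tracking and the byte offset it has reached.
# Path matches the registry_file setting above; field names assume Filebeat 5.x.
REGISTRY = "/opt/filebeat-5.0.1/data/registry"

with open(REGISTRY) as f:
    for state in json.load(f):
        print(state.get("source"), state.get("offset"))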
Next, edit the Logstash configuration file:
[root@rhel6 config]# pwd
/opt/logstash-5.0.1/config
[root@rhel6 config]# cat logstash_mongodb.conf
#input {
# stdin {}
#}
input {
  beats {
    host => "0.0.0.0"
    port => 5044
    type => "mongodblog"    # tag events from the beats input as mongodblog
  }
}
filter {
  if [type] == "mongodblog" {    # only process the mongodblog events shipped by Filebeat
    grok {                       # first-level parse of the raw MongoDB log line
      match => ["message","%{TIMESTAMP_ISO8601:timestamp}\s+%{MONGO3_SEVERITY:severity}\s+%{MONGO3_COMPONENT:component}\s+(?:\[%{DATA:context}\])?\s+%{GREEDYDATA:body}"]
    }
    if [component] =~ "WRITE" {
      grok {    # second-level parse of the body: extract command_type, db_name, command and spend_time
        match => ["body","%{WORD:command_type}\s+%{DATA:db_name}\s+\w+\:\s+%{GREEDYDATA:command}%{INT:spend_time}ms$"]
      }
    } else {
      grok {
        match => ["body","\s+%{DATA:db_name}\s+\w+\:\s+%{WORD:command_type}\s+%{GREEDYDATA:command}protocol.*%{INT:spend_time}ms$"]
      }
    }
    date {
      match => [ "timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss", "ISO8601"]
      remove_field => [ "timestamp" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.144.230:9200"]
    index => "mongod_log-%{+YYYY.MM}"
  }
  stdout {
    codec => rubydebug
  }
}
[root@rhel6 config]#
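To make the two-level grok parsing easier to follow, here is a minimal Python sketch (not part of the pipeline) that applies roughly equivalent regular expressions to the sample log line shown later in the Logstash output. The named groups mirror the grok field names; the patterns are simplified approximations of the grok definitions, not the exact ones Logstash uses.

import re

# Approximation of the first-level grok pattern:
#   %{TIMESTAMP_ISO8601:timestamp} %{MONGO3_SEVERITY:severity}
#   %{MONGO3_COMPONENT:component} [%{DATA:context}] %{GREEDYDATA:body}
LINE_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[+-]\d{4})\s+"
    r"(?P<severity>\w)\s+"
    r"(?P<component>\w+)\s+"
    r"(?:\[(?P<context>[^\]]*)\])?\s+"
    r"(?P<body>.*)$"
)

# Approximation of the second-level pattern used for non-WRITE components:
#   \s+%{DATA:db_name}\s+\w+:\s+%{WORD:command_type}\s+%{GREEDYDATA:command}protocol.*%{INT:spend_time}ms$
BODY_RE = re.compile(
    r"\s+(?P<db_name>\S+)\s+\w+:\s+(?P<command_type>\w+)\s+"
    r"(?P<command>.*)protocol.*?(?P<spend_time>\d+)ms$"
)

sample = ("2017-02-04T14:03:30.025+0800 I COMMAND [conn272] "
          "command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1 } "
          "keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} "
          "protocol:op_query 0ms")

line = LINE_RE.match(sample)
print(line.groupdict())          # timestamp, severity, component, context, body

if "WRITE" not in line.group("component"):
    body = BODY_RE.search(line.group("body"))
    print(body.groupdict())      # db_name, command_type, command, spend_time

Running this prints the same field values that appear in the rubydebug output further below (db_name "admin.$cmd", command_type "replSetGetStatus", spend_time "0").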
Then make sure the server-side ELK services are all running. The start-up commands are:
[elasticsearch@rhel6 ]$ /home/elasticsearch/elasticsearch-5.0.1/bin/elasticsearch
[root@rhel6 ~]# /opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/config/logstash_mongodb.conf
[root@rhel6 ~]# /opt/kibana-5.0.1/bin/kibana
Start Filebeat on the remote database server to begin monitoring the MongoDB log:
[root@se122 filebeat-5.0.1]# /opt/filebeat-5.0.1/filebeat -e -c /opt/filebeat-5.0.1/filebeat.yml -d "Publish"
2017/02/16 05:50:40.931969 beat.go:264: INFO Home path: [/opt/filebeat-5.0.1] Config path: [/opt/filebeat-5.0.1] Data path: [/opt/filebeat-5.0.1/data] Logs path: [/opt/filebeat-5.0.1/logs]
2017/02/16 05:50:40.932036 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.1
2017/02/16 05:50:40.932167 logp.go:219: INFO Metrics logging every 30s
2017/02/16 05:50:40.932227 logstash.go:90: INFO Max Retries set to: 3
2017/02/16 05:50:40.932444 outputs.go:106: INFO Activated logstash as output plugin.
2017/02/16 05:50:40.932594 publish.go:291: INFO Publisher name: se122
2017/02/16 05:50:40.935437 async.go:63: INFO Flush Interval set to: 1s
2017/02/16 05:50:40.935473 async.go:64: INFO Max Bulk Size set to: 2048
2017/02/16 05:50:40.935745 beat.go:204: INFO filebeat start running.
2017/02/16 05:50:40.935836 registrar.go:66: INFO Registry file set to: /opt/filebeat-5.0.1/data/registry
2017/02/16 05:50:40.935905 registrar.go:99: INFO Loading registrar data from /opt/filebeat-5.0.1/data/registry
2017/02/16 05:50:40.936717 registrar.go:122: INFO States Loaded from registrar: 1
2017/02/16 05:50:40.936771 crawler.go:34: INFO Loading Prospectors: 1
2017/02/16 05:50:40.936860 prospector_log.go:40: INFO Load previous states from registry into memory
2017/02/16 05:50:40.936923 registrar.go:211: INFO Starting Registrar
2017/02/16 05:50:40.936939 sync.go:41: INFO Start sending events to output
2017/02/16 05:50:40.937148 spooler.go:64: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/02/16 05:50:40.937286 prospector_log.go:67: INFO Previous states loaded: 1
2017/02/16 05:50:40.937404 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
2017/02/16 05:50:40.937440 crawler.go:61: INFO All prospectors are initialised and running with 1 states to persist
2017/02/16 05:50:40.937478 prospector.go:106: INFO Starting prospector of type: log
2017/02/16 05:50:40.937745 log.go:84: INFO Harvester started for file: /root/rs0-0.log
As the output shows, Filebeat is now monitoring the MongoDB log /root/rs0-0.log in real time. Switching to the foreground window where Logstash is running, we can see output like the following:
{
"severity" => "I",
"offset" => 243843239,
"spend_time" => "0",
"input_type" => "log",
"source" => "/root/rs0-0.log",
"message" => "2017-02-04T14:03:30.025+0800 I COMMAND [conn272] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} protocol:op_query 0ms",
"type" => "mongodblog",
"body" => "command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} protocol:op_query 0ms",
"command" => "{ replSetGetStatus: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks:{} ",
"tags" => [
[0] "beats_input_codec_plain_applied"
],
"component" => "COMMAND",
"@timestamp" => 2017-02-04T06:03:30.025Z,
"db_name" => "admin.$cmd",
"command_type" => "replSetGetStatus",
"@version" => "1",
"beat" => {
"hostname" => "se122",
"name" => "se122",
"version" => "5.0.1"
},
"host" => "se122",
"context" => "conn272"
}
This confirms that Logstash is filtering the events as configured and parsing the mongodblog entries with the specified grok patterns. Next, create the matching index pattern in Kibana, and the monitored MongoDB log entries can then be viewed in a custom Kibana view.
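Before building the Kibana view, it can be worth confirming that documents are actually arriving in Elasticsearch. A minimal Python 3 sketch, assuming Elasticsearch is reachable at the address configured in logstash_mongodb.conf:

import json
from urllib.request import urlopen

# List the monthly mongod_log-* indices and their document counts via the
# _cat/indices API; host and port come from logstash_mongodb.conf above.
ES = "http://192.168.144.230:9200"

with urlopen(ES + "/_cat/indices/mongod_log-*?format=json") as resp:
    for idx in json.load(resp):
        print(idx["index"], idx["docs.count"])

If the monthly index (for example mongod_log-2017.02) shows a growing document count, the pipeline is working end to end and the Kibana index pattern can be created against mongod_log-*.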