ELK 5.0.1 + Filebeat 5.0.1 on Linux (RHEL 6.6): Monitoring MongoDB Logs
Building the ELK 5.0.1 stack uses the following packages:
filebeat-5.0.1-linux-x86_64.tar.gz
logstash-5.0.1.tar.gz
elasticsearch-5.0.1.tar.gz
kibana-5.0.1-linux-x86_64.tar.gz
All four packages can be found among the historical releases on the official download site.
Beyond that, ELK 5.0.1 has an operating system requirement: the Linux kernel must be newer than 3.5. This walkthrough uses Oracle Linux 6.6.
It also has a Java JDK requirement: install JDK 8, ideally jdk-8u111-linux-x64.tar.gz, which is a free download from Oracle's website.
The Linux host needs the following configuration changes:
vi /etc/sysctl.conf
vm.max_map_count = 262144
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
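These settings can be staged and verified from the shell. Below is a minimal sketch written against a scratch file under /tmp so it is safe to replay anywhere; on the real host the targets are /etc/sysctl.conf and /etc/security/limits.conf, and reloading requires root. The limits.conf lines are appended the same way.

```shell
# Stage the kernel setting in a scratch copy; on the real host append to
# /etc/sysctl.conf and reload with `sysctl -p` as root.
CONF=/tmp/sysctl.conf.demo
echo "vm.max_map_count = 262144" >> "$CONF"

# Confirm the line landed before reloading.
grep "vm.max_map_count" "$CONF"
```

After `sysctl -p`, `sysctl vm.max_map_count` should report 262144; Elasticsearch 5 enforces this as a bootstrap check when bound to a non-loopback address.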
How the stack works: Filebeat runs on the MongoDB database server, watches the MongoDB log, and ships each new log entry to Logstash in near real time. Logstash filters and parses the incoming data against pre-written regular expressions and filter conditions, then forwards the processed events to the Elasticsearch engine. Kibana presents the data held in Elasticsearch: classifying, aggregating, querying, tabulating, charting, and so on.
The installation sequence:
1. Installing elasticsearch-5.0.1.tar.gz
Confirm the operating system kernel is newer than 3.5 (Elasticsearch 5 requires this and will not start on an older kernel):
[root@rhel6 ~]# uname -a
Linux rhel6 3.8.13-44.1.1.el6uek.x86_64 #2 SMP Wed Sep 10 06:10:25 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@rhel6 ~]#
Confirm the system Java version is 1.8:
[root@rhel6 ~]# java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
[root@rhel6 ~]#
Create the es group, the elasticsearch user, and the installation directories (note: Elasticsearch 5 cannot be started as root; it exits with an error).
Software installation directory:
/home/elasticsearch/elasticsearch-5.0.1
Data and log directory:
/opt/es5.0.1
[root@rhel6 opt]# ls -l
total 20
drwxr-xr-x. 4 elasticsearch es 4096 Feb 13 19:47 es5.0.1
[root@rhel6 opt]# id elasticsearch
uid=700(elasticsearch) gid=700(es) groups=700(es)
[root@rhel6 opt]#
Next, extract elasticsearch-5.0.1.tar.gz into /home/elasticsearch/elasticsearch-5.0.1 and fix the ownership.
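The unpack-and-chown step is just tar plus chown. The sketch below rehearses it against a stand-in tarball under /tmp so it can be run anywhere; on the real host the archive is elasticsearch-5.0.1.tar.gz, the target is /home/elasticsearch, and the owner is elasticsearch:es.

```shell
# Build a stand-in archive so the commands can be rehearsed without the
# real download.
mkdir -p /tmp/stage/elasticsearch-5.0.1/bin
tar -czf /tmp/elasticsearch-5.0.1.tar.gz -C /tmp/stage elasticsearch-5.0.1

# The actual installation steps: extract, then hand ownership to the
# service account (the chown needs root on the real host).
mkdir -p /tmp/home/elasticsearch
tar -xzf /tmp/elasticsearch-5.0.1.tar.gz -C /tmp/home/elasticsearch
# chown -R elasticsearch:es /home/elasticsearch/elasticsearch-5.0.1

ls /tmp/home/elasticsearch
```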
Edit the Elasticsearch configuration file:
[root@rhel6 config]# vi elasticsearch.yml
path.data: /opt/es5.0.1/data
path.logs: /opt/es5.0.1/logs
network.host: 192.168.144.230 #the host's own IP address
http.port: 9200 #Elasticsearch HTTP service port
Start Elasticsearch 5 as the elasticsearch user:
[elasticsearch@rhel6 bin]$ ./elasticsearch
[2017-02-13T19:50:49,111][INFO ][o.e.n.Node ] [] initializing ...
[2017-02-13T19:50:49,362][INFO ][o.e.e.NodeEnvironment ] [58P-l3h] using [1] data paths, mounts [[/ (/dev/sda3)]], net usable_space [16.3gb], net total_space [23.4gb], spins? [possibly], types [ext4]
[2017-02-13T19:50:49,363][INFO ][o.e.e.NodeEnvironment ] [58P-l3h] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-02-13T19:50:49,365][INFO ][o.e.n.Node ] [58P-l3h] node name [58P-l3h] derived from node ID; set [node.name] to override
[2017-02-13T19:50:49,390][INFO ][o.e.n.Node ] [58P-l3h] version[5.0.1], pid[3644], build[080bb47/2016-11-11T22:08:49.812Z], OS[Linux/3.8.13-44.1.1.el6uek.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]
[2017-02-13T19:50:52,449][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [aggs-matrix-stats]
[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [ingest-common]
[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [lang-expression]
[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [lang-groovy]
[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [lang-mustache]
[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [lang-painless]
[2017-02-13T19:50:52,451][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [percolator]
[2017-02-13T19:50:52,451][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [reindex]
[2017-02-13T19:50:52,452][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [transport-netty3]
[2017-02-13T19:50:52,452][INFO ][o.e.p.PluginsService ] [58P-l3h] loaded module [transport-netty4]
[2017-02-13T19:50:52,460][INFO ][o.e.p.PluginsService ] [58P-l3h] no plugins loaded
[2017-02-13T19:50:56,213][INFO ][o.e.n.Node ] [58P-l3h] initialized
[2017-02-13T19:50:56,213][INFO ][o.e.n.Node ] [58P-l3h] starting ...
[2017-02-13T19:50:56,637][INFO ][o.e.t.TransportService ] [58P-l3h] publish_address {192.168.144.230:9300}, bound_addresses {192.168.144.230:9300}
[2017-02-13T19:50:56,642][INFO ][o.e.b.BootstrapCheck ] [58P-l3h] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-02-13T19:50:59,864][INFO ][o.e.c.s.ClusterService ] [58P-l3h] new_master {58P-l3h}{58P-l3hGTqm7e9QzXWn0eA}{J3O-p0wfSMeS4evTxfTmVA}{192.168.144.230}{192.168.144.230:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-02-13T19:50:59,902][INFO ][o.e.h.HttpServer ] [58P-l3h] publish_address {192.168.144.230:9200}, bound_addresses {192.168.144.230:9200}
[2017-02-13T19:50:59,902][INFO ][o.e.n.Node ] [58P-l3h] started
[2017-02-13T19:50:59,930][INFO ][o.e.g.GatewayService ] [58P-l3h] recovered [0] indices into cluster_state
Browse to http://192.168.144.230:9200/?pretty; output similar to the following means Elasticsearch started successfully and is serving requests:
{
"name" : "58P-l3h",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "mO7oaIXJQyiwCEA-jsSueg",
"version" : {
"number" : "5.0.1",
"build_hash" : "080bb47",
"build_date" : "2016-11-11T22:08:49.812Z",
"build_snapshot" : false,
"lucene_version" : "6.2.1"
},
"tagline" : "You Know, for Search"
}
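The same check can be scripted rather than eyeballed in a browser. A sketch that greps the banner for the expected version, shown here against a saved copy of the fields we care about; on the live host, replace the `cat` with `curl -s http://192.168.144.230:9200/`.

```shell
# Saved copy of the relevant banner fields (from the response above).
cat > /tmp/es_banner.json <<'EOF'
{
  "name" : "58P-l3h",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "5.0.1" }
}
EOF

# The node is ready to proceed if the banner reports the expected version.
grep -q '"number" : "5.0.1"' /tmp/es_banner.json && echo "es 5.0.1 responding"
```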
2. Installing Logstash 5.0.1
Create the installation directory /opt/logstash-5.0.1.
Extract logstash-5.0.1.tar.gz into the installation directory.
Edit the logstash.conf pipeline configuration file:
[root@rhel6 config]# cat logstash.conf
#input {
# stdin {}
#}
input{
beats {
host => "0.0.0.0"
port => 5044
}
}
output{
elasticsearch {
hosts => ["192.168.144.230:9200"]
index => "test"
}
stdout {
codec => rubydebug
}
}
[root@rhel6 config]#
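Before starting Logstash it is worth a syntax check. Logstash 5 can do this itself with `./logstash -f ../config/logstash.conf --config.test_and_exit`; the sketch below recreates the file under /tmp and does a cruder brace-balance check that runs anywhere.

```shell
# Recreate the pipeline file from the article under /tmp.
mkdir -p /tmp/ls-config
cat > /tmp/ls-config/logstash.conf <<'EOF'
input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["192.168.144.230:9200"]
    index => "test"
  }
  stdout { codec => rubydebug }
}
EOF

# Crude sanity check: every { must have a matching }.
opens=$(grep -o '{' /tmp/ls-config/logstash.conf | wc -l)
closes=$(grep -o '}' /tmp/ls-config/logstash.conf | wc -l)
[ "$opens" -eq "$closes" ] && echo "braces balanced"
```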
Start Logstash 5:
./logstash -f /opt/logstash-5.0.1/config/logstash.conf
Output like the following means Logstash started successfully:
[root@rhel6 bin]# ./logstash -f /opt/logstash-5.0.1/config/logstash.conf
Sending Logstash's logs to /opt/logstash-5.0.1/logs which is now configured via log4j2.properties
[2017-02-14T01:03:25,860][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-02-14T01:03:25,965][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-02-14T01:03:26,305][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.144.230:9200"]}}
[2017-02-14T01:03:26,307][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-02-14T01:03:26,460][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-02-14T01:03:26,483][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.144.230:9200"]}
[2017-02-14T01:03:26,492][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-02-14T01:03:26,500][INFO ][logstash.pipeline ] Pipeline main started
[2017-02-14T01:03:26,552][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
3. Installing Kibana 5.0.1
Create the installation directory: [root@rhel6 kibana-5.0.1]# pwd
/opt/kibana-5.0.1
[root@rhel6 kibana-5.0.1]#
Extract kibana-5.0.1-linux-x86_64.tar.gz into the installation directory and edit the configuration file (Kibana reads config/kibana.yml by default):
vi /opt/kibana-5.0.1/config/kibana.yml
server.port: 5601
server.host: "192.168.144.230"
server.name: "rhel6"
elasticsearch.url: "http://192.168.144.230:9200" #the Elasticsearch HTTP endpoint Kibana reads from
pid.file: /var/run/kibana.pid
Start Kibana 5.0.1 as root; output like the following means Kibana started and connected to Elasticsearch:
[root@rhel6 bin]# ./kibana
log [13:04:52.598] [info][status][plugin:kibana@5.0.1] Status changed from uninitialized to green - Ready
log [13:04:52.657] [info][status][plugin:elasticsearch@5.0.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [13:04:52.693] [info][status][plugin:console@5.0.1] Status changed from uninitialized to green - Ready
log [13:04:52.947] [info][status][plugin:timelion@5.0.1] Status changed from uninitialized to green - Ready
log [13:04:52.968] [info][listening] Server running at http://192.168.144.230:5601
log [13:04:52.970] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
log [13:04:58.016] [info][status][plugin:elasticsearch@5.0.1] Status changed from yellow to yellow - No existing Kibana index found
log [13:04:58.643] [info][status][plugin:elasticsearch@5.0.1] Status changed from yellow to green - Kibana index ready
log [13:04:58.645] [info][status][ui settings] Status changed from yellow to green - Ready
4. Installing Filebeat 5.0.1
Create the installation directory:
/opt/filebeat-5.0.1
Extract filebeat-5.0.1-linux-x86_64.tar.gz into the installation directory and edit the configuration file:
[root@rhel6 filebeat-5.0.1]# vi filebeat.yml
paths:
- /opt/logs/*.log #directory of log files to monitor
output.logstash:
# The Logstash hosts
hosts: ["localhost:5044"]
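The `paths` key shown above lives under `filebeat.prospectors` in the 5.x config format. A minimal complete file looks like the sketch below (recreated under /tmp for illustration); Filebeat 5 can validate the real one with `./filebeat -configtest -c filebeat.yml`.

```shell
# Minimal Filebeat 5.x config: one log prospector feeding Logstash.
cat > /tmp/filebeat.yml <<'EOF'
filebeat.prospectors:
- input_type: log
  paths:
    - /opt/logs/*.log        # directory of log files to monitor
output.logstash:
  hosts: ["localhost:5044"]
EOF

grep -q 'filebeat.prospectors' /tmp/filebeat.yml && echo "config written"
```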
Start Filebeat 5 as root:
[root@rhel6 filebeat-5.0.1]# ./filebeat -e -c filebeat.yml -d "Publish"
2017/02/13 15:45:47.498852 beat.go:264: INFO Home path: [/opt/filebeat-5.0.1] Config path: [/opt/filebeat-5.0.1] Data path: [/opt/filebeat-5.0.1/data] Logs path: [/opt/filebeat-5.0.1/logs]
2017/02/13 15:45:47.498913 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.1
2017/02/13 15:45:47.498966 logstash.go:90: INFO Max Retries set to: 3
2017/02/13 15:45:47.499008 outputs.go:106: INFO Activated logstash as output plugin.
2017/02/13 15:45:47.499055 publish.go:291: INFO Publisher name: rhel6
2017/02/13 15:45:47.499169 async.go:63: INFO Flush Interval set to: 1s
2017/02/13 15:45:47.499180 async.go:64: INFO Max Bulk Size set to: 2048
2017/02/13 15:45:47.499241 beat.go:204: INFO filebeat start running.
2017/02/13 15:45:47.499251 registrar.go:66: INFO Registry file set to: /opt/filebeat-5.0.1/data/registry
2017/02/13 15:45:47.499309 registrar.go:99: INFO Loading registrar data from /opt/filebeat-5.0.1/data/registry
2017/02/13 15:45:47.499337 registrar.go:122: INFO States Loaded from registrar: 0
2017/02/13 15:45:47.499346 crawler.go:34: INFO Loading Prospectors: 1
2017/02/13 15:45:47.499381 logp.go:219: INFO Metrics logging every 30s
2017/02/13 15:45:47.499386 prospector_log.go:40: INFO Load previous states from registry into memory
2017/02/13 15:45:47.499431 prospector_log.go:67: INFO Previous states loaded: 0
2017/02/13 15:45:47.499479 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
2017/02/13 15:45:47.499487 crawler.go:61: INFO All prospectors are initialised and running with 0 states to persist
2017/02/13 15:45:47.499501 prospector.go:106: INFO Starting prospector of type: log
2017/02/13 15:45:47.499630 log.go:84: INFO Harvester started for file: /opt/logs/firstset.log
I placed a MongoDB log file under /opt/logs/; it is static for now and can be extended later. The contents of firstset.log:
[root@rhel6 logs]# cat firstset.log
2017-02-11T06:44:42.954+0000 I COMMAND [conn6] command wangxi.t command: insert { insert: "t", documents: [ { _id: ObjectId('589eb2da39e265f288b9d9ae'), name: "wangxi" } ], ordered: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } } } protocol:op_command 7ms
2017-02-11T06:45:59.907+0000 I COMMAND [conn7] command wangxi.t command: find { find: "t", filter: { name: "wangxi" } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:141 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[root@rhel6 logs]#
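Because Filebeat tails the file rather than reading it once, appending a line is all it takes to push a new event through the whole pipeline. A sketch against a scratch copy of the log; on the real host the path is /opt/logs/firstset.log, which Filebeat is already harvesting.

```shell
LOG=/tmp/firstset.log        # use /opt/logs/firstset.log on the real host

# Append a fresh entry in the same shape as the MongoDB log lines above;
# Filebeat's harvester picks it up and ships it to Logstash.
echo "2017-02-11T06:50:00.000+0000 I COMMAND [conn8] test entry" >> "$LOG"
tail -1 "$LOG"
```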
The Logstash window then prints the following, showing that Filebeat read /opt/logs/firstset.log and shipped it to Logstash:
[2017-02-14T01:21:29,779][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
"@timestamp" => 2017-02-13T17:22:08.837Z,
"offset" => 413,
"@version" => "1",
"input_type" => "log",
"beat" => {
"hostname" => "rhel6",
"name" => "rhel6",
"version" => "5.0.1"
},
"host" => "rhel6",
"source" => "/opt/logs/firstset.log",
"message" => "2017-02-11T06:44:42.954+0000 I COMMAND [conn6] command wangxi.t command: insert { insert: \"t\", documents: [ { _id: ObjectId('589eb2da39e265f288b9d9ae'), name: \"wangxi\" } ], ordered: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } } } protocol:op_command 7ms",
"type" => "log",
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
{
"@timestamp" => 2017-02-13T17:22:08.837Z,
"offset" => 816,
"@version" => "1",
"input_type" => "log",
"beat" => {
"hostname" => "rhel6",
"name" => "rhel6",
"version" => "5.0.1"
},
"host" => "rhel6",
"source" => "/opt/logs/firstset.log",
"message" => "2017-02-11T06:45:59.907+0000 I COMMAND [conn7] command wangxi.t command: find { find: \"t\", filter: { name: \"wangxi\" } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:141 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms",
"type" => "log",
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
Next, visit http://192.168.144.230:5601/app/kibana#/management/kibana/indices/test?_g=()&_a=(tab:indexedFields) to create the test index (this index name is the one set in the logstash.conf used to start Logstash):
[root@rhel6 config]# cat logstash.conf
#input {
# stdin {}
#}
input{
beats {
host => "0.0.0.0"
port => 5044
}
}
output{
elasticsearch {
hosts => ["192.168.144.230:9200"]
index => "test"
}
stdout {
codec => rubydebug
}
}
[root@rhel6 config]#
Then open http://192.168.144.230:5601/app/kibana#/dev_tools/console?_g=() and enter the following query:
GET _search
{
"query": {
"match_phrase": {
"message": "wangxi"
}
}
}
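The same query can be issued with curl instead of the Kibana console. A sketch that writes the query body to a file first; the host and index name are the ones used throughout this article, and on the live host you would uncomment the curl line.

```shell
# The query body, identical to the console example above.
cat > /tmp/query.json <<'EOF'
{
  "query": {
    "match_phrase": {
      "message": "wangxi"
    }
  }
}
EOF

# On the live host:
# curl -s 'http://192.168.144.230:9200/test/_search?pretty' -d @/tmp/query.json
grep -q 'match_phrase' /tmp/query.json && echo "query body ready"
```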
It returns the MongoDB log entries we ingested.
Source: the ITPUB blog, http://blog.itpub.net/29357786/viewspace-2133526. Please credit the source when reposting; unauthorized reuse may incur legal liability.