To keep a failure in any single ELK component from breaking the whole pipeline, the data collected by Logstash is first written to a message queue such as Redis, RabbitMQ, ActiveMQ, or Kafka. Redis is used as the example here.
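The role Redis plays here, a buffer sitting between the log shipper and the indexer, can be sketched in a few lines of Python. This is an in-memory stand-in for the Redis list, not the real client, purely to illustrate why a queue decouples the two sides:

```python
from collections import deque

# In-memory stand-in for the Redis list (e.g. key "nginx" in db 1).
# Logstash's redis output appends one entry per event (RPUSH);
# the redis input on the indexer side pops from the other end.
queue = deque()

def shipper_rpush(event):
    """Mimics the redis output plugin: append one event to the list."""
    queue.append(event)

def indexer_pop():
    """Mimics the redis input plugin: take the oldest event, if any."""
    return queue.popleft() if queue else None

# If the consumer side (Logstash indexer / Elasticsearch) goes down,
# events simply accumulate in the queue instead of being lost.
for line in ['GET /index.html 200', 'GET /favicon.ico 404']:
    shipper_rpush(line)

print(len(queue))      # 2 events buffered
print(indexer_pop())   # 'GET /index.html 200' (FIFO order)
```

The shipper and indexer never talk to each other directly; either side can restart while the other keeps working against the queue.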
First, ship the logs into Redis with Logstash; this requires the redis output plugin:
1. Install Redis. Here it is installed with yum, from the EPEL repository.
2. Edit the Redis configuration:
```
[root@node3 ~]# egrep -v "^#|^$" /etc/redis.conf
bind 192.168.44.136
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
```
Start the Redis service:

```
[root@node3 ~]# /etc/init.d/redis start
Starting:                 [  OK  ]
```
Write a Logstash config file that pushes the log data into Redis:
```
[root@node3 conf.d]# cat redis_out.conf
input {
    file {
        path => ["/var/log/nginx/access.log"]
        start_position => "beginning"
    }
}
output {
    redis {
        host => ["192.168.44.136"]
        data_type => "list"
        db => 1
        key => "nginx"
    }
}
```
Then start Logstash:

```
[root@node3 conf.d]# /usr/share/logstash/bin/logstash -f redis_out.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
```
Log in to Redis and check whether data has arrived:

```
[root@node3 ~]# redis-cli -h 192.168.44.136
192.168.44.136:6379> select 1
OK
192.168.44.136:6379[1]> llen nginx
(integer) 19
```
`select 1`: the 1 is the `db` value from the Logstash config above.
`llen nginx`: the nginx is the `key` value from the Logstash config above.
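To illustrate how the output plugin's settings map onto the Redis keyspace, here is a small in-memory sketch (hypothetical Python, not the real Redis server): `db` chooses the database that `select` switches to, and `key` names the list that each event is appended to.

```python
# Hypothetical in-memory model of the Redis keyspace.
# A stock Redis server ships with 16 numbered databases (0-15).
databases = {n: {} for n in range(16)}

def rpush(db, key, value):
    """Append a value to the list `key` in database `db` (like RPUSH)."""
    databases[db].setdefault(key, []).append(value)

def llen(db, key):
    """Length of the list `key` in database `db` (like LLEN)."""
    return len(databases[db].get(key, []))

# 19 access-log lines shipped by Logstash land in db 1, key "nginx",
# matching db => 1 and key => "nginx" in the config above.
for i in range(19):
    rpush(1, 'nginx', f'access log line {i}')

print(llen(1, 'nginx'))   # 19, the same count llen nginx reports above
```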
Next, read the data back out of Redis with Logstash and send it to Elasticsearch:
```
[root@node3 conf.d]# cat redis_input.conf
input {
    redis {
        host => "192.168.44.136"
        data_type => "list"
        db => 1
        key => "nginx"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.44.134:9200"]
        index => "redis-nginx"
    }
}
```

Note that, unlike the redis output, the redis input plugin takes a single `host` string rather than an array.
Start it:

```
[root@node3 conf.d]# /usr/share/logstash/bin/logstash -f redis_input.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
```
Log in to Redis again and check whether the data is still there:

```
[root@node3 ~]# redis-cli -h 192.168.44.136
192.168.44.136:6379> select 1
OK
192.168.44.136:6379[1]> llen nginx
(integer) 0
192.168.44.136:6379[1]>
```
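The reason `llen` now returns 0 is that the redis input plugin reads destructively: each event it hands to the elasticsearch output is removed from the list. A minimal Python sketch of that drain step (an in-memory stand-in, not the real plugins):

```python
# Stand-in for the Redis list holding the 19 shipped events.
queue = [f'access log line {i}' for i in range(19)]
indexed = []

# The redis input pops events until the list is empty, handing each
# one to the elasticsearch output (here just a plain Python list).
while queue:
    event = queue.pop(0)    # destructive read: the entry is removed
    indexed.append(event)   # stand-in for an Elasticsearch index call

print(len(queue))    # 0, matching llen nginx above
print(len(indexed))  # 19 events forwarded downstream
```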
The data in Redis has been fully consumed. Now check whether Elasticsearch has created the corresponding index:
Then add the Elasticsearch index in Kibana:
That completes the whole pipeline; the Redis buffer can be swapped for another message-queue service.
Next, we will work with Kibana to present the collected data more clearly: