The previous posts covered the basics of the ELK architecture and several implementation options for a centralized log analysis system:
- ELK+Redis
- ELK+Filebeat
- ELK+Filebeat+Redis
- ELK+Filebeat+Kafka+ZooKeeper
A further evolution of ELK is EFK, where the F stands for Filebeat. Filebeat is a lightweight data collection engine built from the source code of the original Logstash-forwarder. In other words, Filebeat is the new Logstash-forwarder, and it is the first choice for the shipper role in an ELK Stack.
This post uses the ELK+Redis approach. Below is a walkthrough of deploying a clustered log analysis platform built from ELK plus Redis. The rough architecture:
+ Elasticsearch is a distributed search and analytics engine; stability, horizontal scalability, and ease of management were its main design goals.
+ Logstash is a flexible pipeline for collecting, transforming, and shipping data.
+ Kibana is a data visualization platform that lets you interact with data by turning it into striking, powerful visuals.
Wiring the three together (collection and processing, storage and analysis, visualization) is what makes up ELK.
Basic flow:
1) The Logstash shipper collects log data and sends it to Redis.
2) Redis serves as a message queue here, guarding against log loss when the Elasticsearch service misbehaves. [Note: during testing, if the volume of logs written to Redis is small, they are shipped to Elasticsearch almost immediately; once shipped, the key in Redis is gone and can no longer be inspected. A quick verification sketch follows this list.]
3) A Logstash indexer reads the log data from Redis and sends it to Elasticsearch.
4) Elasticsearch stores the logs and makes them searchable.
5) Kibana is the visualization front end for Elasticsearch.
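To watch the Redis buffering stage directly, you can inspect the list key from any host that can reach the Redis VIP. This is a minimal sketch; the VIP 192.168.10.217, db 1, and key nc-log are taken from the shipper configuration shown later in this post, so substitute your own values:

redis-cli -h 192.168.10.217 -n 1 LLEN nc-log          # number of log events currently buffered
redis-cli -h 192.168.10.217 -n 1 LRANGE nc-log 0 0    # peek at the oldest buffered event

If the indexer Logstash is running and the log volume is small, LLEN will usually report 0, which matches the note in step 2 above.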
1) Machine environment

Hostname       IP address        Services deployed
elk-node01     192.168.10.213    es01, redis01
elk-node02     192.168.10.214    es02, redis02 (vip: 192.168.10.217)
elk-node03     192.168.10.215    es03, kibana, nginx

All three nodes run CentOS 7.4:
[root@elk-node01 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

Set each node's hostname:
[root@localhost ~]# hostname elk-node01
[root@localhost ~]# hostnamectl set-hostname elk-node01

Disable iptables and SELinux on all three nodes:
[root@elk-node01 ~]# systemctl stop firewalld.service
[root@elk-node01 ~]# systemctl disable firewalld.service
[root@elk-node01 ~]# firewall-cmd --state
not running
[root@elk-node01 ~]# setenforce 0
[root@elk-node01 ~]# getenforce
Disabled
[root@elk-node01 ~]# vim /etc/sysconfig/selinux
......
SELINUX=disabled

Bind the hostnames in /etc/hosts on all three nodes:
[root@elk-node01 ~]# cat /etc/hosts
......
192.168.10.213 elk-node01
192.168.10.214 elk-node02
192.168.10.215 elk-node03

Synchronize the system time on all three nodes:
[root@elk-node01 ~]# yum install -y ntpdate
[root@elk-node01 ~]# ntpdate ntp1.aliyun.com

Deploy a Java 8 environment on all three nodes.
Download: https://pan.baidu.com/s/1pLaAjPp  (extraction code: x27s)
[root@elk-node01 ~]# rpm -ivh jdk-8u131-linux-x64.rpm --force
[root@elk-node01 ~]# vim /etc/profile
......
JAVA_HOME=/usr/java/jdk1.8.0_131
JAVA_BIN=/usr/java/jdk1.8.0_131/bin
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/bin:/sbin/
CLASSPATH=.:/lib/dt.jar:/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
[root@elk-node01 ~]# source /etc/profile
[root@elk-node01 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
2) Deploy the Elasticsearch cluster
a) Install Elasticsearch (on all three nodes; during deployment every node needs normal outbound internet access):

[root@elk-node01 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[root@elk-node01 ~]# yum install -y elasticsearch

b) Configure the Elasticsearch cluster

elk-node01 configuration:
[root@elk-node01 ~]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@elk-node01 ~]# cat /etc/elasticsearch/elasticsearch.yml|grep -v "#"
cluster.name: kevin-elk             #Cluster name; it must be identical on all three nodes
node.name: elk-node01               #Node name, normally the local hostname. It must be resolvable, i.e. bound in /etc/hosts on every node.
path.data: /data/es-data            #Cluster data directory; it must be owned by the elasticsearch user
path.logs: /var/log/elasticsearch   #Log path (this is the default)
network.host: 192.168.10.213        #Address the service binds to, normally the local IP; 0.0.0.0 also works
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]    #Hosts in the cluster; the nodes discover each other and elect the master automatically

[root@elk-node01 ~]# mkdir -p /data/es-data
[root@elk-node01 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node01 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node01 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/
[root@elk-node01 ~]# systemctl daemon-reload
[root@elk-node01 ~]# systemctl enable elasticsearch
[root@elk-node01 ~]# systemctl start elasticsearch
[root@elk-node01 ~]# systemctl status elasticsearch
[root@elk-node01 ~]# lsof -i:9200
-------------------------------------------------------------------------------------
elk-node02 configuration:
[root@elk-node02 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v "#"
cluster.name: kevin-elk
node.name: elk-node02
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: 192.168.10.214
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]

[root@elk-node02 ~]# mkdir -p /data/es-data
[root@elk-node02 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node02 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node02 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/
[root@elk-node02 ~]# systemctl daemon-reload
[root@elk-node02 ~]# systemctl enable elasticsearch
[root@elk-node02 ~]# systemctl start elasticsearch
[root@elk-node02 ~]# systemctl status elasticsearch
[root@elk-node02 ~]# lsof -i:9200
-------------------------------------------------------------------------------------
elk-node03 configuration:
[root@elk-node03 ~]# cat /etc/elasticsearch/elasticsearch.yml|grep -v "#"
cluster.name: kevin-elk
node.name: elk-node03
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: 192.168.10.215
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]

[root@elk-node03 ~]# mkdir -p /data/es-data
[root@elk-node03 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/
[root@elk-node03 ~]# systemctl daemon-reload
[root@elk-node03 ~]# systemctl enable elasticsearch
[root@elk-node03 ~]# systemctl start elasticsearch
[root@elk-node03 ~]# systemctl status elasticsearch
[root@elk-node03 ~]# lsof -i:9200

c) View the Elasticsearch cluster information (the commands below can be run on any node)
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/nodes'
192.168.10.213 192.168.10.213 8 49 0.01 d * elk-node01          #The * marks the master node.
192.168.10.214 192.168.10.214 8 49 0.00 d m elk-node02
192.168.10.215 192.168.10.215 8 59 0.00 d m elk-node03

Append ?v for verbose output:
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/nodes?v'
host           ip             heap.percent ram.percent load node.role master name
192.168.10.213 192.168.10.213 8            49          0.00 d         *      elk-node01
192.168.10.214 192.168.10.214 8            49          0.06 d         m      elk-node02
192.168.10.215 192.168.10.215 8            59          0.00 d         m      elk-node03

Querying the cluster state:
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/state/nodes?pretty'
{
  "cluster_name" : "kevin-elk",
  "nodes" : {
    "1GGuoA9FT62vDw978HSBOA" : {
      "name" : "elk-node01",
      "transport_address" : "192.168.10.213:9300",
      "attributes" : { }
    },
    "EN8L2mP_RmipPLF9KM5j7Q" : {
      "name" : "elk-node02",
      "transport_address" : "192.168.10.214:9300",
      "attributes" : { }
    },
    "n75HL99KQ5GPqJDk6F2W2A" : {
      "name" : "elk-node03",
      "transport_address" : "192.168.10.215:9300",
      "attributes" : { }
    }
  }
}

Querying the cluster master:
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/state/master_node?pretty'
{
  "cluster_name" : "kevin-elk",
  "master_node" : "1GGuoA9FT62vDw978HSBOA"
}
or
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/master?v'
id                     host           ip             node
1GGuoA9FT62vDw978HSBOA 192.168.10.213 192.168.10.213 elk-node01

Querying the cluster health (three possible states: green, yellow, red; green means healthy):
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/health?v'
epoch      timestamp cluster   status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1527576950 14:55:50  kevin-elk green  3          3         0      0   0    0    0        0             -                  100.0%
or
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/health?pretty'
{
  "cluster_name" : "kevin-elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
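The health query lends itself to a tiny watchdog. A minimal sketch, hitting the same endpoint as above with only a placeholder echo for the alert action:

#!/bin/bash
# check_es_health.sh - poll cluster health and complain when it is not green
STATUS=$(curl -s 'http://192.168.10.213:9200/_cat/health?h=status' | tr -d ' ')
if [ "$STATUS" != "green" ]; then
    echo "elasticsearch cluster status: ${STATUS:-unreachable}"    # replace the echo with your mail/alert hook
fi

Dropped into cron, this gives a basic green/yellow/red alarm with no extra tooling.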
d) Install the Elasticsearch plugins online (do this on all three nodes; each machine needs normal outbound internet access)

Install the head plugin:
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/share/elasticsearch/plugins/head

Install the kopf plugin:
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /usr/share/elasticsearch/plugins/kopf

Install the bigdesk plugin:
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install hlstudio/bigdesk
-> Installing hlstudio/bigdesk...
Trying https://github.com/hlstudio/bigdesk/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/hlstudio/bigdesk/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed bigdesk into /usr/share/elasticsearch/plugins/bigdesk

After installing the three plugins, remember to set ownership on the plugins directory and restart the elasticsearch service:
[root@elk-node01 ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node01 ~]# ll /usr/share/elasticsearch/plugins
total 4
drwxr-xr-x. 3 elasticsearch elasticsearch  124 May 29 14:58 bigdesk
drwxr-xr-x. 6 elasticsearch elasticsearch 4096 May 29 14:56 head
drwxr-xr-x. 8 elasticsearch elasticsearch  230 May 29 14:57 kopf
[root@elk-node01 ~]# systemctl restart elasticsearch
[root@elk-node01 ~]# lsof -i:9200            #After the restart, port 9200 takes a little while to come back up
COMMAND   PID          USER  FD   TYPE DEVICE SIZE/OFF NODE NAME
java    31855 elasticsearch 107u  IPv6  87943      0t0  TCP elk-node01:wap-wsp (LISTEN)

The plugin status pages can then be viewed directly at http://ip:9200/_plugin/<plugin name>. In the head cluster-management view, a star marks the master node. Since the plugins are installed on all three node machines here, the plugin pages are reachable on any of them.
For example, through elk-node01's IP the three plugins are at http://192.168.10.213:9200/_plugin/head, http://192.168.10.213:9200/_plugin/kopf, and http://192.168.10.213:9200/_plugin/bigdesk, as shown below:
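As a quick reachability check from the command line (a rough sketch; these plugin pages are static sites, so a 200 response should just mean the plugin is installed and being served):

curl -I http://192.168.10.213:9200/_plugin/head/       # expect HTTP/1.1 200 OK
curl -I http://192.168.10.213:9200/_plugin/kopf/
curl -I http://192.168.10.213:9200/_plugin/bigdesk/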
3) Redis+Keepalived high-availability deployment
See this separate post for the procedure, which is omitted here: https://www.cnblogs.com/kevingrace/p/9001975.html

[root@elk-node01 ~]# redis-cli -h 192.168.10.213 INFO|grep role
role:master
[root@elk-node01 ~]# redis-cli -h 192.168.10.214 INFO|grep role
role:slave
[root@elk-node01 ~]# redis-cli -h 192.168.10.217 INFO|grep role
role:master
[root@elk-node01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ae:01:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.213/24 brd 192.168.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.10.217/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::7562:4278:d71d:f862/64 scope link
       valid_lft forever preferred_lft forever

That is, the Redis master initially sits on the elk-node01 node.
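To confirm the VIP actually fails over before relying on it, a rough test is to stop Redis on the current master and ask the VIP for its role again. This sketch assumes the Redis instances run under a systemd unit named redis, which depends on how they were installed:

systemctl stop redis                                # on elk-node01, take down the current master
redis-cli -h 192.168.10.217 INFO | grep role        # the VIP should now answer as master from elk-node02

Afterwards start Redis on elk-node01 again; whether it takes the master role back depends on the keepalived preemption settings.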
4) Deploy Kibana with nginx proxy access in front of it (access control). Performed on the elk-node03 node.
a) Install and configure Kibana (official download page: https://www.elastic.co/downloads)

[root@elk-node03 ~]# cd /usr/local/src/
[root@elk-node03 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.6.6-linux-x86_64.tar.gz
[root@elk-node03 src]# tar -zvxf kibana-4.6.6-linux-x86_64.tar.gz

Because many business systems are maintained here, each system's logs should be visible in the Kibana UI only to the people on that system, and to nobody outside it, so Kibana access control is required. It is implemented here through nginx access authentication.
Multiple Kibana instances can be configured on separate ports, one port per system: for example, the finance system's Kibana on port 5601 and the leasing system's on port 5602, with nginx proxying in front. Each system's logs are then displayed only in the Kibana instance on its own port.

[root@elk-node03 src]# cp -r kibana-4.6.6-linux-x86_64 /usr/local/nc-5601-kibana
[root@elk-node03 src]# cp -r kibana-4.6.6-linux-x86_64 /usr/local/zl-5602-kibana
[root@elk-node03 src]# ll -d /usr/local/*-kibana
drwxr-xr-x. 11 root root 203 May 29 16:49 /usr/local/nc-5601-kibana
drwxr-xr-x. 11 root root 203 May 29 16:49 /usr/local/zl-5602-kibana

Edit the config files:
[root@elk-node03 src]# vim /usr/local/nc-5601-kibana/config/kibana.yml
......
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.10.213:9200"      #Address of the elasticsearch master node
kibana.index: ".nc-kibana"

[root@elk-node03 src]# vim /usr/local/zl-5602-kibana/config/kibana.yml
......
server.port: 5602
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.10.213:9200"
kibana.index: ".zl-kibana"

Install screen and start Kibana:
[root@elk-node03 src]# yum -y install screen
[root@elk-node03 src]# screen
[root@elk-node03 src]# /usr/local/nc-5601-kibana/bin/kibana     #press Ctrl+a then d to detach and leave it running in the background
[root@elk-node03 src]# screen
[root@elk-node03 src]# /usr/local/zl-5602-kibana/bin/kibana     #press Ctrl+a then d to detach and leave it running in the background
[root@elk-node03 src]# lsof -i:5601
COMMAND   PID USER  FD  TYPE  DEVICE SIZE/OFF NODE NAME
node    32627 root 13u  IPv4 1028042      0t0  TCP *:esmagent (LISTEN)
[root@elk-node03 src]# lsof -i:5602
COMMAND   PID USER  FD  TYPE  DEVICE SIZE/OFF NODE NAME
node    32659 root 13u  IPv4 1029133      0t0  TCP *:a1-msc (LISTEN)

--------------------------------------------------------------------------------------
Next, configure the nginx reverse proxy and access authentication:
[root@elk-node03 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel
[root@elk-node03 ~]# cd /usr/local/src/
[root@elk-node03 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@elk-node03 src]# tar -zvxf nginx-1.9.7.tar.gz
[root@elk-node03 src]# cd nginx-1.9.7
[root@elk-node03 nginx-1.9.7]# useradd www -M -s /sbin/nologin
[root@elk-node03 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@elk-node03 nginx-1.9.7]# make && make install

The nginx configuration:
[root@elk-node03 nginx-1.9.7]# cd /usr/local/nginx/conf/
[root@elk-node03 conf]# cp nginx.conf nginx.conf.bak
[root@elk-node03 conf]# cat nginx.conf
user  www;
worker_processes  8;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
#pid        logs/nginx.pid;

events {
    worker_connections  65535;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    charset utf-8;

    ######
    ## set access log format
    ######
    log_format main '$http_x_forwarded_for $remote_addr $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_cookie" $host $request_time';

    #######
    ## http setting
    #######
    sendfile       on;
    tcp_nopush     on;
    tcp_nodelay    on;
    keepalive_timeout  65;
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path /var/www/cache/tmp;

    fastcgi_connect_timeout 3000;
    fastcgi_send_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffer_size 256k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;

    client_header_timeout 600s;
    client_body_timeout 600s;
    # client_max_body_size 50m;
    client_max_body_size 100m;
    client_body_buffer_size 256k;

    gzip  on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 9;
    gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;

    ## includes vhosts
    include vhosts/*.conf;
}

[root@elk-node03 conf]# mkdir vhosts
[root@elk-node03 conf]# cd vhosts/
[root@elk-node03 vhosts]# vim nc_kibana.conf
server {
    listen 15601;
    server_name localhost;
    location / {
        proxy_pass http://192.168.10.215:5601/;
        auth_basic "Access Authorized";
        auth_basic_user_file /usr/local/nginx/conf/nc_auth_password;
    }
}

[root@elk-node03 vhosts]# vim zl_kibana.conf
server {
    listen 15602;
    server_name localhost;
    location / {
        proxy_pass http://192.168.10.215:5602/;
        auth_basic "Access Authorized";
        auth_basic_user_file /usr/local/nginx/conf/zl_auth_password;
    }
}

[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx
[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx -s reload
[root@elk-node03 vhosts]# lsof -i:15601
[root@elk-node03 vhosts]# lsof -i:15602

---------------------------------------------------------------------------------------------
Set up authenticated access.
Create htpasswd-style files (if the htpasswd command is missing, install it with "yum install -y *htpasswd*" or "yum install -y httpd"):
[root@elk-node03 vhosts]# yum install -y *htpasswd*

Create the credentials for the finance system's Kibana:
[root@elk-node03 vhosts]# htpasswd -c /usr/local/nginx/conf/nc_auth_password nclog
New password:
Re-type new password:
Adding password for user nclog
[root@elk-node03 vhosts]# cat /usr/local/nginx/conf/nc_auth_password
nclog:$apr1$WLHsdsCP$PLLNJB/wxeQKy/OHp/7o2.

Create the credentials for the leasing system's Kibana:
[root@elk-node03 vhosts]# htpasswd -c /usr/local/nginx/conf/zl_auth_password zllog
New password:
Re-type new password:
Adding password for user zllog
[root@elk-node03 vhosts]# cat /usr/local/nginx/conf/zl_auth_password
zllog:$apr1$dRHpzdwt$yeJxnL5AAQh6A6MJFPCEM1

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
htpasswd usage tips
1) When generating the file for the first time, use -c and give a username after the file name. The password cannot be passed inline; it is entered twice at the prompt.
# htpasswd -c /usr/local/nginx/conf/nc_auth_password nclog
2) Once the file exists, add further users with -b, which does accept the username and password inline.
Do not pass -c at this point, or the previously created user entries will be overwritten.
# htpasswd -b /usr/local/nginx/conf/nc_auth_password kevin kevin@123
3) Delete a user with -D.
# htpasswd -D /usr/local/nginx/conf/nc_auth_password kevin
4) To change a user's password, delete the user and re-create it.
# htpasswd -D /usr/local/nginx/conf/nc_auth_password kevin
# htpasswd -b /usr/local/nginx/conf/nc_auth_password kevin keivn@#2312
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
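A quick sketch to confirm the proxy and the auth file cooperate (substitute the password set above):

curl -I http://192.168.10.215:15601/                         # expect HTTP 401 without credentials
curl -I -u nclog:<password> http://192.168.10.215:15601/     # expect HTTP 200 with valid credentials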
5) Collecting logs on the client machines (Logstash)
1) Install logstash
[root@elk-client ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
[root@elk-client ~]# yum install -y logstash
[root@elk-client ~]# ll -d /opt/logstash/
drwxr-xr-x. 5 logstash logstash 160 May 29 17:45 /opt/logstash/

2) Adjust the Java environment.
Some servers are pinned to Java 6 or 7 by their own application code, while newer Logstash releases require Java 8. In that case, the only way to install Logstash is to set up a separate Java 8 for Logstash's own use.
[root@elk-client ~]# java -version
java version "1.6.0_151"
OpenJDK Runtime Environment (rhel-2.6.11.0.el6_9-x86_64 u151-b00)
OpenJDK 64-Bit Server VM (build 24.151-b00, mixed mode)

Download jdk-8u172-linux-x64.tar.gz into /usr/local/src.
Download: https://pan.baidu.com/s/1z3L4Q24AuHA2r6KT6oT9vw  (extraction code: dprz)
[root@elk-client ~]# cd /usr/local/src/
[root@elk-client src]# tar -zvxf jdk-8u172-linux-x64.tar.gz
[root@elk-client src]# mv jdk1.8.0_172 /usr/local/

Append the following two lines to the end of /etc/sysconfig/logstash:
[root@elk-client src]# vim /etc/sysconfig/logstash
.......
JAVA_CMD=/usr/local/jdk1.8.0_172/bin
JAVA_HOME=/usr/local/jdk1.8.0_172

Add the following line to /opt/logstash/bin/logstash.lib.sh:
[root@elk-client src]# vim /opt/logstash/bin/logstash.lib.sh
.......
export JAVA_HOME=/usr/local/jdk1.8.0_172

With that, logstash no longer reports a Java version error when collecting logs.

3) Collect logs with logstash
------------------------------------------------------------
For example, collecting the finance system's logs:
[root@elk-client ~]# mkdir /opt/nc
[root@elk-client ~]# cd /opt/nc
[root@elk-client nc]# vim redis-input.conf
input {
    file {
        path => "/data/nc-tomcat/logs/catalina.out"
        type => "nc-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^[a-zA-Z0-9]|[^ ]+"      #Match log entries that begin with a letter (upper or lower case), a digit, or a space
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "nc-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "1"
            data_type => "list"
            key => "nc-log"
        }
    }
}

[root@elk-client nc]# vim file.conf
input {
    redis {
        type => "nc-log"
        host => "192.168.10.217"        #VIP address of the Redis HA pair
        port => "6379"
        db => "1"
        data_type => "list"
        key => "nc-log"
    }
}
output {
    if [type] == "nc-log"{
        elasticsearch {
            hosts => ["192.168.10.213:9200"]      #Address of the elasticsearch cluster's master node
            index => "nc-app01-nc-log-%{+YYYY.MM.dd}"
        }
    }
}

Verify that the logstash collection configs are OK:
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/redis-input.conf --configtest
Configuration OK
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/file.conf --configtest
Configuration OK

Start the log collection processes:
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/redis-input.conf &
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/file.conf &
[root@elk-client nc]# ps -ef|grep logstash

-------------------------------------------------------------------
Similarly, collecting the leasing system's logs:
[root@elk-client ~]# mkdir /opt/zl
[root@elk-client ~]# cd /opt/zl
[root@elk-client zl]# vim redis-input.conf
input {
    file {
        path => "/data/zl-tomcat/logs/catalina.out"
        type => "zl-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^[a-zA-Z0-9]|[^ ]+"
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "zl-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "zl-log"
        }
    }
}

[root@elk-client zl]# vim file.conf
input {
    redis {
        type => "zl-log"
        host => "192.168.10.217"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "zl-log"
    }
}
output {
    if [type] == "zl-log"{
        elasticsearch {
            hosts => ["192.168.10.213:9200"]
            index => "zl-app01-zl-log-%{+YYYY.MM.dd}"
        }
    }
}

[root@elk-client zl]# /opt/logstash/bin/logstash -f /opt/zl/redis-input.conf --configtest
Configuration OK
[root@elk-client zl]# /opt/logstash/bin/logstash -f /opt/zl/file.conf --configtest
Configuration OK
[root@elk-client zl]# ps -ef|grep logstash

When new data is written to the finance and leasing logs above, logstash picks it up, and each system's logs end up displayed in its own Kibana.
Open the head plugin to view the collected log data (an index shows up in the head UI only after the logstash processes have started and new log data has been written).
Adding the finance system's logs to its Kibana for display
Adding the leasing system's logs to its Kibana for display
======== Notes on Logstash's multiline codec (matching multi-line logs) ========
When processing logs, runtime logs need handling in addition to access logs. Runtime logs are mostly written by the application itself, for example through log4j. Their biggest difference from access logs is that they are multi-line: several consecutive lines express one message. If they can be processed as multi-line units, splitting them into fields becomes easy. That is what Logstash's multiline codec, which matches multi-line logs, is for. First, look at the following Java log:
[2016-05-20 11:54:24,106][INFO][cluster.metadata ] [node-1][.kibana] creating index,cause [api],template [],shards [1]/[1],mappings [config]
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)
Now look at how these log entries render in the Kibana UI:
As you can see, each "at ..." line really belongs to a single event's information, yet Logstash has displayed them as separate entries, which makes them hard to read. To solve this, use the file plugin among Logstash's input plugins; one of its sub-features is the codec option multiline. The official description of the multiline codec is "Merges multiline messages into a single event", i.e. it merges multi-line messages into one event.
Inspecting the Java logs on the client machine shows that every standalone event begins with a "[" square bracket, so the bracket can serve as the marker, combined with the multiline codec to merge the messages. The plugin usage is shown below; it essentially says "merge every line that does not begin with [ into the preceding event":
[root@elk-client zl]# vim redis-input.conf
input {
    file {
        path => "/data/zl-tomcat/logs/catalina.out"
        type => "zl-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => previous
        }
    }
}
output {
    if [type] == "zl-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "zl-log"
        }
    }
}
Explanation:
pattern => "^\[" is the regular expression used for matching. How to match multi-line logs depends on the actual log content; here the marker is "[", but any regex suited to the specific logs will work.
negate => true controls whether the pattern's result is inverted. The default false selects lines that match; true selects lines that do not match. There is no double negative here: since negate itself means negation, with this setting it is the lines that do not begin with the square bracket that qualify and are subsequently merged.
what => previous takes one of previous or next: previous makes the codec merge the qualifying content into the preceding event, next into the following one.
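A handy way to experiment with these three settings before touching real logs is a throwaway stdin-to-stdout pipeline. This is a minimal sketch (the file name /tmp/multiline-test.conf is arbitrary): paste a few sample log lines, press Ctrl+D, and inspect the merged events it prints.

input {
    stdin {
        codec => multiline {
            pattern => "^\["        # a line starting with "[" begins a new event
            negate => true          # ...so lines NOT matching the pattern...
            what => "previous"      # ...are merged into the previous event
        }
    }
}
output {
    stdout { codec => rubydebug }   # print each merged event in full for inspection
}

Run it with /opt/logstash/bin/logstash -f /tmp/multiline-test.conf and type or paste the sample input directly.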
After the codec has tidied them up, the messages are far easier to read in the Kibana UI, as the screenshot below shows:
multiline settings
For the multiline codec, three settings matter most: negate, pattern, and what.
negate
- type: boolean
- default: false
Negates the regular expression (selects the lines the pattern does not match).
pattern
- required
- type: string
- no default
The regular expression to match.
what
- required
- one of: previous or next
- no default
If the regular expression matched, does the event belong to the next or the previous event?
==============================================
Another example:
Look at the following Java log:
[root@elk-client ~]# tail -f /data/nc-tomcat/logs/catalina.out
........
........
$$callid=1527643542536-4261 $$thread=[WebContainer : 23] $$host=10.0.52.21 $$userid=1001A6100000000006KR $$ts=2018-05-30 09:25:42 $$remotecall=[nc.bs.dbcache.intf.ICacheVersionBS] $$debuglevel=ERROR $$msg=<Select CacheTabName, CacheTabVersion From BD_cachetabversion where CacheTabVersion >= null order by CacheTabVersion desc>throws ORA-00942: table or view does not exist
$$callid=1527643542536-4261 $$thread=[WebContainer : 23] $$host=10.0.52.21 $$userid=1001A6100000000006KR $$ts=2018-05-30 09:25:42 $$remotecall=[nc.bs.dbcache.intf.ICacheVersionBS] $$debuglevel=ERROR $$msg=sql original exception
java.sql.SQLException: ORA-00942: table or view does not exist
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1203)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteQuery(WSJdbcPreparedStatement.java:1110)
at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery(WSJdbcPreparedStatement.java:712)
at nc.jdbc.framework.crossdb.CrossDBPreparedStatement.executeQuery(CrossDBPreparedStatement.java:103)
at nc.jdbc.framework.JdbcSession.executeQuery(JdbcSession.java:297)

The log above shows that every standalone event begins with "$", so that character can serve as the marker, combined with the multiline codec to merge the messages:

[root@elk-client nc]# vim redis-input.conf
input {
    file {
        path => "/data/nc-tomcat/logs/catalina.out"
        type => "nc-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\$"        #Match log lines beginning with $. (If each line began with a date, such as "2018-05-30 11:42.....", this would be pattern => "^[0-9]", i.e. match lines beginning with a digit)
            negate => true          #Select the lines that do NOT match
            what => "previous"      #...and merge them into the content of the preceding line
        }
    }
}
output {
    if [type] == "nc-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "1"
            data_type => "list"
            key => "nc-log"
        }
    }
}

With this adjustment, log in to the Kibana UI and the merged multi-line events are visible (if a merged event is very long, click the small arrow shown in the screenshot and open the message field directly to read the full merged content):
=======================================================
The example above also shows that Chinese text in the logs collected by logstash appears garbled in the Kibana display.
The fix is to specify the character encoding in the logstash collection config. Use the "file" command to check the log file's encoding:
1) If the command reports the log is UTF-8, set charset to UTF-8 in the logstash config. (In practice, when the file is UTF-8, the charset setting can be omitted; Chinese text in the collected logs displays correctly by default.)
2) If the command reports the log is not UTF-8, set charset to GB2312 throughout the logstash config.
The concrete steps:
[root@elk-client ~]# file /data/nchome/nclogs/master/nc-log.log
/data/nchome/nclogs/master/nc-log.log: ISO-8859 English text, with very long lines, with CRLF, LF line terminators

The file command shows this log's encoding is not UTF-8, so charset is set to GB2312 in the logstash config.
Following the earlier example, only redis-input.conf needs the charset line; file.conf stays unchanged:

[root@elk-client nc]# vim redis-input.conf
input {
    file {
        path => "/data/nc-tomcat/logs/catalina.out"
        type => "nc-log"
        start_position => "beginning"
        codec => multiline {
            charset => "GB2312"     #Add this line
            pattern => "^\$"
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "nc-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "1"
            data_type => "list"
            key => "nc-log"
        }
    }
}

Restart the logstash process, log back in to Kibana, and the Chinese text displays correctly!
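To double-check an encoding guess before wiring it into logstash, iconv can transcode a sample of the file by hand. A rough check (GB2312 here mirrors the charset chosen above):

iconv -f GB2312 -t UTF-8 /data/nchome/nclogs/master/nc-log.log | head    # readable Chinese output suggests the guess is right; conversion errors suggest it is not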
=============================================================
logstash can also collect logs that live under directories named after the current date, for example:
[root@elk-client ~]# cd /data/yxhome/yx_data/applog
[root@elk-client applog]# ls
20180528  20180529  20180530  20180531  20180601  20180602  20180603  20180604
[root@elk-client applog]# ls 20180604
cm.log  timsserver.log

The date-named directories under /data/yxhome/yx_data/applog are created at midnight:
[root@elk-client ~]# ll -d /data/yxhome/yx_data/applog/20180603
drwxr-xr-x 2 root root 4096 Jun  3 00:00 /data/yxhome/yx_data/applog/20180603

The path setting under input->file in a logstash config cannot embed `date +%Y%m%d` or $(date +%Y%m%d).
My approach: a script symlinks each day's logs to a fixed location, and the logstash path points at the symlinked files (a glob-based alternative is sketched after this section).

[root@elk-client ~]# vim /mnt/yx_log_line.sh
#!/bin/bash
/bin/rm -f /mnt/yx_log/*
/bin/ln -s /data/yxhome/yx_data/applog/$(date +%Y%m%d)/cm.log /mnt/yx_log/cm.log
/bin/ln -s /data/yxhome/yx_data/applog/$(date +%Y%m%d)/timsserver.log /mnt/yx_log/timsserver.log

[root@elk-client ~]# chmod 755 /mnt/yx_log_line.sh
[root@elk-client ~]# /bin/bash -x /mnt/yx_log_line.sh
[root@elk-client ~]# ll /mnt/yx_log
total 0
lrwxrwxrwx 1 root root 43 Jun  4 14:29 cm.log -> /data/yxhome/yx_data/applog/20180604/cm.log
lrwxrwxrwx 1 root root 51 Jun  4 14:29 timsserver.log -> /data/yxhome/yx_data/applog/20180604/timsserver.log
[root@elk-client ~]# crontab -l
0 3 * * * /bin/bash -x /mnt/yx_log_line.sh > /dev/null 2>&1

The logstash config (collection for several logs combined in one file):
[root@elk-client ~]# cat /opt/redis-input.conf
input {
    file {
        path => "/data/nchome/nclogs/master/nc-log.log"
        type => "nc-log"
        start_position => "beginning"
        codec => multiline {
            charset => "GB2312"
            pattern => "^\$"
            negate => true
            what => "previous"
        }
    }
    file {
        path => "/mnt/yx_log/timsserver.log"
        type => "yx-timsserver.log"
        start_position => "beginning"
        codec => multiline {
            charset => "GB2312"
            pattern => "^[0-9]"     #Begins with a digit. This log actually starts with a 2018-style date, e.g. 2018-06-04 09:19:53,364:......
            negate => true
            what => "previous"
        }
    }
    file {
        path => "/mnt/yx_log/cm.log"
        type => "yx-cm.log"
        start_position => "beginning"
        codec => multiline {
            charset => "GB2312"
            pattern => "^[0-9]"
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "nc-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "nc-log"
        }
    }
    if [type] == "yx-timsserver.log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "4"
            data_type => "list"
            key => "yx-timsserver.log"
        }
    }
    if [type] == "yx-cm.log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "5"
            data_type => "list"
            key => "yx-cm.log"
        }
    }
}

[root@elk-client ~]# cat /opt/file.conf
input {
    redis {
        type => "nc-log"
        host => "192.168.10.217"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nc-log"
    }
    redis {
        type => "yx-timsserver.log"
        host => "192.168.10.217"
        port => "6379"
        db => "4"
        data_type => "list"
        key => "yx-timsserver.log"
    }
    redis {
        type => "yx-cm.log"
        host => "192.168.10.217"
        port => "6379"
        db => "5"
        data_type => "list"
        key => "yx-cm.log"
    }
}
output {
    if [type] == "nc-log"{
        elasticsearch {
            hosts => ["192.168.10.213:9200"]
            index => "elk-client(10.0.52.21)-nc-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "yx-timsserver.log"{
        elasticsearch {
            hosts => ["192.168.10.213:9200"]
            index => "elk-client(10.0.52.21)-yx-timsserver.log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "yx-cm.log"{
        elasticsearch {
            hosts => ["192.168.10.213:9200"]
            index => "elk-client(10.0.52.21)-yx-cm.log-%{+YYYY.MM.dd}"
        }
    }
}

First check that the configs are valid:
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf --configtest
Configuration OK
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/file.conf --configtest
Configuration OK

Then start them:
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf &
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/file.conf &
[root@elk-client ~]# ps -ef|grep logstash

Once new data is written to the log files, the corresponding indices appear in the elasticsearch head plugin and can then be added in the Kibana UI.
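As an aside, the file input's path setting does accept glob wildcards, so depending on how strictly the read set must be limited, the symlink script can sometimes be skipped altogether. A hedged sketch (untested against this exact directory tree; the wildcard matches every date-named directory, not just today's):

input {
    file {
        path => "/data/yxhome/yx_data/applog/*/cm.log"     # glob across the date-named directories
        type => "yx-cm.log"
        start_position => "beginning"
    }
}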
==================== Collecting IDC firewall logs with ELK ======================
1) Use rsyslog to collect the datacenter firewall's logs (the firewall's address is 10.1.32.105) onto a Linux server (call it Server A).
For the rsyslog side of the setup, see: http://www.cnblogs.com/kevingrace/p/5570411.html
The logs land on Server A at, for example:
[root@Server-A ~]# cd /data/fw_logs/10.1.32.105/
[root@Server-A 10.1.32.105]# ll
total 127796
-rw------- 1 root root 130855971 Jun 13 16:24 10.1.32.105_2018-06-13.log

rsyslog writes one log file per day, named after the current date, so a script symlinks each day's file to a fixed name:
[root@Server-A ~]# cat /data/fw_logs/log.sh
#!/bin/bash
/bin/unlink /data/fw_logs/firewall.log
/bin/ln -s /data/fw_logs/10.1.32.105/10.1.32.105_$(date +%Y-%m-%d).log /data/fw_logs/firewall.log

[root@Server-A ~]# sh /data/fw_logs/log.sh
[root@Server-A ~]# ll /data/fw_logs/firewall.log
lrwxrwxrwx 1 root root 52 Jun 13 15:17 /data/fw_logs/firewall.log -> /data/fw_logs/10.1.32.105/10.1.32.105_2018-06-13.log

Run it on a schedule via crontab:
[root@Server-A ~]# crontab -l
0 1 * * * /bin/bash -x /data/fw_logs/log.sh >/dev/null 2>&1
0 6 * * * /bin/bash -x /data/fw_logs/log.sh >/dev/null 2>&1

2) Configure logstash on Server A (logstash installation as above, omitted)
[root@Server-A ~]# cat /opt/redis-input.conf
input {
    file {
        path => "/data/fw_logs/firewall.log"
        type => "firewall-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^[a-zA-Z0-9]|[^ ]+"
            negate => true
            what => previous
        }
    }
}
output {
    if [type] == "firewall-log"{
        redis {
            host => "192.168.10.217"
            port => "6379"
            db => "5"
            data_type => "list"
            key => "firewall-log"
        }
    }
}

[root@Server-A ~]# cat /opt/file.conf
input {
    redis {
        type => "firewall-log"
        host => "192.168.10.217"
        port => "6379"
        db => "5"
        data_type => "list"
        key => "firewall-log"
    }
}
output {
    if [type] == "firewall-log"{
        elasticsearch {
            hosts => ["192.168.10.213:9200"]
            index => "firewall-log-%{+YYYY.MM.dd}"
        }
    }
}

[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf --configtest
Configuration OK
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/file.conf --configtest
Configuration OK
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf &
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/file.conf &

Note: a carelessly chosen index name in the logstash config can be invalid, because Elasticsearch index names must be lowercase. For example, changing "firewall-log-%{+YYYY.MM.dd}" above to "IDC-firewall-log-%{+YYYY.MM.dd}" makes logstash fail at startup with: index name is invalid!

Then log in to the Kibana UI and add the firewall-log index for display.
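To confirm the index is actually being created before adding it in Kibana, the _cat/indices endpoint used earlier works here as well:

[root@Server-A ~]# curl -XGET 'http://192.168.10.213:9200/_cat/indices?v' | grep firewall-log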