Preface:
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana.
• Elasticsearch is a search and analytics engine.
• Logstash is a server-side data-processing pipeline that can ingest data from multiple sources at once, transform it, and then send it to a store such as Elasticsearch.
• Kibana lets users visualize the data in Elasticsearch with graphs and charts.
Filebeat is installed on the web servers to collect their logs.
Installation:
!!! Make sure all component versions match !!! (this guide uses version 7.13.2)
1. Installing and configuring Elasticsearch
Version 7.10.2 was installed at first, so some screenshots show 7.10.2; mismatched versions cause version-conflict problems, so everything was later unified to 7.13.2.
Create the es group: groupadd es
Create the es user in that group: useradd es -g es (note: useradd -p expects an already-hashed password, so it is safer to set the password afterwards with passwd es)
Switch to the es user
[root@iZj6c49h0dw85252u6oxu0Z ~]# su es
Download the archive, or upload it to the server
[es@iZj6c49h0dw85252u6oxu0Z data]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.13.2-linux-x86_64.tar.gz
Extract it
[es@iZj6c49h0dw85252u6oxu0Z data]$ tar -zxvf elasticsearch-7.13.2-linux-x86_64.tar.gz
Grant the user ownership: chown -R es:es /data/elasticsearch-7.13.2
Start Elasticsearch
[es@iZj6c49h0dw85252u6oxu0Z es]$ cd elasticsearch-7.13.2/
[es@iZj6c49h0dw85252u6oxu0Z elasticsearch-7.13.2]$ bin/elasticsearch -d
Edit the three shell profile files, adding to each:
export JAVA_HOME=/data/elasticsearch-7.13.2/jdk
export PATH=$JAVA_HOME/bin:$PATH
Adjust the settings under /data/elasticsearch-7.13.2/config
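As a minimal sketch, a single-node elasticsearch.yml can look like this (the cluster and node names are illustrative assumptions, not values from this guide):

```yaml
# Minimal single-node configuration (illustrative values)
cluster.name: my-elk          # assumed name; pick anything
node.name: node-1             # assumed name
network.host: 0.0.0.0         # bind to all interfaces so remote hosts can connect
http.port: 9200
discovery.type: single-node   # run as a standalone single-node cluster
```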
Verify from the server: curl http://<internal IP>:9200
If the node information comes back, Elasticsearch is configured and started successfully!
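For reference, a successful response from the root endpoint looks roughly like this (names, UUIDs, and build details will differ on your machine; fields abbreviated):

```json
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.13.2",
    "build_flavor" : "default"
  },
  "tagline" : "You Know, for Search"
}
```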
To size the JVM heap, you can edit the elasticsearch-7.13.2/config/jvm.options file:
-Xms4g
-Xmx4g
Problems encountered during installation:
Error 1
[root@izuf672oio5mc4fbyj0s0jz ~]# curl http://47.244.38.173:9200/
curl: (7) Failed connect to 47.244.38.173:9200; Connection refused
Edit elasticsearch.yml, uncommenting and changing the bind address to network.host: 0.0.0.0, open the inbound rule in the Alibaba Cloud security group, then start ES again and it works.
[es@izuf672oio5mc4fbyj0s0jz elasticsearch-7.13.2]$ vi config/elasticsearch.yml
Error 2
[1] bootstrap checks failed
[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
As the root user, edit /etc/security/limits.conf:
su root
vim /etc/security/limits.conf
1. Add the following:
root soft nofile 65536
root hard nofile 65536
* soft nofile 65536
* hard nofile 65536
2. Edit /etc/sysctl.conf
Append vm.max_map_count=655360 at the end
When done, run sysctl -p to apply the change
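After logging in again as the es user, the new limits can be sanity-checked; a quick sketch (the expected values assume the settings above):

```shell
# Open-file limit for the current shell (expect 65536 after re-login)
ulimit -n

# Kernel mmap count limit (expect 655360 after sysctl -p)
cat /proc/sys/vm/max_map_count
```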
2. Installing and configuring Logstash
Upload and extract:
tar -zxvf logstash-7.13.2-linux-x86_64.tar.gz
After extracting, go into /config and make a copy of logstash-sample.conf named logstash.conf, to be used at startup:
cp /data/logstash-7.13.2/config/logstash-sample.conf /data/logstash-7.13.2/config/logstash.conf
Edit logstash.conf to contain the following:
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}

# Example source log path: log.file.path=/data/jars/logs/charge-server/charge-server-2023-08-29.5.log
# Alternative index patterns:
# index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
# index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
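If log lines need parsing before indexing, an optional filter block can sit between input and output. A sketch, assuming the lines begin with a standard level keyword (this filter is not part of the setup above):

```conf
filter {
  grok {
    # Extract a leading log level such as INFO or ERROR (assumed line format)
    match => { "message" => "%{LOGLEVEL:level}" }
  }
}
```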
Start it in the background on the server:
nohup bin/logstash -f config/logstash.conf > /dev/null 2>&1 &
Or start it in the foreground (startup logs are shown in real time):
bin/logstash -f config/logstash.conf
3. Installing and configuring Kibana
Upload and extract:
tar -zxvf kibana-7.13.2-linux-x86_64.tar.gz
Edit /config/kibana.yml
The default port 5601 is used here. server.host must not be left as localhost, or Kibana will be unreachable from outside; on the last line of the file you can also switch the UI to Chinese.
Like Elasticsearch, Kibana cannot be started as root (hence the --allow-root flag used below).
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
i18n.locale: "zh-CN"
# Elasticsearch hosts
elasticsearch.hosts: ["http://192.xxx.x.211:9200"]
cd kibana-7.13.2-linux-x86_64/bin/
Start it (foreground, or with nohup in the background):
./kibana --allow-root &
nohup ./kibana --allow-root &
Node.js runtime options file (this holds Node flags such as memory limits):
/data/ELK/kibana-7.13.2-linux-x86_64/config/node.options
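node.options takes one Node.js flag per line; for example, to cap Kibana's heap (the value below is only an illustration, not a recommendation from this guide):

```
## Cap the Node.js heap used by Kibana at 1 GB (example value)
--max-old-space-size=1024
```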
4. Installing and configuring Filebeat
Upload and extract:
tar -zxvf filebeat-7.13.2-linux-x86_64.tar.gz
Edit /filebeat-7.13.2-linux-x86_64/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # Path to the service's log files
    - /data/jars/logs/base-server/*.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    log_type: "base-server"
# For multiple services, append more inputs like this:
- type: log
  enabled: true
  paths:
    - /data/jars/logs/finance-server/*.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    log_type: "finance-server"
- type: filestream
  # Change to true to enable this input configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# Logs are pushed to Logstash.
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts: set to the IP of the server where Logstash is installed
  hosts: ["127.0.0.1:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
Start it (the nohup form runs in the background; the second form runs in the foreground):
nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &
./filebeat -e -c filebeat.yml
Problems encountered:
Filebeat would exit on its own after some time, or exit when the SSH connection was closed.
1. Start it in the background:
`nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &`
2. Do not close the terminal window directly; run exit first,
3. then close the SSH client.
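The nohup pattern in steps 1–3 can be demonstrated generically; in this sketch, `sleep 300` is a hypothetical stand-in for the filebeat process:

```shell
# Launch a long-running job immune to hangups, discarding its output
# (`sleep 300` stands in for ./filebeat -e -c filebeat.yml)
nohup sleep 300 > /dev/null 2>&1 &
pid=$!

# kill -0 only probes for existence; the nohup'd job survives `exit`
kill -0 "$pid" && echo "still running as PID $pid"

kill "$pid"    # clean up the demo process
```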
Finally, open ip:5601 in a browser to reach Kibana. Installation complete!