Kibana and Logstash Installation and Configuration

Published by Xiaochuan on 2021-06-29

Elasticsearch, Kibana, and Logstash versions

  • Elasticsearch: 7.2.0
  • Kibana: 7.2.0
  • Logstash: 7.2.0

Kibana and Logstash share a single server

  • Server spec: 2 CPU cores, 4 GB RAM, 40 GB SSD system disk

Here we move Kibana off the Elasticsearch nodes onto this server and install it via RPM.

I. Install Kibana (RPM)

Download and install the public signing key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the yum repository configuration

vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Kibana

yum install kibana

II. Kibana configuration and start on boot

kibana.yml

vim /etc/kibana/kibana.yml
# HTTP port
server.port: 5601

# Bind address; 0.0.0.0 accepts connections on internal and external IPs alike
server.host: "0.0.0.0"

# Elasticsearch node addresses (multiple hosts are allowed, but all must belong to the same cluster)
elasticsearch.hosts: ["http://172.18.112.10:9200"]

# Elasticsearch username and password
elasticsearch.username: "elastic"
elasticsearch.password: "elasticpassword"

# Kibana web UI locale (Simplified Chinese)
i18n.locale: "zh-CN"

Enable start on boot

systemctl daemon-reload
systemctl enable kibana.service

Start and stop

systemctl start kibana.service
systemctl stop kibana.service
systemctl status kibana.service
systemctl restart kibana.service

Open the Kibana site

http://<server-ip>:5601
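Besides the browser, you can verify Kibana is reachable from a script. A minimal sketch using only the Python standard library; the `/api/status` endpoint and its response shape are taken from Kibana 7.x, and the host/port are the values configured above.

```python
import json
import urllib.request

def kibana_is_up(base_url, timeout=3):
    """Return True if Kibana's status API reports an overall 'green' state."""
    try:
        with urllib.request.urlopen(base_url + "/api/status", timeout=timeout) as resp:
            status = json.load(resp)
            # Kibana 7.x reports {"status": {"overall": {"state": "green"}}, ...}
            return status.get("status", {}).get("overall", {}).get("state") == "green"
    except (OSError, ValueError):
        # Connection refused, timeout, or non-JSON response
        return False

# Example: kibana_is_up("http://172.18.112.13:5601")
```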

III. Kibana directory layout

Type      Description                                                                                     Default Location
home      Kibana home directory, also $KIBANA_HOME                                                        /usr/share/kibana
bin       Binary scripts, including the Kibana startup script and kibana-plugin for installing plugins    /usr/share/kibana/bin
config    Configuration files, including kibana.yml                                                       /etc/kibana
data      Data files written to disk by Kibana and its plugins                                            /var/lib/kibana
optimize  Transpiled source code; some administrative actions (e.g. plugin install) retranspile it on the fly   /usr/share/kibana/optimize
plugins   Plugin files; each plugin lives in its own subdirectory                                         /usr/share/kibana/plugins
IV. Common index operations (Kibana Dev Tools)

# Create a new index and initialize its field mappings
PUT index_t_settlement_info
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "long"
      }
    }
  }
}
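number_of_shards is fixed at index creation because Elasticsearch routes each document to a primary shard by, in simplified form, hash(routing) % number_of_shards, where the routing value defaults to the document id. A sketch of that idea with a stand-in hash (Elasticsearch actually uses murmur3 plus a routing factor); it shows why changing the shard count would remap existing documents.

```python
import zlib

def shard_for(doc_id, number_of_shards):
    """Simplified routing: hash the routing value (default: the document id)
    and take it modulo the primary shard count. crc32 is a stand-in hash,
    not the murmur3 Elasticsearch really uses."""
    return zlib.crc32(str(doc_id).encode()) % number_of_shards

# With 5 primary shards (the settings above), each id maps to a fixed shard.
# Recompute with a different shard count and many ids land elsewhere, which
# is why number_of_shards cannot be changed on a live index.
placement_5 = {doc: shard_for(doc, 5) for doc in range(10)}
placement_6 = {doc: shard_for(doc, 6) for doc in range(10)}
```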

# Create an index alias
POST /_aliases
{
    "actions": [
        {"add": {"index":"index_t_settlement_info","alias":"t_settlement_info"}}
    ]
}

# Switch the alias to another index (this operation is atomic)
POST /_aliases
{
    "actions": [
        {"remove": {"index":"index_t_settlement_info","alias":"t_settlement_info"}},
        {"add": {"index":"new_index_t_settlement_info","alias":"t_settlement_info"}}
    ]
}
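The remove+add pair above is applied as one atomic step, so a reader resolving the alias always sees exactly one index, never zero or two. A hypothetical in-memory model of that guarantee (not the Elasticsearch API):

```python
def apply_alias_actions(aliases, actions):
    """Apply remove/add actions to a copy of the alias table, then publish
    the whole table at once, mimicking how _aliases applies all actions
    as a single atomic change."""
    updated = dict(aliases)  # work on a copy; readers still see the old table
    for action in actions:
        for op, args in action.items():
            if op == "remove":
                updated.pop(args["alias"], None)
            elif op == "add":
                updated[args["alias"]] = args["index"]
    return updated  # old -> new swapped in a single step

aliases = {"t_settlement_info": "index_t_settlement_info"}
aliases = apply_alias_actions(aliases, [
    {"remove": {"index": "index_t_settlement_info", "alias": "t_settlement_info"}},
    {"add": {"index": "new_index_t_settlement_info", "alias": "t_settlement_info"}},
])
# The alias now points at the new index; at no point did it point at neither.
```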

# Add a field to an existing index (existing fields cannot be modified; changing a type requires reindexing)
PUT index_t_settlement_info/_mapping
{
  "properties": {
    "user_id": {
      "type": "keyword"
    }
  }
}

I. Install Logstash (RPM)

Download and install the public signing key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the yum repository configuration

vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Logstash

yum install logstash

II. Logstash configuration and start on boot

logstash.yml

vim /etc/logstash/logstash.yml
# Enable automatic config reload
config.reload.automatic: true
# How often to check for config changes
config.reload.interval: 3s

# Persistent queue
queue.type: persisted
# Durability: checkpoint after every written event
queue.checkpoint.writes: 1
# Dead letter queue
dead_letter_queue.enable: true

# Enable Logstash node monitoring
xpack.monitoring.enabled: true
# Elasticsearch username and password
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: elasticpassword
# Elasticsearch node address list
xpack.monitoring.elasticsearch.hosts: ["172.18.112.10", "172.18.112.11", "172.18.112.12"]
# Discover the other nodes in the Elasticsearch cluster
xpack.monitoring.elasticsearch.sniffing: true
# How often to send monitoring data
xpack.monitoring.collection.interval: 10s
# Collect per-pipeline monitoring details
xpack.monitoring.collection.pipeline.details.enabled: true

Enable start on boot

systemctl daemon-reload
systemctl enable logstash.service

Start and stop

systemctl start logstash.service
systemctl stop logstash.service
systemctl status logstash.service
systemctl restart logstash.service

III. Logstash directory layout

Type      Description                                                                                          Default Location
home      Logstash home directory                                                                              /usr/share/logstash
bin       Binary scripts, including the Logstash startup script and logstash-plugin for installing plugins     /usr/share/logstash/bin
settings  Configuration files, including logstash.yml, jvm.options, and startup.options                       /etc/logstash
config    Logstash pipeline configuration files                                                                /etc/logstash/conf.d/*.conf
logs      Log files                                                                                            /var/log/logstash
plugins   Local, non-Ruby-Gem plugin files; each plugin in its own subdirectory. Recommended for development only   /usr/share/logstash/plugins
data      Data files used by Logstash and its plugins for any persistence needs                                /var/lib/logstash

IV. Importing MySQL data into Elasticsearch

1. Download and install the MySQL JDBC driver

Download a mysql-connector-java.jar driver compatible with your MySQL version

  • MySQL version: 5.7.20-log
  • Driver version: mysql-connector-java-5.1.48.tar.gz (any recent 5.1.* release works)
  • Official download: dev.mysql.com/downloads/connector/... (click "Looking for previous GA versions?" for older releases)
  • Operating system: choose the "Platform Independent" build

Create a directory for the driver jar

mkdir /usr/share/logstash/java

Upload the mysql-connector-java.jar driver

mv mysql-connector-java-5.1.48-bin.jar /usr/share/logstash/java

Change the owner of the java directory and everything under it

chown -R logstash:logstash /usr/share/logstash/java

2. Task configuration (location: /etc/logstash/conf.d/*.conf)

  • One config file per MySQL-to-Elasticsearch import task
  • After editing conf.d/*.conf there is no need to restart Logstash; it reloads configs automatically (every 3 seconds, per config.reload.interval)

Create a t_settlement_info directory to keep this task's configs separate

mkdir /etc/logstash/conf.d/t_settlement_info

Create the t_settlement_info.conf config

vim /etc/logstash/conf.d/t_settlement_info/t_settlement_info.conf
input {
  jdbc {
    id => "t_settlement_info.input_jdbc"
    # Path to the JDBC driver (mkdir /usr/share/logstash/java)
    jdbc_driver_library => "/usr/share/logstash/java/mysql-connector-java-5.1.48-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # Database connection settings
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/database"
    jdbc_user => "mysql_user"
    jdbc_password => "mysql_password"
    # Cron-style schedule; here the job runs once a minute
    schedule => "* * * * *"
    # SQL statement to execute
    statement => "SELECT * FROM t_settlement_info WHERE id > :sql_last_value ORDER BY id LIMIT 10000"
    # Whether to discard the previous run state
    clean_run => false
    # Enable value tracking; if true, tracking_column must be set (the default is to track the timestamp)
    use_column_value => true
    # Column to track; here it is id
    tracking_column => "id"
    # Type of the tracked column: numeric or timestamp (default: numeric)
    tracking_column_type => "numeric"
    # Record the result of the last run
    record_last_run => true
    # Where to store that state (mkdir /usr/share/logstash/last-run-metadata)
    last_run_metadata_path => "/usr/share/logstash/last-run-metadata/.logstash_jdbc_last_run.t_settlement_info"
    # Whether to lowercase column names; unnecessary when columns are already lowercase
    lowercase_column_names => false
  }
}

output {
  elasticsearch {
    id => "t_settlement_info.output_elasticsearch"
    hosts => ["172.18.112.10","172.18.112.11","172.18.112.12"]
    index => "t_settlement_info"
    action => "update"
    doc_as_upsert => true
    document_id => "%{id}"
    user => "elastic"
    password => "elasticpassword"
  }
}
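Putting the input and output together: each scheduled run selects rows whose id exceeds the saved :sql_last_value, upserts them by document_id, and records the highest id seen for the next run. A hypothetical in-memory sketch of that loop (the table, index, and helper are invented for illustration; this is not Logstash code):

```python
def sync_once(rows, index, last_value):
    """One scheduled run: fetch rows with id > last_value (the SQL statement
    above), upsert each by its id (action=update with doc_as_upsert), and
    return the new tracking value to persist in last_run_metadata."""
    batch = sorted((r for r in rows if r["id"] > last_value),
                   key=lambda r: r["id"])[:10000]  # ORDER BY id LIMIT 10000
    for row in batch:
        index[row["id"]] = row  # upsert: insert or overwrite by document_id
    return batch[-1]["id"] if batch else last_value

# Two runs over a growing table: only new rows are fetched the second time.
table = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
index, last = {}, 0
last = sync_once(table, index, last)    # picks up ids 1 and 2
table.append({"id": 3, "amount": 30})
last = sync_once(table, index, last)    # picks up only id 3
```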

Create the last-run-metadata directory so each job records its last tracked value in its own file (by default Logstash uses a single shared file)

# Create the directory
mkdir /usr/share/logstash/last-run-metadata

# Change the directory owner
chown -R logstash:logstash /usr/share/logstash/last-run-metadata

3. Pipeline configuration (location: /etc/logstash/pipelines.yml)

  • After editing pipelines.yml there is no need to restart Logstash; it reloads automatically (every 3 seconds)

Custom pipeline configuration

vim /etc/logstash/pipelines.yml
# Default pipeline: all tasks share one queue and compete for execution order. Temporarily disabled; re-enable it for tasks where throughput is not a concern
#- pipeline.id: main
#  path.config: "/etc/logstash/conf.d/*.conf"

# One task per dedicated pipeline and queue
- pipeline.id: t_settlement_info_11
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_11.conf"

# One task per dedicated pipeline and queue
- pipeline.id: t_settlement_info_22
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_22.conf"

# One task per dedicated pipeline and queue
- pipeline.id: t_settlement_info_33
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_33.conf"

This work is licensed under a CC license; reposts must credit the author and link to this article.
