Setting Up an ELK Cluster with Docker

Published by 小川 on 2021-06-29

Elasticsearch, Kibana, and Logstash Versions

  • Elasticsearch:7.8.1
  • Kibana:7.8.1
  • Logstash:7.8.1

Cluster Layout and Server Specs

  • Elasticsearch cluster: 3 nodes; Kibana: 1 node; Logstash: 1 node
  • Servers: 2 cores, 4 GB RAM, 100 GB SSD data disk

I. Download and Install Docker

Uninstall old versions

sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

Set up the repository

1. Install the required dependencies

sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

2. Add the stable repository

sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Install Docker

1. List the available versions, then install a specific version

yum list docker-ce --showduplicates | sort -r

sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
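
For example, assuming the list above shows a version such as 19.03.9 (a hypothetical pick; substitute whatever actually appears in your output), the install command would look like this:

# Example only: replace 19.03.9 with a version string from the list above
sudo yum install docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io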

2. Or install the latest version directly

sudo yum install docker-ce docker-ce-cli containerd.io

3. Start Docker

sudo systemctl start docker

4. Run hello-world to verify that Docker installed successfully

sudo docker run hello-world

Uninstall Docker

1. Remove the Docker packages

sudo yum remove docker-ce docker-ce-cli containerd.io

2. Delete images, containers, volumes, and custom configuration files

sudo rm -rf /var/lib/docker

Configure a Docker registry mirror

1. Create or edit the /etc/docker/daemon.json file and choose a suitable mirror

vim /etc/docker/daemon.json

# docker-cn mirror
{
   "registry-mirrors": [
       "https://registry.docker-cn.com"
  ]
}

# Tencent Cloud mirror
{
   "registry-mirrors": [
       "https://mirror.ccs.tencentyun.com"
  ]
}

2. Run the following commands in order to restart the Docker service

sudo systemctl daemon-reload

sudo systemctl restart docker

3. Check that the mirror has taken effect

# Run the docker info command
docker info

# If the output contains the following, the mirror is configured correctly
Registry Mirrors:
  https://mirror.ccs.tencentyun.com

II. Install Elasticsearch

Install and run a single node

1. Pull the image

docker pull elasticsearch:7.8.1

2. Run a single node for the first time

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.8.1
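
Once the container is up, a request to port 9200 on the host should return the node's basic information (a quick sanity check, assuming the command above was run on the same machine):

curl -X GET "localhost:9200"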

Map the directories

1. Create the /data directory and mount the SSD

mkdir /data
fdisk -u /dev/vdb
mkfs.ext4  /dev/vdb1
cp /etc/fstab /etc/fstab.bak
echo "/dev/vdb1 /data/ ext4 defaults 0 0" >> /etc/fstab
mount /dev/vdb1 /data
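
A quick check, not part of the original steps, that the data disk is mounted where expected:

df -h /data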

2. Create the Elasticsearch data and log directories (the per-node directories under data and logs are not created here, so ES may fail to start with a permissions error; fix it with chmod)

mkdir -p /data/elasticsearch/data
mkdir -p /data/elasticsearch/logs

3. Grant read/write permissions on the Elasticsearch directories

chmod -R 777 /data/elasticsearch

Start three nodes with Docker Compose

1. Install docker-compose

# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make the binary executable
sudo chmod +x /usr/local/bin/docker-compose

# Create a symlink so docker-compose is on the PATH
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
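
A quick sanity check that the binary is installed and on the PATH:

docker-compose --version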

2. Create a docker-compose.yml file

version: '2.2'
services:
  es01:
    image: elasticsearch:7.8.1
    container_name: es01
    restart: always
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xmx256m -Xmx256m"
    ulimits:
      nofile:
        soft: 650000
        hard: 655360
      as:
        soft: -1
        hard: -1
      nproc:
        soft: -1
        hard: -1
      fsize:
        soft: -1
        hard: -1
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es01:/usr/share/elasticsearch/data
      - ./logs/es01:/usr/share/elasticsearch/logs
    ports:
      - 9201:9200
    networks:
      - elastic
  es02:
    image: elasticsearch:7.8.1
    container_name: es02
    restart: always
    depends_on:
      - es01
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xmx256m -Xmx256m"
    ulimits:
      nofile:
        soft: 650000
        hard: 655360
      as:
        soft: -1
        hard: -1
      nproc:
        soft: -1
        hard: -1
      fsize:
        soft: -1
        hard: -1
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es02:/usr/share/elasticsearch/data
      - ./logs/es02:/usr/share/elasticsearch/logs
    ports:
      - 9202:9200
    networks:
      - elastic
  es03:
    image: elasticsearch:7.8.1
    container_name: es03
    restart: always
    depends_on:
      - es01
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xmx256m -Xmx256m"
    ulimits:
      nofile:
        soft: 650000
        hard: 655360
      as:
        soft: -1
        hard: -1
      nproc:
        soft: -1
        hard: -1
      fsize:
        soft: -1
        hard: -1
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es03:/usr/share/elasticsearch/data
      - ./logs/es03:/usr/share/elasticsearch/logs
    ports:
      - 9203:9200
    networks:
      - elastic

networks:
  elastic:
    driver: bridge

3. Set vm.max_map_count on the host

# Make the setting permanent
echo "vm.max_map_count = 655300" >>/etc/sysctl.conf

# Apply /etc/sysctl.conf immediately
sysctl -p
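
To confirm the kernel has picked up the new value:

sysctl vm.max_map_count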

4. Start and test the cluster

# Make sure Docker has at least 4 GB of memory allocated

# Start the cluster
docker-compose up

# Check whether the nodes are up (es01 is published on host port 9201)
curl -X GET "localhost:9201/_cat/nodes?v&pretty"

Encrypting communications in the Elasticsearch Docker containers

1. Create the configuration files

instances.yml identifies the instances for which certificates will be created

instances:
  - name: es01
    dns:
      - es01
      - localhost
    ip:
      - 127.0.0.1

  - name: es02
    dns:
      - es02
      - localhost
    ip:
      - 127.0.0.1

  - name: es03
    dns:
      - es03
      - localhost
    ip:
      - 127.0.0.1

.env sets environment variables that are referenced in the compose files

COMPOSE_PROJECT_NAME=es
CERTS_DIR=/usr/share/elasticsearch/config/certificates
VERSION=7.8.1

create-certs.yml is a Docker Compose file that starts a container to generate the Elasticsearch certificates

version: '2.2'

services:
  create_certs:
    image: elasticsearch:${VERSION}
    container_name: create_certs
    command: >
      bash -c '
        yum install -y -q -e 0 unzip;
        if [[ ! -f /certs/bundle.zip ]]; then
          bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip;
          unzip /certs/bundle.zip -d /certs;
        fi;
        chown -R 1000:0 /certs
      '
    working_dir: /usr/share/elasticsearch
    volumes:
      - certs:/certs
      - .:/usr/share/elasticsearch/config/certificates
    networks:
      - elastic

volumes:
  certs:
    driver: local

networks:
  elastic:
    driver: bridge

Modify the docker-compose.yml file

version: '2.2'
services:
  es01:
    image: elasticsearch:${VERSION}
    container_name: es01
    restart: always
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
    ulimits:
      nofile:
        soft: 650000
        hard: 655360
      as:
        soft: -1
        hard: -1
      nproc:
        soft: -1
        hard: -1
      fsize:
        soft: -1
        hard: -1
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es01:/usr/share/elasticsearch/data
      - ./logs/es01:/usr/share/elasticsearch/logs
      - certs:$CERTS_DIR
    ports:
      - 9201:9200
    networks:
      - elastic
  es02:
    image: elasticsearch:${VERSION}
    container_name: es02
    restart: always
    depends_on:
      - es01
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
    ulimits:
      nofile:
        soft: 650000
        hard: 655360
      as:
        soft: -1
        hard: -1
      nproc:
        soft: -1
        hard: -1
      fsize:
        soft: -1
        hard: -1
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es02:/usr/share/elasticsearch/data
      - ./logs/es02:/usr/share/elasticsearch/logs
      - certs:$CERTS_DIR
    ports:
      - 9202:9200
    networks:
      - elastic
  es03:
    image: elasticsearch:${VERSION}
    container_name: es03
    restart: always
    depends_on:
      - es01
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key
    ulimits:
      nofile:
        soft: 650000
        hard: 655360
      as:
        soft: -1
        hard: -1
      nproc:
        soft: -1
        hard: -1
      fsize:
        soft: -1
        hard: -1
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/es03:/usr/share/elasticsearch/data
      - ./logs/es03:/usr/share/elasticsearch/logs
      - certs:$CERTS_DIR
    ports:
      - 9203:9200
    networks:
      - elastic

volumes:
  certs:
    driver: local

networks:
  elastic:
    driver: bridge

2. Make sure the Docker Engine has at least 2 GB of memory allocated

3. Generate the Elasticsearch certificates

docker-compose -f create-certs.yml run --rm create_certs
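
The generated files land in the named certs volume; with COMPOSE_PROJECT_NAME=es, Compose will typically name it es_certs (the exact name is an assumption based on the project name). A quick way to confirm it was created:

# The volume is usually named es_certs (project name "es" + volume "certs")
docker volume ls | grep certs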

4. Build and start the Elasticsearch cluster

docker-compose up -d

5. Generate passwords for the built-in users with elasticsearch-setup-passwords

# Enter the es01 container
docker exec -it es01 /bin/bash

# Generate passwords for the built-in users
./bin/elasticsearch-setup-passwords auto
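
Once security is enabled and the passwords are set, requests from the host need credentials; a quick check (YOUR_ELASTIC_PASSWORD is a placeholder for the password printed for the elastic user):

# Replace YOUR_ELASTIC_PASSWORD with the generated elastic password
curl -u elastic:YOUR_ELASTIC_PASSWORD -X GET "localhost:9201/_cat/nodes?v&pretty"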

III. Install Kibana

Install and run a single node

1. Pull the image

docker pull kibana:7.8.1

2. Run a single node for the first time

docker run --link YOUR_ELASTICSEARCH_CONTAINER_NAME_OR_ID:elasticsearch -p 5601:5601 kibana:7.8.1

Map the directories

1. Create the /data directory and mount the SSD

mkdir /data
fdisk -u /dev/vdb
mkfs.ext4  /dev/vdb1
cp /etc/fstab /etc/fstab.bak
echo "/dev/vdb1 /data/ ext4 defaults 0 0" >> /etc/fstab
mount /dev/vdb1 /data

2. Create the Kibana configuration directory

mkdir -p /data/kibana/config

3. Grant read/write permissions on the Kibana directories

chmod -R 777 /data/kibana

Start the node with Docker Compose

1. Install docker-compose

# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make the binary executable
sudo chmod +x /usr/local/bin/docker-compose

# Create a symlink so docker-compose is on the PATH
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

2. Create a docker-compose.yml file

version: '2'
services:
  kibana:
    image: kibana:7.8.1
    container_name: kibana
    volumes:
      - ./config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601

3. config/kibana.yml

# HTTP port
server.port: 5601

# HTTP bind address; reachable on both the internal and public IPs
server.host: "0.0.0.0"

# Elasticsearch node address (the host machine's internal IP, or 127.0.0.1)
elasticsearch.hosts: "http://127.0.0.1:9201"

## Elasticsearch username and password
elasticsearch.username: "elastic"
elasticsearch.password: "elastic_password"

# Kibana web UI locale (Simplified Chinese)
i18n.locale: "zh-CN"

4. Start and test the node

# Start the node
docker-compose up

# Check whether the node is up
curl -X GET "localhost:5601"

IV. Install Logstash

Installation

1. Pull the image

docker pull logstash:7.8.1

Map the directories

1. Create the /data directory and mount the SSD

mkdir /data
fdisk -u /dev/vdb
mkfs.ext4  /dev/vdb1
cp /etc/fstab /etc/fstab.bak
echo "/dev/vdb1 /data/ ext4 defaults 0 0" >> /etc/fstab
mount /dev/vdb1 /data

2. Create the Logstash configuration, pipeline, log, and driver directories

mkdir -p /data/logstash/config
mkdir -p /data/logstash/pipeline
mkdir -p /data/logstash/logs
mkdir -p /data/logstash/last-run-metadata
mkdir -p /data/logstash/java

3. Download the mysql-connector-java.jar driver that matches your MySQL version

  • MySQL version: 5.7.20-log
  • Driver version: mysql-connector-java-5.1.48.tar.gz (any other recent 5.1.* release also works)
  • Official download: dev.mysql.com/downloads/connector/... (click "Looking for previous GA versions?" to pick an older release)
  • OS compatibility: choose the Platform Independent package

Upload the mysql-connector-java.jar driver

mv mysql-connector-java-5.1.48-bin.jar /data/logstash/java

4. Grant read/write permissions on the Logstash directories

chmod -R 777 /data/logstash

Start the node with Docker Compose

1. Install docker-compose

# Installation steps omitted (same as above)...

2. Create a docker-compose.yml file

version: '2'
services:
  logstash:
    image: logstash:7.8.1
    container_name: logstash
    volumes:
      - ./config/:/usr/share/logstash/config/
      - ./pipeline/:/usr/share/logstash/pipeline/
      - ./logs/:/usr/share/logstash/logs/
      - ./last-run-metadata/:/usr/share/logstash/last-run-metadata/
      - ./java/:/usr/share/logstash/java/

3. config/logstash.yml

# Enable automatic config reload
config.reload.automatic: true
# Config reload interval
config.reload.interval: 3s

# Persistent queue
queue.type: persisted
# Durability control
queue.checkpoint.writes: 1
# Dead letter queue
dead_letter_queue.enable: true

# Enable Logstash node monitoring
xpack.monitoring.enabled: true
# Elasticsearch username and password
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: elasticpassword
# Elasticsearch node addresses (the host machine's internal IP, or 127.0.0.1)
xpack.monitoring.elasticsearch.hosts: ["127.0.0.1:9201", "127.0.0.1:9202", "127.0.0.1:9203"]
# Discover the other nodes of the Elasticsearch cluster (must stay disabled when ports other than 9200 are used)
# xpack.monitoring.elasticsearch.sniffing: true
# How often monitoring data is sent
xpack.monitoring.collection.interval: 10s
# Collect pipeline monitoring details
xpack.monitoring.collection.pipeline.details.enabled: true

4. config/pipelines.yml

# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/*.conf"
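
The mounted pipeline/ directory is where the pipeline definitions referenced above go. As a minimal sketch of how the uploaded MySQL driver can be wired in with the jdbc input plugin, assuming a hypothetical database mydb with a table my_table (host addresses, credentials, and the index name are placeholders, not values from this article), a pipeline/mysql.conf could look like this:

input {
  jdbc {
    # Driver jar uploaded to the mounted java/ directory earlier
    jdbc_driver_library => "/usr/share/logstash/java/mysql-connector-java-5.1.48-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # Placeholders: replace with your MySQL host, database, and credentials
    jdbc_connection_string => "jdbc:mysql://MYSQL_HOST:3306/mydb"
    jdbc_user => "root"
    jdbc_password => "YOUR_MYSQL_PASSWORD"
    # Run every minute and only pick up rows newer than the last run
    schedule => "* * * * *"
    statement => "SELECT * FROM my_table WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    last_run_metadata_path => "/usr/share/logstash/last-run-metadata/my_table"
  }
}

output {
  elasticsearch {
    # Placeholder host; es01 is published on the host's port 9201 in this setup
    hosts => ["ES_HOST:9201"]
    user => "elastic"
    password => "YOUR_ELASTIC_PASSWORD"
    index => "my_table"
  }
}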

5. config/log4j2.properties (refer to the official configuration for the latest version; if no configuration file is present, no logs are written by default)

status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]} %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]} %m%n
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
appender.rolling.avoid_pipelined_filter.type = ScriptFilter
appender.rolling.avoid_pipelined_filter.script.type = Script
appender.rolling.avoid_pipelined_filter.script.name = filter_no_pipelined
appender.rolling.avoid_pipelined_filter.script.language = JavaScript
appender.rolling.avoid_pipelined_filter.script.value = ${sys:ls.pipeline.separate_logs} == false || !(logEvent.getContextData().containsKey("pipeline.id"))

appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling.policies.size.size = 100MB
appender.json_rolling.strategy.type = DefaultRolloverStrategy
appender.json_rolling.strategy.max = 30
appender.json_rolling.avoid_pipelined_filter.type = ScriptFilter
appender.json_rolling.avoid_pipelined_filter.script.type = Script
appender.json_rolling.avoid_pipelined_filter.script.name = filter_no_pipelined
appender.json_rolling.avoid_pipelined_filter.script.language = JavaScript
appender.json_rolling.avoid_pipelined_filter.script.value = ${sys:ls.pipeline.separate_logs} == false || !(logEvent.getContextData().containsKey("pipeline.id"))

appender.routing.type = Routing
appender.routing.name = pipeline_routing_appender
appender.routing.routes.type = Routes
appender.routing.routes.script.type = Script
appender.routing.routes.script.name = routing_script
appender.routing.routes.script.language = JavaScript
appender.routing.routes.script.value = logEvent.getContextData().containsKey("pipeline.id") ? logEvent.getContextData().getValue("pipeline.id") : "sink";
appender.routing.routes.route_pipelines.type = Route
appender.routing.routes.route_pipelines.rolling.type = RollingFile
appender.routing.routes.route_pipelines.rolling.name = appender-${ctx:pipeline.id}
appender.routing.routes.route_pipelines.rolling.fileName = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.log
appender.routing.routes.route_pipelines.rolling.filePattern = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.%i.log.gz
appender.routing.routes.route_pipelines.rolling.layout.type = PatternLayout
appender.routing.routes.route_pipelines.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.routing.routes.route_pipelines.rolling.policy.type = SizeBasedTriggeringPolicy
appender.routing.routes.route_pipelines.rolling.policy.size = 100MB
appender.routing.routes.route_pipelines.strategy.type = DefaultRolloverStrategy
appender.routing.routes.route_pipelines.strategy.max = 30
appender.routing.routes.route_sink.type = Route
appender.routing.routes.route_sink.key = sink
appender.routing.routes.route_sink.null.type = Null
appender.routing.routes.route_sink.null.name = drop-appender

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
rootLogger.appenderRef.routing.ref = pipeline_routing_appender

# Slowlog

appender.console_slowlog.type = Console
appender.console_slowlog.name = plain_console_slowlog
appender.console_slowlog.layout.type = PatternLayout
appender.console_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console_slowlog.type = Console
appender.json_console_slowlog.name = json_console_slowlog
appender.json_console_slowlog.layout.type = JSONLayout
appender.json_console_slowlog.layout.compact = true
appender.json_console_slowlog.layout.eventEol = true

appender.rolling_slowlog.type = RollingFile
appender.rolling_slowlog.name = plain_rolling_slowlog
appender.rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_slowlog.policies.type = Policies
appender.rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_slowlog.policies.time.interval = 1
appender.rolling_slowlog.policies.time.modulate = true
appender.rolling_slowlog.layout.type = PatternLayout
appender.rolling_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_slowlog.policies.size.size = 100MB
appender.rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.rolling_slowlog.strategy.max = 30

appender.json_rolling_slowlog.type = RollingFile
appender.json_rolling_slowlog.name = json_rolling_slowlog
appender.json_rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling_slowlog.policies.type = Policies
appender.json_rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.time.interval = 1
appender.json_rolling_slowlog.policies.time.modulate = true
appender.json_rolling_slowlog.layout.type = JSONLayout
appender.json_rolling_slowlog.layout.compact = true
appender.json_rolling_slowlog.layout.eventEol = true
appender.json_rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.size.size = 100MB
appender.json_rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.json_rolling_slowlog.strategy.max = 30

logger.slowlog.name = slowlog
logger.slowlog.level = trace
logger.slowlog.appenderRef.console_slowlog.ref = ${sys:ls.log.format}_console_slowlog
logger.slowlog.appenderRef.rolling_slowlog.ref = ${sys:ls.log.format}_rolling_slowlog
logger.slowlog.additivity = false

logger.licensereader.name = logstash.licensechecker.licensereader
logger.licensereader.level = error

# Deprecation log
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_plain_rolling
appender.deprecation_rolling.fileName = ${sys:ls.logs}/logstash-deprecation.log
appender.deprecation_rolling.filePattern = ${sys:ls.logs}/logstash-deprecation-%d{yyyy-MM-dd}-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.deprecation_rolling.policies.time.interval = 1
appender.deprecation_rolling.policies.time.modulate = true
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]} %m%n
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 100MB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 30

logger.deprecation.name = org.logstash.deprecation, deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation.additivity = false

logger.deprecation_root.name = deprecation
logger.deprecation_root.level = WARN
logger.deprecation_root.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation_root.additivity = false

6. Start and test the node

# Start the node
docker-compose up

# Check whether the node is up
tail -fn 100 logs/logstash-plain.log