Log Collection / Analysis

Posted by FuShudi on 2024-06-30

Table of Contents
  • EFK
    • 1. Logging systems
    • 2. Deploy ElasticSearch
      • 2.1 Create the headless Service
      • 2.2 Create the StatefulSet
    • 3. Deploy Kibana
    • 4. Deploy ilogtail (docker-compose)
      • 4.1 Write the docker-compose file
      • 4.2 Configure ilogtail collection
      • 4.3 View the logs collected by the container
      • 4.4 Collect container stdout logs (optional)
      • 4.5 View the collected container logs
    • 5. Deploy Kafka
      • 5.1 Kafka introduction
      • 5.2 Deploy Kafka (docker-compose)
      • 5.3 Deploy Kafdrop (Kafka web UI)
      • 5.4 Have ilogtail write logs to Kafka
    • 6. Deploy Logstash
      • 6.1 Deploy Logstash (docker-compose)
      • 6.2 Output logs to es
      • 6.3 View in Kibana
        • 6.3.1 View the indices

EFK

This is a log collection system; log collection is part of the observability stack.

The observability stack:

  • Monitoring

    • Infrastructure dimension
      • USE method
        • CPU:
          • utilization
          • load
        • Memory:
          • utilization
          • saturation
          • error rate
        • NIC:
          • utilization
          • saturation
          • error rate
    • Application dimension
      • RED method
  • Logs

    • Operating-system dimension
    • Application dimension
      • use errors found in logs to further improve monitoring
      • troubleshoot problems through logs
      • behavior analysis
  • Distributed tracing

1. Logging systems

  1. ELK
    • ElasticSearch: log storage system
    • LogStash: log collector
    • Kibana: log query and analysis UI

ELK is not used much any more, because:

  1. JRuby (Java + Ruby)
  2. complex syntax: heavyweight log collection
  3. poor performance

  2. EFK

    • ElasticSearch
    • Fluentd: log collector
    • Kibana
  3. PLG

    • Promtail: log collector
    • Loki: log storage system
    • Grafana: log query and analysis UI

The architecture we'll deploy here is:

ilogtail ---> kafka ---> logstash ---> elasticsearch ---> kibana

ilogtail collects the logs and writes them into the kafka message queue; logstash then reads the logs from the queue and writes them into es, and finally kibana presents them.

As for why the third stage is logstash rather than ilogtail: for ilogtail to write logs into es it would need es authentication credentials configured, but we have not set a username and password on es, so we use logstash instead.

2. Deploy ElasticSearch

2.1 Create the headless Service

[root@master EFK]# vim es-svc.yaml 
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node

2.2 Create the StatefulSet

[root@master EFK]# vim es-sts.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: initc1
          image: busybox
          command: ["sysctl","-w","vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: initc2
          image: busybox
          command: ["sh","-c","ulimit -n 65536"]
          securityContext:
            privileged: true
        - name: initc3
          image: busybox
          command: ["sh","-c","chmod 777 /data"]
          volumeMounts:
          - name: data
            mountPath: /data
      containers:
        - name: elasticsearch
          image: swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/elasticsearch:7.17.1
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: cluster.initial_master_nodes
              value: "es-0"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: discovery.seed_hosts
              value: "elasticsearch"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: network.host
              value: "0.0.0.0"
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Apply the yaml files:

[root@master EFK]# kubectl create ns logging
[root@master EFK]# kubectl apply -f .
service/elasticsearch created
statefulset.apps/es created
[root@master EFK]# kubectl get pods -n logging 
NAME   READY   STATUS    RESTARTS   AGE
es-0   1/1     Running   0          46s

Once the pod shows Running, the deployment is complete.
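
As a quick sanity check (a minimal sketch; it assumes curl is available inside this elasticsearch image), we can query the cluster health API from inside the pod:

[root@master EFK]# kubectl exec -n logging es-0 -- curl -s "http://localhost:9200/_cluster/health?pretty"

A status of green or yellow means the single-node cluster is up and serving requests.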

3. Deploy Kibana

I'll put all the required resources into a single yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: logging
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0.0.0.0"
    i18n.locale: zh-CN
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kibana:7.17.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1
          requests:
            cpu: 1
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200    # use the headless service name
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200    # use the headless service name
        ports:
        - containerPort: 5601
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes: 
      - name: config
        configMap:
          name: kibana-config

Check the port and access it

[root@master EFK]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None            <none>        9200/TCP,9300/TCP   17m
kibana          NodePort    10.104.94.122   <none>        5601:30980/TCP      4m30s

Kibana's NodePort is 30980, so let's access it.
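
If you want to check from the command line first, a hedged sketch like the one below (the node IP 192.168.200.200 and the /api/status endpoint are assumptions for this setup) should return Kibana's status JSON:

[root@master EFK]# curl -s http://192.168.200.200:30980/api/status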

With that, Kibana is deployed. Next we need to deploy a log collection tool.

4. Deploy ilogtail (docker-compose)

Because Fluentd is complex to configure, I use ilogtail here to collect the logs:

  • ilogtail is simple to configure
  • open-sourced by Alibaba, with a Chinese-language interface

We'll deploy it with docker-compose first; once the whole platform is up, we'll move ilogtail into the k8s cluster.

4.1 Write the docker-compose file

[root@master ilogtail]# vim docker-compose.yaml
version: "3"
services:
  ilogtail:
    container_name: ilogtail
    image: sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:2.0.4
    network_mode: host
    volumes:
      - /:/logtail_host:ro
      - /var/run:/var/run
      - ./checkpoint:/usr/local/ilogtail/checkpoint
      - ./config:/usr/local/ilogtail/config/local

  • /: we mount the host's entire / into the container at /logtail_host so the container can read the host's logs
  • checkpoint: this acts like a cursor recording which log line has been read; if ilogtail is restarted it can resume from the checkpoint
  • config: this is where the collection configuration files go

Start the container:

[root@master ilogtail]# docker-compose up -d
[root@master ilogtail]# docker ps |grep ilogtail
eac545d4da87        sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:2.0.4   "/usr/local/ilogtail…"   10 seconds ago      Up 9 seconds                            ilogtail

The container is now running.

4.2 Configure ilogtail collection

[root@master ilogtail]# cd config/
[root@master config]# vim sample-stdout.yaml
enable: true
inputs:
  - Type: input_file          # file input type
    FilePaths: 
      - /logtail_host/var/log/messages
flushers:
  - Type: flusher_stdout    # stdout flusher type
    OnlyStdout: true
[root@master config]# docker restart ilogtail

  • /logtail_host/var/log/messages: the path looks like this because we mounted the host's / at /logtail_host inside the container, so logs produced on the host show up under /logtail_host/var/log/messages inside the container

  • after writing the config file we also need to restart the container so that it picks up the configuration, hence the restart

4.3 View the logs collected by the container

[root@master config]# docker logs ilogtail

2024-06-30 11:16:25 {"content":"Jun 30 19:16:22 master dockerd[1467]: time=\"2024-06-30T19:16:22.251108165+08:00\" level=info msg=\"handled exit event processID=9a8df40981b3609897794e50aeb2bde805eab8a75334266d7b5c2899f61d486e containerID=61770e8f88e3c6a63e88f2a09d2683c6ccce1e13f6d4a5b6f79cc4d49094bab4 pid=125402\" module=libcontainerd namespace=moby","__time__":"1719746182"}
2024-06-30 11:16:25 {"content":"Jun 30 19:16:23 master kubelet[1468]: E0630 19:16:23.594557    1468 kubelet_volumes.go:245] \"There were many similar errors. Turn up verbosity to see them.\" err=\"orphaned pod \\\"9d5ae64f-1341-4c15-b70f-1c8f71efc20e\\\" found, but error not a directory occurred when trying to remove the volumes dir\" numErrs=2","__time__":"1719746184"}

As you can see, the host's logs are being collected; each host log line is wrapped in the content field. If you don't see any output, go inside the container and check a file called ilogtail.LOG instead of using docker logs ilogtail.
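
For example, something along these lines works for inspecting that file without opening a shell in the container (a hedged sketch; the /usr/local/ilogtail/ilogtail.LOG path is an assumption based on the default install layout):

[root@master config]# docker exec ilogtail tail -n 50 /usr/local/ilogtail/ilogtail.LOG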

4.4 Collect container stdout logs (optional)

[root@master config]# cp sample-stdout.yaml docker-stdout.yaml
# To avoid the clutter of both configs writing to stdout at the same time, temporarily disable sample-stdout
[root@master config]# cat sample-stdout.yaml 
enable: false                 # change this to false
inputs:
  - Type: input_file          # file input type
    FilePaths: 
      - /logtail_host/var/log/messages
flushers:
  - Type: flusher_stdout    # stdout flusher type
    OnlyStdout: true
[root@master config]# cat docker-stdout.yaml 
enable: true
inputs:
  - Type: service_docker_stdout        
    Stdout: true                 # collect stdout
    Stderr: false                # do not collect stderr
flushers:
  - Type: flusher_stdout    
    OnlyStdout: true
[root@master config]# docker restart ilogtail 
ilogtail

4.5 View the collected container logs

2024-06-30 11:24:13 {"content":"2024-06-30 11:24:10 {\"content\":\"2024-06-30 11:24:07 {\\\"content\\\":\\\"2024-06-30 11:24:04.965 [INFO][66] felix/summary.go 100: Summarising 12 dataplane reconciliation loops over 1m3.4s: avg=3ms longest=12ms ()\\\",\\\"_time_\\\":\\\"2024-06-30T11:24:04.965893702Z\\\",\\\"_source_\\\":\\\"stdout\\\",\\\"_container_ip_\\\":\\\"192.168.200.200\\\",\\\"_image_name_\\\":\\\"calico/node:v3.23.5\\\",\\\"_container_name_\\\":\\\"calico-node\\\",\\\"_pod_name_\\\":\\\"calico-node-hgqzr\\\",\\\"_namespace_\\\":\\\"kube-system\\\",\\\"_pod_uid_\\\":\\\"4d0d950c-346a-4f81-817c-c19526700542\\\",\\\"__time__\\\":\\\"1719746645\\\"}\",\"_time_\":\"2024-06-30T11:24:07.968118197Z\",\"_source_\":\"stdout\",\"_container_ip_\":\"192.168.200.200\",\"_image_name_\":\"sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:2.0.4\",\"_container_name_\":\"ilogtail\",\"__time__\":\"1719746647\"}","_time_":"2024-06-30T11:24:10.971474647Z","_source_":"stdout","_container_ip_":"192.168.200.200","_image_name_":"sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:2.0.4","_container_name_":"ilogtail","__time__":"1719746650"}

If you can see logs like this, collection is working. Next we deploy kafka to receive ilogtail's logs. Remember to turn this collection off, otherwise your VM's disk will fill up very quickly.

5. Deploy Kafka

As a message queue, kafka has producers and consumers. Here the producer is ilogtail, which writes the logs into kafka; the consumer is logstash, which reads the logs from kafka and writes them into es.

5.1 Kafka introduction

Apache Kafka is a distributed, publish/subscribe based, fault-tolerant messaging system. Its main characteristics are:

  • High throughput, low latency: it can handle throughput on the order of a million messages per second with low latency (most other message queues can do this as well)

  • Durability and reliability: messages are persisted to local disk, data replication protects against loss, and a message retention period can be configured so that consumers can consume messages more than once

  • Kafka does not officially support Docker deployment, so we use a third-party image

5.2 Deploy Kafka (docker-compose)

version: '3'
services:
  zookeeper:
    image: quay.io/3330878296/zookeeper:3.8
    network_mode: host
    container_name: zookeeper-test
    volumes:
      - zookeeper_vol:/data
      - zookeeper_vol:/datalog
      - zookeeper_vol:/logs
  kafka:
    image: quay.io/3330878296/kafka:2.13-2.8.1
    network_mode: host
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: "192.168.200.200"
      KAFKA_ZOOKEEPER_CONNECT: "192.168.200.200:2181"
      KAFKA_LOG_DIRS: "/kafka/logs"
    volumes:
      - kafka_vol:/kafka
    depends_on:
      - zookeeper
volumes:
  zookeeper_vol: {}
  kafka_vol: {}

  • KAFKA_LOG_DIRS: "/kafka/logs": note that in Kafka terminology the data itself is called the log, so although this looks like a log directory it is actually kafka's data directory
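
To bring the stack up and confirm the broker answers, a sketch like the following should work (it assumes the standard Kafka CLI scripts are on the PATH inside this third-party image; adjust the paths if they are not):

[root@master kafka]# docker-compose up -d
[root@master kafka]# docker exec kafka kafka-topics.sh --bootstrap-server 192.168.200.200:9092 --list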

5.3 Deploy Kafdrop (Kafka web UI)

[root@master kafka]# docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=192.168.200.200:9092 \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    quay.io/3330878296/kafdrop

Once it's running, you can use the web UI. The reason for deploying a web UI is that after we write logs into kafka we can check directly in the browser whether they actually made it in, which is more intuitive than the kafka command line.

Open ip:9000 in a browser.

5.4 Have ilogtail write logs to Kafka

[root@master config]# cd /root/ilogtail/config
[root@master config]# cp sample-stdout.yaml kafka.yaml
[root@master config]# vim kafka.yaml
enable: true
inputs:
  - Type: input_file         
    FilePaths:
      - /logtail_host/var/log/messages
flushers:
  - Type: flusher_kafka_v2  
    Brokers:
      - 192.168.200.200:9092
    Topic: KafkaTopic
[root@master config]# docker restart ilogtail
ilogtail

Going back to the web UI now, a topic appears.

Click into it to see which logs have been written.
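
If you prefer the command line over Kafdrop, a hedged alternative (again assuming the Kafka CLI scripts are on the PATH inside the kafka container) is to read a few messages straight from the topic:

[root@master config]# docker exec kafka kafka-console-consumer.sh \
    --bootstrap-server 192.168.200.200:9092 \
    --topic KafkaTopic --from-beginning --max-messages 5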

As long as you can see the logs, everything is working; next we deploy logstash.

6. Deploy Logstash

logstash reads messages from kafka and writes them into es.

6.1 Deploy Logstash (docker-compose)

[root@master ~]# mkdir logstash
[root@master ~]# cd logstash
[root@master logstash]# vim docker-compose.yaml
version: '3'
services:
  logstash:
    image: quay.io/3330878296/logstash:8.10.1
    container_name: logstash
    network_mode: host
    environment:
      LS_JAVA_OPTS: "-Xmx1g -Xms1g"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /apps/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /apps/logstash/pipeline:/usr/share/logstash/pipeline
      - /var/log:/var/log

  • config holds logstash's own configuration file
  • pipeline holds the rules for collecting and outputting logs

With the docker-compose file written, don't start it yet, because the configuration files we are mounting haven't been created.

Now let's write the configuration files:

[root@master logstash]# mkdir /apps/logstash/{config,pipeline}
[root@master logstash]# cd /apps/logstash/config/
[root@master config]# vim logstash.yml 
pipeline.workers: 2
pipeline.batch.size: 10
pipeline.batch.delay: 5
config.reload.automatic: true
config.reload.interval: 60s

Once this file is written, we start the logstash container:

[root@master config]# cd /root/logstash
[root@master logstash]# docker-compose up -d
[root@master logstash]# docker ps |grep logstash
60dfde4df40d        quay.io/3330878296/logstash:8.10.1                                                              "/usr/local/bin/dock…"   2 minutes ago       Up 2 minutes                                 logstash

Once it's up, we're good.

6.2 Output logs to es

Logstash official documentation

To have logstash write logs to es, we need to add some rules under the pipeline directory:

[root@master EFK]# cd /apps/logstash/pipeline/
[root@master pipeline]# vim logstash.conf
input {
  kafka {
    # kafka broker address
    bootstrap_servers => "192.168.200.200:9092"
    # topics to read from; they must already exist
    topics => ["KafkaTopic"]
    # where to start reading; earliest means from the beginning
    auto_offset_reset => "earliest"
    codec => "json"
    # when one logstash has multiple input plugins, it is recommended to give each one an id
    # id => "kubernetes"
    # group_id => "kubernetes"
  }
}


filter {
  json {
    source => "event.original"
  }
  mutate {
    remove_field => ["event.original","event"]
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.200.200:9200"]
    index => "kubernetes-logs-%{+YYYY.MM}"    # MM = month; lowercase mm would be interpreted as minutes
  }
}

  • hosts => ["http://192.168.200.200:9200"]: note the 9200 here: our logstash runs in docker, but es runs inside the k8s cluster, so port 9200 is not reachable directly from the host; we therefore create a NodePort type svc for es so that docker can reach it
[root@master EFK]# kubectl expose pod es-0 --type NodePort --port 9200 --target-port 9200
service/es-0 exposed
[root@master EFK]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None            <none>        9200/TCP,9300/TCP   3h38m
es-0            NodePort    10.97.238.173   <none>        9200:32615/TCP      2s
kibana          NodePort    10.106.1.52     <none>        5601:30396/TCP      3h38m

The NodePort maps 9200 to 32615 on the node, so we point logstash at 32615:

output {
  elasticsearch {
    hosts => ["http://192.168.200.200:32615"]
    index => "kubernetes-logs-%{+YYYY.MM}"
  }
}

Then restart logstash:

[root@master pipeline]# docker restart logstash 
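
Before heading to Kibana, you can confirm the index was created by querying es through the NodePort (a minimal check; the port 32615 comes from the kubectl expose output above and will differ in your environment):

[root@master pipeline]# curl -s "http://192.168.200.200:32615/_cat/indices?v"

An index named kubernetes-logs-<year>.<month> should show up in the list.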

6.3 View in Kibana

6.3.1 View the indices

  1. Click Stack Management

  2. Click Index Management; if you see the index there, everything is working

  3. Click Index Patterns and create an index pattern

  4. Go to Discover
