Log Analysis System: Deploying an ElasticSearch Cluster on K8s

Posted by wubolive on 2022-01-05


1. Prerequisites

1.1 Create the elastic namespace

The Namespace manifest is as follows:

elastic.namespace.yaml 
---
apiVersion: v1
kind: Namespace
metadata:
   name: elastic
---

Create the elastic namespace:

$ kubectl apply -f elastic.namespace.yaml
namespace/elastic created

1.2 Generate the X-Pack certificate files

ElasticSearch ships with elasticsearch-certutil, a tool for generating certificates. We can generate them in a temporary Docker container first, copy them out, and reuse them everywhere later.

1.2.1 Create a temporary ES container

$ docker run -it -d --name elastic-cret docker.elastic.co/elasticsearch/elasticsearch:7.8.0 /bin/bash
62acfabc85f220941fcaf08bc783c4e305813045683290fe7b15f95e37e70cd0

1.2.2 Generate the key files inside the container

$ docker exec -it elastic-cret /bin/bash
$ ./bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: 
Enter password for elastic-stack-ca.p12 : 

$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
......

Enter password for CA (elastic-stack-ca.p12) : 
Please enter the desired output file [elastic-certificates.p12]: 
Enter password for elastic-certificates.p12 : 

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.

$ ls *.p12
elastic-certificates.p12  elastic-stack-ca.p12

Note: none of the prompts above need to be filled in; just press Enter at each one.

1.2.3 Copy the certificate file out of the container for later use

$ docker cp elastic-cret:/usr/share/elasticsearch/elastic-certificates.p12 .
$ docker rm -f elastic-cret

2. Create the master node

Create the master node, which controls the whole cluster. The manifests are as follows:

2.1 Configure data persistence for the master node

# Create the manifest
elasticsearch-master.pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-elasticsearch-master
  namespace: elastic
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-client  # specify the StorageClass to use here
  resources:
    requests:
      storage: 10Gi
      
# Create the PVC
$ kubectl apply -f elasticsearch-master.pvc.yaml
$ kubectl get pvc -n elastic
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-elasticsearch-master    Bound    pvc-9ef037b7-c4b2-11ea-8237-ac1f6bd6d98e   10Gi       RWX            nfs-client     38d

Copy the previously generated certificate file into a certs directory inside the PVC just created, for example:

$ mkdir ${MASTER-PVC_HOME}/certs
$ cp elastic-certificates.p12 ${MASTER-PVC_HOME}/certs/

2.2 Create the master node ConfigMap manifest

The ConfigMap object holds the master node's cluster configuration, making it easy to configure ElasticSearch and enable X-Pack security. The resource object is as follows:

elasticsearch-master.configmap.yaml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: true
      data: false
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
---

2.3 Create the master node Service manifest

The master node only needs port 9300, used for intra-cluster transport. The manifest is as follows:

elasticsearch-master.service.yaml 
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---

2.4 Create the master node Deployment manifest

The Deployment defines the master node's application Pod. Note that it reads ELASTIC_USERNAME and ELASTIC_PASSWORD from a Secret named elastic-credentials, which must already exist in the elastic namespace (for example: kubectl create secret generic elastic-credentials --from-literal=username=elastic --from-literal=password=<password> -n elastic). The manifest is as follows:

elasticsearch-master.deployment.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: ES_JAVA_OPTS
          value: "-Xms2048m -Xmx2048m"
        - name: ELASTIC_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-master-config
      - name: storage
        persistentVolumeClaim:
          claimName: pvc-elasticsearch-master
---
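One caveat: Elasticsearch requires the kernel setting vm.max_map_count to be at least 262144, and the manifests here do not raise it. A common pattern is a privileged initContainer added under spec.template.spec of the Deployment (and of the data node StatefulSet later on); this is a sketch, not part of the original manifests:

```yaml
# Sketch: raise vm.max_map_count on the node before the ES container starts.
# Add under spec.template.spec alongside the containers section.
initContainers:
- name: increase-vm-max-map
  image: busybox
  command: ["sysctl", "-w", "vm.max_map_count=262144"]
  securityContext:
    privileged: true
```

If privileged containers are not allowed in your cluster, the same sysctl can instead be applied on the nodes themselves.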

2.5 Create the three master resource objects

$ kubectl apply  -f elasticsearch-master.configmap.yaml \
                 -f elasticsearch-master.service.yaml \
                 -f elasticsearch-master.deployment.yaml
configmap/elasticsearch-master-config created
service/elasticsearch-master created
deployment.apps/elasticsearch-master created

$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-master-7fc5cc8957-jfjmr   1/1     Running   0          23m

Once the Pod reaches the Running state, the master node has been installed successfully.

3. Deploy the ElasticSearch data nodes

Next, install the ES data nodes, which host the cluster's data and execute queries.

3.1 Create the data node ConfigMap manifest

As with the master node, a ConfigMap holds the data node's ES configuration. The manifest is as follows:

elasticsearch-data.configmap.yaml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: true
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
---

3.2 Create the data node Service manifest

Like the master, the data nodes only need port 9300 to communicate with the other nodes. The resource object is as follows:

elasticsearch-data.service.yaml 
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---

3.3 Create the data node StatefulSet

The data nodes use a StatefulSet controller because there are multiple of them and each holds different data that must be stored separately; volumeClaimTemplates defines a storage volume for each data node. The corresponding manifest is as follows:

elasticsearch-data.statefulset.yaml 
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms4096m -Xmx4096m"
        - name: ELASTIC_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-client-ssd
      resources:
        requests:
          storage: 500Gi
---

3.4 Create the data node resource objects

$ kubectl apply -f elasticsearch-data.configmap.yaml \
                -f elasticsearch-data.service.yaml \
                -f elasticsearch-data.statefulset.yaml

configmap/elasticsearch-data-config created
service/elasticsearch-data created
statefulset.apps/elasticsearch-data created

As with the master node, copy the prepared ES certificate file into each data node's PVC directory (one copy per data node):

$ mkdir ${DATA-PVC_HOME}/certs
$ cp elastic-certificates.p12 ${DATA-PVC_HOME}/certs/

When the Pods reach the Running state, the data nodes have started successfully:

$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-data-0                    1/1     Running   0          47m
elasticsearch-data-1                    1/1     Running   0          47m
elasticsearch-master-7fc5cc8957-jfjmr   1/1     Running   0          100m

4. Deploy the ElasticSearch client node

The client node exposes an HTTP interface for querying data and passes data on to the data nodes.

4.1 Create the client node ConfigMap manifest

elasticsearch-client.configmap.yaml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-client-config
  labels:
    app: elasticsearch
    role: client
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: false
      ingest: true

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
---

4.2 Create the client node Service manifest

The client node exposes two ports: 9300 for transport with the other cluster nodes, and 9200 for the HTTP API. Note that the default Kubernetes NodePort range is 30000-32767, so nodePort: 9200 only works if the API server's --service-node-port-range has been extended accordingly. The resource object is as follows:

elasticsearch-client.service.yaml 
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  ports:
  - port: 9200
    name: client
    nodePort: 9200
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: client
  type: NodePort
---

4.3 Create the client node Deployment manifest

Note that this Deployment mounts a PVC named pvc-elasticsearch-client, which must be created beforehand in the same way as the master node's PVC (with the certificate file copied into its certs directory).

elasticsearch-client.deployment.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      containers:
      - name: elasticsearch-client
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-client
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: ES_JAVA_OPTS
          value: "-Xms2048m -Xmx2048m"
        - name: ELASTIC_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-client-config
      - name: storage
        persistentVolumeClaim:
          claimName: pvc-elasticsearch-client
---

4.4 Create the client node resource objects

$ kubectl apply  -f elasticsearch-client.configmap.yaml \
                 -f elasticsearch-client.service.yaml \
                 -f elasticsearch-client.deployment.yaml

configmap/elasticsearch-client-config created
service/elasticsearch-client created
deployment.apps/elasticsearch-client created

Once all Pods reach the Running state, the installation has succeeded:

$ kubectl get pods -n elastic -l app=elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-f4d4ff794-6gxpz    1/1     Running   0          23m
elasticsearch-data-0                    1/1     Running   0          47m
elasticsearch-data-1                    1/1     Running   0          47m
elasticsearch-master-7fc5cc8957-jfjmr   1/1     Running   0          54m

While the client is being deployed, you can watch the cluster health transitions with:

$ kubectl logs -f -n elastic \
>   $(kubectl get pods -n elastic | grep elasticsearch-master | sed -n 1p | awk '{print $1}') \
>   | grep "Cluster health status changed from"
{"type": "server", "timestamp": "2020-08-18T06:35:20,859Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]]]).", "cluster.uuid": "Yy1ctnq7SjmRsuYfbJGSzA", "node.id": "z7vrjgYcTUiiB7tb0kXQ1Q"  }
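As a side note, the from/to states can be pulled out of such a log message with a plain grep; a local sketch on the message text above:

```shell
# Extract the "[FROM] to [TO]" health transition from an ES log message.
line='Cluster health status changed from [RED] to [YELLOW] (reason: [shards started])'
echo "$line" | grep -o 'from \[[A-Z]*\] to \[[A-Z]*\]'
# → from [RED] to [YELLOW]
```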

5. Generate the initial passwords

Because we enabled the X-Pack security module to protect the cluster, it needs initial passwords. Use the bin/elasticsearch-setup-passwords command inside the client node container to generate them, as shown below:

$ kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
  -n elastic \
  -- bin/elasticsearch-setup-passwords auto -b

Changed password for user apm_system
PASSWORD apm_system = 5wg8JbmKOKiLMNty90l1

Changed password for user kibana_system
PASSWORD kibana_system = 1bT0U5RbPX1e9zGNlWFL

Changed password for user kibana
PASSWORD kibana = 1bT0U5RbPX1e9zGNlWFL

Changed password for user logstash_system
PASSWORD logstash_system = 1ihEyA5yAPahNf9GuRJ9

Changed password for user beats_system
PASSWORD beats_system = WEWDpPndnGvgKY7ad0T9

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = MOCszTmzLmEXQrPIOW4T

Changed password for user elastic
PASSWORD elastic = bbkrgVrsE3UAfs2708aO
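If you save that output to a file, a small awk one-liner can pull out a single user's password; the file name passwords.txt and the "PASSWORD <user> = <value>" line shape below are assumptions based on the output above:

```shell
# Recreate a fragment of the setup-passwords output, then extract
# the password for the 'elastic' user from "PASSWORD user = value" lines.
cat > passwords.txt <<'EOF'
Changed password for user kibana_system
PASSWORD kibana_system = 1bT0U5RbPX1e9zGNlWFL
Changed password for user elastic
PASSWORD elastic = bbkrgVrsE3UAfs2708aO
EOF
awk '$1 == "PASSWORD" && $2 == "elastic" { print $4 }' passwords.txt
# → bbkrgVrsE3UAfs2708aO
```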

Once generated, store the elastic user's password in a Kubernetes Secret object:

$ kubectl create secret generic elasticsearch-pw-elastic \
  -n elastic \
  --from-literal password=bbkrgVrsE3UAfs2708aO
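For reference, Kubernetes stores Secret values base64-encoded; the round trip below mirrors what kubectl does with --from-literal (a local sketch using the password printed above):

```shell
# Kubernetes base64-encodes Secret data; encode and decode the value locally.
password='bbkrgVrsE3UAfs2708aO'
encoded=$(printf '%s' "$password" | base64)
echo "$encoded"                       # the value as stored in .data.password
printf '%s' "$encoded" | base64 -d    # decodes back to the original value
```

The same decoding applies when reading the secret back, e.g. with kubectl get secret -o jsonpath='{.data.password}'.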

6. Deploy Kibana

With the ElasticSearch cluster installed, we deploy Kibana as the visualization tool for the ElasticSearch data.

6.1 Create the Kibana ConfigMap manifest

Create a ConfigMap resource for Kibana's configuration file, which defines the ElasticSearch address, user, and password. The corresponding manifest is as follows:

kibana.configmap.yaml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0

    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---

6.2 Create the Kibana Service manifest

Kibana is exposed through an Ingress below, so a plain ClusterIP Service on port 5601 is sufficient:
kibana.service.yaml 
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    name: webinterface
  selector:
    app: kibana
---

6.3 Create the Kibana Deployment manifest

kibana.deployment.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: "I18N_LOCALE"
          value: "zh-CN"
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
---

6.4 Create the Kibana Ingress manifest

Here an Ingress exposes the Kibana service so it can be accessed by domain name. The manifest is as follows:

kibana.ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: elastic
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: kibana.demo.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
        path: /

6.5 Create the resource objects from the Kibana manifests

$ kubectl apply  -f kibana.configmap.yaml \
                 -f kibana.service.yaml \
                 -f kibana.deployment.yaml \
                 -f kibana.ingress.yaml

configmap/kibana-config created
service/kibana created
deployment.apps/kibana created
ingress/kibana created

After deployment, check the startup status in the Kibana logs:

$ kubectl logs -f -n elastic $(kubectl get pods -n elastic | grep kibana | sed -n 1p | awk '{print $1}') \
>      | grep "Status changed from yellow to green"
{"type":"log","@timestamp":"2020-08-18T06:35:29Z","tags":["status","plugin:elasticsearch@7.8.0","info"],"pid":8,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

Once the status turns green, we can open Kibana in a browser via the Ingress domain name:

$ kubectl get ingress -n elastic
NAME     HOSTS             ADDRESS   PORTS     AGE
kibana   kibana.demo.com             80        40d

6.6 Log in to Kibana and configure it

Log in with the elastic user and the generated password stored in the Secret created above:

Create a superuser for access: click Stack Management > Users > Create user and fill in the user details.

Once created, you can manage the cluster with the custom admin user.
