Kubernetes in Practice, Part 11: EFK

Published by 百聯達 on 2018-04-07
I. Preface

1. When installing the Kubernetes cluster we downloaded the archive https://dl.k8s.io/v1.8.5/kubernetes-client-linux-amd64.tar.gz. After unpacking it, the cluster/addons directory contains YAML files for the various add-ons, most of which need only minor changes before they can be used.
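
For example, a minimal sketch of locating the EFK manifests (the kubernetes/ top-level directory and the fluentd-elasticsearch add-on directory are assumptions based on the upstream release layout):

wget https://dl.k8s.io/v1.8.5/kubernetes-client-linux-amd64.tar.gz
tar -xzf kubernetes-client-linux-amd64.tar.gz
# List the bundled EFK add-on manifests.
ls kubernetes/cluster/addons/fluentd-elasticsearch/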

2. Building a Kubernetes cluster involves pulling many images. One option is to rent an Alibaba Cloud ECS server located in Hong Kong, pull the images there, export them with docker save -o, and then either import them on each node with docker load or push them to a personal image registry.
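
A minimal sketch of that export/import workflow, using the Elasticsearch image from this article as the example:

# On the Hong Kong host: export the pulled image to a tarball.
docker save -o elasticsearch-v2.4.1-2.tar gcr.io/google_containers/elasticsearch:v2.4.1-2
# Copy the tarball to each cluster node (scp, rsync, ...), then import it into the local Docker daemon.
docker load -i elasticsearch-v2.4.1-2.tar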

3. Starting with Kubernetes 1.8, the EFK add-on deploys elasticsearch-logging as a StatefulSet, but a bug keeps the elasticsearch-logging-0 pod from ever being created successfully. I therefore recommend sticking with the pre-1.8 manifests, which use a ReplicationController.

4. To install EFK successfully, kube-dns must be installed first; it is covered in earlier articles in this series.
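
A quick check that kube-dns is running, assuming the add-on carries the standard k8s-app=kube-dns label:

kubectl get pods -n kube-system -l k8s-app=kube-dns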

5. The elasticsearch and kibana versions used for the EFK installation must be compatible with each other. The images used here are:
gcr.io/google_containers/elasticsearch:v2.4.1-2
gcr.io/google_containers/fluentd-elasticsearch:1.22
gcr.io/google_containers/kibana:v4.6.1-1
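
To stage all three images on the overseas host before exporting them, something along these lines works:

# Pull every image needed by the EFK manifests below.
for img in \
  gcr.io/google_containers/elasticsearch:v2.4.1-2 \
  gcr.io/google_containers/fluentd-elasticsearch:1.22 \
  gcr.io/google_containers/kibana:v4.6.1-1; do
  docker pull "$img"
done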

II. YAML Files


efk-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
  - kind: ServiceAccount
    name: efk
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
es-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: gcr.io/google_containers/elasticsearch:v2.4.1-2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
es-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
fluentd-es-ds.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22
        command:
          - '/bin/sh'
          - '-c'
          - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
kibana-controller.yaml. One point needs special mention: the value of KIBANA_BASE_URL (see the env section below) must be set to an empty string, because the default value breaks access to Kibana.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          # Must stay an empty string; the default value breaks access to Kibana.
          - name: "KIBANA_BASE_URL"
            value: ""
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
kibana-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging

III. Startup and Verification

1. Create the resources (run in the directory containing the YAML files above):
kubectl create -f .


2. Use kubectl logs -f to check the logs of the relevant pods and confirm they started correctly; the kibana-logging-* pod takes a while to come up. A sketch follows.
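
For example (the kibana pod name here is hypothetical; substitute the name reported by kubectl get pods):

# List the EFK pods, then follow the logs of one of them.
kubectl get pods -n kube-system | grep -E 'elasticsearch-logging|fluentd-es|kibana-logging'
kubectl logs -f -n kube-system kibana-logging-3757371098-xbxsw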


3. Verify Elasticsearch (it can be reached through a kubectl proxy, as sketched below):
http://IP:PORT/_cat/nodes?v
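
A minimal proxy-based check; the service proxy path is an assumption based on the apiserver's standard service proxy URL scheme:

# Start a local proxy to the apiserver (it listens on 127.0.0.1:8001 by default).
kubectl proxy &
# Query Elasticsearch through the apiserver's service proxy.
curl 'http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/nodes?v'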

Sample output:
host      ip        heap.percent ram.percent load node.role master name
10.1.88.4 10.1.88.4 9            87          0.45 d         m      elasticsearch-logging-v1-hnfv2
10.1.67.4 10.1.67.4 6            91          0.03 d         *      elasticsearch-logging-v1-zmtdl
http://IP:PORT/_cat/indices?v

Sample output:
health status index               pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2018.04.07 5   1   515        0            1.1mb      584.4kb
green  open   .kibana             1   1   2          0            22.2kb     9.7kb
green  open   logstash-2018.04.06 5   1   15364      0            7.3mb      3.6mb
4. Verify Kibana:
http://IP:PORT/app/kibana#/discover?_g
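
Through the same kubectl proxy, Kibana should be reachable at a path like the following (again an assumption based on the standard service proxy scheme; a browser is the more useful client here):

curl -I 'http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana'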


IV. Notes

To set up EFK successfully, pay attention to the following points:
1. Make sure kube-dns has been installed successfully.
2. With this setup, elasticsearch-logging uses a ReplicationController.
3. The elasticsearch and kibana versions must be compatible.
4. Set the value of KIBANA_BASE_URL to an empty string ("").
