Kubernetes in Practice, Part 11: EFK
I: Foreword
1. When installing the Kubernetes cluster we downloaded the compressed package (…r.gz). After extracting it, the cluster/addons directory contains the YAML files for each add-on; in most cases they only need minor changes before use.
2. Building a Kubernetes cluster involves downloading many images. One option is to buy an Alibaba Cloud ECS instance located in Hong Kong, download the images there, export them with docker save -o, and then either import them on the cluster nodes with docker load or push them to a personal image registry (see the sketch after this list).
3. Starting with Kubernetes 1.8, the EFK add-on deploys elasticsearch-logging as a StatefulSet, but a bug keeps the elasticsearch-logging-0 pod from ever being created successfully. It is therefore better to stick with the pre-1.8 manifests, which use a ReplicationController.
4. For EFK to install successfully, kube-dns must be installed first; this was covered in an earlier article.
5. The Elasticsearch and Kibana versions used for EFK must be compatible. The images used here are:
gcr.io/google_containers/elasticsearch:v2.4.1-2
gcr.io/google_containers/fluentd-elasticsearch:1.22
gcr.io/google_containers/kibana:v4.6.1-1
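As an illustration of point 2, the Elasticsearch image above could be exported and re-imported roughly as follows (the tarball filename and the private registry address are placeholders, not from the original article):

# On a machine that can reach gcr.io: pull and export the image
docker pull gcr.io/google_containers/elasticsearch:v2.4.1-2
docker save -o elasticsearch_v2.4.1-2.tar gcr.io/google_containers/elasticsearch:v2.4.1-2

# Copy the tarball to a cluster node and import it
docker load -i elasticsearch_v2.4.1-2.tar

# Or re-tag and push it to a personal registry (registry.example.com is a placeholder)
docker tag gcr.io/google_containers/elasticsearch:v2.4.1-2 registry.example.com/elasticsearch:v2.4.1-2
docker push registry.example.com/elasticsearch:v2.4.1-2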
II: YAML Files
efk-rbac.yaml
es-controller.yaml
es-service.yaml
fluentd-es-ds.yaml
kibana-controller.yaml (note: the KIBANA_BASE_URL value must be set to an empty string; the default value causes problems when accessing Kibana)
kibana-service.yaml
efk-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
- kind: ServiceAccount
  name: efk
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
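A quick way to confirm that the ServiceAccount and ClusterRoleBinding used by the EFK pods were created (a minimal check, not from the original article):

kubectl get serviceaccount efk -n kube-system
kubectl get clusterrolebinding efk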
es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: gcr.io/google_containers/elasticsearch:v2.4.1-2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
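To verify that the ReplicationController created its two Elasticsearch pods, something along these lines can be used (a minimal check based on the labels in the manifest above):

kubectl get rc elasticsearch-logging-v1 -n kube-system
kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging -o wide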
es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
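The other EFK components reach Elasticsearch through this Service inside kube-system, so it is worth confirming that it got a ClusterIP and that its endpoints point at the Elasticsearch pods (a minimal check, not from the original article):

kubectl get svc elasticsearch-logging -n kube-system
kubectl get endpoints elasticsearch-logging -n kube-system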
fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22
        command:
        - '/bin/sh'
        - '-c'
        - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
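Because of the nodeSelector above, the DaemonSet only schedules fluentd onto nodes carrying the beta.kubernetes.io/fluentd-ds-ready=true label. A minimal sketch for labelling nodes (the node name is a placeholder):

# Label every node whose container logs should be collected
kubectl label node <node-name> beta.kubernetes.io/fluentd-ds-ready=true

# One fluentd-es pod should then appear per labelled node
kubectl get pods -n kube-system -l k8s-app=fluentd-es -o wide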
kibana-controller.yaml (KIBANA_BASE_URL is deliberately set to an empty string; the default value causes problems when accessing Kibana)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: ""
        - name: "KIBANA_BASE_URL"
          value: ""
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
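A rough way to watch the Kibana Deployment come up and inspect its startup log (the pod name has to be filled in from the get pods output; this is not from the original article):

kubectl rollout status deployment/kibana-logging -n kube-system
kubectl get pods -n kube-system -l k8s-app=kibana-logging
kubectl logs -f <kibana-logging-pod-name> -n kube-system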
kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
III: Startup and Verification
1. Create the resources
kubectl create -f .
2. Use kubectl logs -f to check the logs of the relevant pods and confirm that they started normally, as shown below. The kibana-logging-* pod needs some time to start.
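For example (pod names vary between clusters; the placeholder below must be replaced with a real pod name from the first command's output):

kubectl get pods -n kube-system | grep -E 'elasticsearch-logging|fluentd-es|kibana-logging'
kubectl logs -f <pod-name> -n kube-system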
3. Elasticsearch verification (a proxy can be created with kubectl proxy; see the example after the output below)
http://IP:PORT/_cat/nodes?v
host      ip        heap.percent ram.percent load node.role master name
10.1.88.4 10.1.88.4            9          87 0.45 d         m      elasticsearch-logging-v1-hnfv2
10.1.67.4 10.1.67.4            6          91 0.03 d         *      elasticsearch-logging-v1-zmtdl
http://IP:PORT/_cat/indices?v
health status index               pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2018.04.07   5   1        515            0      1.1mb        584.4kb
green  open   .kibana               1   1          2            0     22.2kb          9.7kb
green  open   logstash-2018.04.06   5   1      15364            0      7.3mb          3.6mb
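One way to reach these endpoints is through the API-server proxy started by kubectl proxy; the sketch below assumes the standard services/<name>/proxy URL scheme and the elasticsearch-logging Service defined earlier:

kubectl proxy --port=8001 &

curl "http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/nodes?v"
curl "http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/indices?v"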
4. Kibana verification
http://IP:PORT/app/kibana#/discover?_g
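Since KIBANA_BASE_URL is empty, one simple way to get an IP:PORT to browse is to port-forward to the Kibana pod (the pod name is a placeholder; this is just one possible access method, not the article's own):

kubectl port-forward <kibana-logging-pod-name> 5601:5601 -n kube-system
# then open http://127.0.0.1:5601/app/kibana in a browser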
IV: Notes
To set up EFK successfully, pay attention to the following points:
1. Make sure kube-dns has been installed successfully (a quick check is shown below).
2. In this version, elasticsearch-logging uses a ReplicationController.
3. The Elasticsearch and Kibana versions must be compatible.
4. The KIBANA_BASE_URL value is set to "".
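A quick sanity check for the kube-dns prerequisite in point 1 (a minimal sketch, assuming the standard k8s-app=kube-dns labels of the kube-dns add-on):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc kube-dns -n kube-system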