[Original] Deploying the Tier-1 Nginx Reverse Proxy on K8S in Practice

Posted by wsjhk on 2020-09-22
k8s

Contents:

1) Background
2) Solution analysis
3) Implementation details
4) Monitoring and alerting
5) Log collection
6) Testing

  

1. Background

    As shown in the figure below, with the tier-1 Nginx deployed the traditional way, maintenance and management become complex, tedious, time-consuming, and error-prone as the business grows. Our Nginx instances are grouped by business line: different businesses are isolated on different Nginx instance groups, implemented by having nginx.conf include each group's configuration files.

(figure: traditional tier-1 Nginx deployment)

    If there were a way to simplify Nginx deployment and scaling management, day-to-day work would only involve releasing nginx configuration files. The most popular management model today is containerized deployment, and nginx itself is a stateless service, which suits this scenario very well. So, after more than a month of design, implementation, and testing, we finally moved Nginx onto the cloud.

 

2. Solution Analysis

1) The architecture is shown below:

(figure: architecture of the K8S-based Nginx deployment)

2) Overall flow:
After modifying the configuration in the corresponding directory on the release machine (nginx003), we push the latest configuration to the gitlab repository. A reloader container in each pod pulls the gitlab repository into the pod every 10s; the pod then checks whether the include targets of nginx.conf under /usr/local/nginx/conf-configmap/ include the changed group's files, and reloads only if they do.
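The change-detection step of this loop can be sketched in shell. This is a minimal, self-contained demo under our assumptions about the diff.files convention; a throwaway local git repository stands in for the gitlab one:

```shell
# Demo of the reloader's change detection: compare the previous and
# current commit ids and write the list of changed files to diff.files.
set -e

repo=$(mktemp -d)                      # throwaway repo standing in for gitlab
cd "$repo"
git init -q .
git -c user.email=demo@demo -c user.name=demo commit -q --allow-empty -m init
echo 'server { listen 80; }' > group01.conf
git add group01.conf
git -c user.email=demo@demo -c user.name=demo commit -q -m 'add group01.conf'

# the reloader keeps the old and new commit ids between iterations
old_id=$(git rev-parse HEAD~1)         # cf. /gitrepo/local_current_commit_id.old
new_id=$(git rev-parse HEAD)           # cf. /gitrepo/local_current_commit_id.new

# files changed between the two ids -> diff.files
git diff --name-only "$old_id" "$new_id" > /tmp/demo_diff.files
cat /tmp/demo_diff.files               # prints: group01.conf
```

In the real pod this runs against the clone under /gitrepo/ after each `git pull`.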

3. Implementation Details

    We deploy the Nginx instances on K8S. Since Nginx is managed in groups, one Deployment corresponds to one group; apart from the name and the referenced include files, the Deployment yaml manifests are identical. Each Deployment sets its replicas according to the group's business load, and each pod consists of four containers: one initContainer, init-reloader, plus three application containers, nginx, reloader, and nginx-exporter. Below we look at what each container does.

 

1) The init-reloader container
This is an initContainer that performs some initialization work.

1.1) Image:

# cat Dockerfile
FROM fulcrum/ssh-git:latest

COPY init-start.sh /init-start.sh
COPY start.sh /start.sh
COPY Dockerfile /Dockerfile
RUN apk add --no-cache tzdata ca-certificates libc6-compat inotify-tools bc bash && echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" >> /etc/timezone

1.2) The init-start.sh script

Functions:
(1) Pull the latest configuration from the repository and cp it to /usr/local/nginx/conf.d/
(2) Create the proxy-cache directories under /data/proxy_cache_path/
(3) Under /usr/local/nginx/conf/servers/, create the corresponding conf files recording the backend realserver:port entries
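Step (3) can be illustrated with a small helper. This is only a sketch under our assumptions about the file format (one `server realserver:port;` line per backend, the syntax an nginx upstream include expects); the function name `write_servers_conf` is hypothetical:

```shell
# Hypothetical helper: write a backend list file, as init-start.sh does
# under /usr/local/nginx/conf/servers/, in nginx upstream-include syntax.
write_servers_conf() {
    out="$1"; shift                     # output file, then realserver:port args
    : > "$out"                          # truncate the output file
    for backend in "$@"; do
        printf 'server %s;\n' "$backend" >> "$out"
    done
}

# demo against a temp dir instead of /usr/local/nginx/conf/servers/
dir=$(mktemp -d)
write_servers_conf "$dir/group01_backend.conf" 10.0.0.11:8080 10.0.0.12:8080
cat "$dir/group01_backend.conf"
# server 10.0.0.11:8080;
# server 10.0.0.12:8080;
```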

2) The nginx-exporter container

    This container runs the exporter that exposes nginx metrics to prometheus.

2.1) Image:

# cat Dockerfile
FROM busybox:1.28

COPY nginx_exporter /nginx_exporter/nginx_exporter
COPY start.sh /start.sh
ENV start_cmd="/nginx_exporter/nginx_exporter -nginx.scrape-uri http://127.0.0.1:80/ngx_status"

  

2.2) The start.sh script

Functions:
(1) Check that nginx is listening: num=$(netstat -anlp | grep -w 80 | grep nginx | grep LISTEN | wc -l)
(2) Start the exporter: /nginx_exporter/nginx_exporter -nginx.scrape-uri http://127.0.0.1:80/ngx_status
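The listening check in (1) can be factored into a testable helper. A sketch assuming netstat-style input; the helper name is ours, not from the article's script:

```shell
# Count nginx LISTEN sockets on port 80 from netstat-style lines on stdin,
# mirroring: netstat -anlp | grep -w 80 | grep nginx | grep LISTEN | wc -l
count_nginx_listeners() {
    grep -w 80 | grep nginx | grep LISTEN | wc -l
}

# demo with canned netstat output; in the container this would be:
#   num=$(netstat -anlp | count_nginx_listeners)
num=$(printf '%s\n' \
  'tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 7/nginx: master' \
  'tcp 0 0 127.0.0.1:9113 0.0.0.0:* LISTEN 12/nginx_exporter' \
  | count_nginx_listeners)
echo "$num"    # prints: 1
```

Once the count is non-zero, start.sh starts the exporter against http://127.0.0.1:80/ngx_status.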

      

3) The nginx container
This is the business container running the openresty instance.

3.1) Image

FROM centos:7.3.1611

COPY Dockerfile /dockerfile/
#COPY sysctl.conf /etc/sysctl.conf
USER root
RUN yum install -y logrotate cronie initscripts bc wget git && yum clean all
ADD nginx /etc/logrotate.d/nginx
ADD root /var/spool/cron/root
ADD kill_shutting_down.sh /kill_shutting_down.sh
ADD etc-init.d-nginx /etc-init.d-nginx
COPY openresty.zip /usr/local/openresty.zip
COPY start.sh /start.sh
COPY reloader-start.sh /reloader-start.sh
RUN chmod +x /start.sh /kill_shutting_down.sh /reloader-start.sh && unzip /usr/local/openresty.zip -d /usr/local/ && cd /usr/local/openresty && echo "y" | bash install.sh && rm -rf /usr/local/openresty /var/cache/yum && localedef -c -f UTF-8 -i zh_CN zh_CN.utf8 && mkdir -p /usr/local/nginx/conf/servers && chmod -R 777 /usr/local/nginx/conf/servers && cp -f /etc-init.d-nginx /etc/init.d/nginx && chmod +x /etc/init.d/nginx

ENTRYPOINT ["/start.sh"]

  

3.2) The start.sh script

Functions:
(1) Start crond so the scheduled jobs handle log rotation
(2) Once the directory /usr/local/nginx/conf.d is non-empty, start nginx
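Step (2) amounts to "block until the shared emptyDir has been populated by the initContainer, then start nginx". A minimal sketch; the function name and the timeout are our own:

```shell
# Wait until a directory exists and is non-empty, polling once per second.
# In the pod this guards nginx startup until init-reloader has copied the
# configuration into the shared /usr/local/nginx/conf.d emptyDir volume.
wait_for_conf() {
    dir="$1"; tries="${2:-60}"
    while [ "$tries" -gt 0 ]; do
        [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ] && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# demo: populate a temp dir and check it
demo=$(mktemp -d)
touch "$demo/group01.conf"
wait_for_conf "$demo" 3 && echo "conf ready"   # prints: conf ready
# in start.sh this would be followed by starting nginx, e.g. /etc/init.d/nginx start
```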

      

4) The reloader container
This auxiliary container implements the release-workflow logic.
4.1) Image: the same as the nginx container

4.2) The reloader-start.sh script

Functions:
(1) get_reload_flag
       Compares the changed file names in /gitrepo/diff.files against the includes under /usr/local/nginx/conf-configmap/ to decide whether a reload is needed for this group (flag=1 means reload).

(2) check_mem
       Returns 1 when free memory is below 30%.

(3) kill_shutting_down
       First runs the free-memory check; if it is below 30%, kills the "shutting down" nginx worker processes.

(4) nginx_force_reload (only performs a reload)
       kill -HUP ${nginxpid}

(5) reload
     (5.1) First cp the configuration files from the repository to /usr/local/nginx/conf.d;
     (5.2) When /usr/local/nginx/conf.d is not empty:
            create the proxy_cache_path directories → write the files under /usr/local/nginx/conf/servers/ → nginx -t → kill_shutting_down → nginx_force_reload

  The overall flow, summarized:
    1) git pull the repository, rename the old commit id file (/gitrepo/local_current_commit_id.old), and record the new commit id (/gitrepo/local_current_commit_id.new);
    2) diff the old and new commit ids to write the changed files to /gitrepo/diff.files;
    3) call get_reload_flag to decide whether this group's nginx needs a reload;
    4) if /gitrepo/diff.files contains the "nginx_force_reload" marker, run kill_shutting_down and then nginx_force_reload.
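The two decision helpers can be sketched as follows. The function names follow the article; their internals are illustrative assumptions (one changed path per line in diff.files, and the group's include lines mounted from the configmap):

```shell
# get_reload_flag: succeed (reload needed) when any file listed in the
# diff file is referenced by this group's include list. Matching on
# basenames here is a simplification for the sketch.
get_reload_flag() {
    diff_files="$1"      # e.g. /gitrepo/diff.files
    includes="$2"        # e.g. a file under /usr/local/nginx/conf-configmap/
    while IFS= read -r changed; do
        [ -n "$changed" ] || continue
        if grep -qF "$(basename "$changed")" "$includes"; then
            return 0
        fi
    done < "$diff_files"
    return 1
}

# check_mem: return 1 when available memory drops below 30% of total,
# reading /proc/meminfo (Linux only); illustrative implementation.
check_mem() {
    awk '/MemTotal:/ {t=$2} /MemAvailable:/ {a=$2}
         END {exit (a * 100 / t < 30 ? 1 : 0)}' /proc/meminfo
}

# demo with temp files standing in for the real paths
d=$(mktemp -d)
printf 'group01/app.conf\n' > "$d/diff.files"
printf 'include conf.d/group01/app.conf;\n' > "$d/includes"
get_reload_flag "$d/diff.files" "$d/includes" && echo "reload"   # prints: reload
printf 'include conf.d/group02/other.conf;\n' > "$d/includes"
get_reload_flag "$d/diff.files" "$d/includes" || echo "skip"     # prints: skip
```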

  

5) The Deployment

    With the container functionality above in place and packaged into images, we can deploy. Here is the full Deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: slb-nginx-group01
  name: slb-nginx-group01
  namespace: slb-nginx
spec:
  replicas: 3  # 3 replicas, i.e. 3 pods
  selector:
    matchLabels:
      app: slb-nginx-group01
  strategy:  # rolling-update strategy
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: slb-nginx-group01
        exporter: nginx
      annotations:  # annotations for the prometheus integration
        prometheus.io/path: /metrics
        prometheus.io/port: "9113"
        prometheus.io/scrape: "true"
    spec:
      nodeSelector:  # select nodes by label
        app: slb-nginx-label-group01
      tolerations:  # tolerate the dedicated node taint
      - key: "node-type"
        operator: "Equal"
        value: "slb-nginx-label-group01"
        effect: "NoExecute"
      affinity:  # pod anti-affinity: prefer spreading pods across Alibaba Cloud availability zones
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - slb-nginx-group01
              topologyKey: "failure-domain.beta.kubernetes.io/zone"
      shareProcessNamespace: true   # share the process namespace between containers
      hostAliases:  # extra hosts entries
      - ip: "xxx.xxx.xxx.xxx"
        hostnames:
        - "www.test.com"
      initContainers:
      - image: www.test.com/library/reloader:v0.0.1
        name: init-reloader
        command: ["/bin/sh"]
        args: ["/init-start.sh"]
        env:
        - name: nginx_git_repo_address
          value: "[email protected]:psd/nginx-conf.git"
        volumeMounts:
          - name: code-id-rsa
            mountPath: /root/.ssh/code_id_rsa
            subPath: code_id_rsa
          - name: nginx-shared-confd
            mountPath: /usr/local/nginx/conf.d/
          - name: nginx-gitrepo
            mountPath: /gitrepo/
      containers:
      - image: www.test.com/library/nginx-exporter:v0.4.2
        name: nginx-exporter
        command: ["/bin/sh", "-c", "/start.sh"]
        resources:
          limits:
            cpu: 50m
            memory: 50Mi
          requests:
            cpu: 50m
            memory: 50Mi
        volumeMounts:
          - name: time-zone
            mountPath: /etc/localtime
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: www.test.com/library/openresty:1.13.6
        name: nginx
        command: ["/bin/sh", "-c", "/start.sh"]
        lifecycle:
          preStop:
            exec:
              command:
              - sh
              - -c
              - sleep 10
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 90
          periodSeconds: 3
          successThreshold: 1
          httpGet:
            path: /healthz
            port: 8999
          timeoutSeconds: 4
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 4
          periodSeconds: 3
          successThreshold: 1
          tcpSocket:
            port: 80
          timeoutSeconds: 4
        resources:
          limits:
            cpu: 8
            memory: 8192Mi
          requests:
            cpu: 2
            memory: 8192Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
          - name: nginx-start-shell
            mountPath: /start.sh
            subPath: start.sh
            readOnly: true
          - name: conf-include
            mountPath: /usr/local/nginx/conf-configmap/
          - name: nginx-shared-confd
            mountPath: /usr/local/nginx/conf.d/
          - name: nginx-logs
            mountPath: /data/log/nginx/
          - name: data-nfs-webroot
            mountPath: /data_nfs/WebRoot
          - name: data-nfs-httpd
            mountPath: /data_nfs/httpd
          - name: data-nfs-crashdump
            mountPath: /data_nfs/crashdump
          - name: data-cdn
            mountPath: /data_cdn
      - image: www.test.com/library/openresty:1.13.6
        name: reloader
        command: ["/bin/sh", "-c", "/reloader-start.sh"]
        env:
        - name: nginx_git_repo_address
          value: "[email protected]:psd/nginx-conf.git"
        - name: MY_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: nginx
              resource: limits.memory
        securityContext:
          capabilities:
            add:
            - SYS_PTRACE
        resources:
          limits:
            cpu: 100m
            memory: 550Mi
          requests:
            cpu: 100m
            memory: 150Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
          - name: code-id-rsa
            mountPath: /root/.ssh/code_id_rsa
            subPath: code_id_rsa
            readOnly: true
          - name: reloader-start-shell
            mountPath: /reloader-start.sh
            subPath: reloader-start.sh
            readOnly: true
          - name: conf-include
            mountPath: /usr/local/nginx/conf-configmap/
          - name: nginx-shared-confd
            mountPath: /usr/local/nginx/conf.d/
          - name: nginx-gitrepo
            mountPath: /gitrepo/
      volumes:
        - name: code-id-rsa
          configMap:
            name: code-id-rsa
            defaultMode: 0600
        - name: nginx-start-shell
          configMap:
            name: nginx-start-shell
            defaultMode: 0755
        - name: reloader-start-shell
          configMap:
            name: reloader-start-shell
            defaultMode: 0755
        - name: conf-include
          configMap:
            name: stark-conf-include
        - name: nginx-shared-confd
          emptyDir: {}
        - name: nginx-gitrepo
          emptyDir: {}
        - name: nginx-logs
          emptyDir: {}
        - name: time-zone
          hostPath:
            path: /etc/localtime
        - name: data-nfs-webroot
          nfs:
            server: xxx.nas.aliyuncs.com
            path: "/WebRoot"
        - name: data-nfs-httpd
          nfs:
            server: xxx.nas.aliyuncs.com
            path: "/httpd"
        - name: data-nfs-crashdump
          nfs:
            server: xxx.nas.aliyuncs.com
            path: "/crashdump"
        - name: data-cdn
          persistentVolumeClaim:
            claimName: oss-pvc

      As shown above, the key parts of the deployment are: nodeSelector, tolerations, pod anti-affinity, shareProcessNamespace, resource limits (whether to oversubscribe), the container lifecycle hook, the livenessProbe and readinessProbe, the securityContext capabilities, and the storage mounts (NFS, OSS, emptyDir, and configmap).

 

6) Service manifests for the Alibaba Cloud SLB integration:

# cat external-group01-svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "xxx"
    #service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-scheduler: "wrr"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-remove-unscheduled-backend: "on"
  name: external-group01-svc
  namespace: slb-nginx
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  - port: 443
    name: https
    protocol: TCP
    targetPort: 443
  selector:
    app: slb-nginx-group01
  type: LoadBalancer

# cat inner-group01-svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "xxx"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-scheduler: "wrr"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-remove-unscheduled-backend: "on"
  name: inner-stark-svc
  namespace: slb-nginx
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  - port: 443
    name: https
    protocol: TCP
    targetPort: 443
  selector:
    app: slb-nginx-group01
  type: LoadBalancer

  

    As shown above, we create separate internal and external services against Alibaba Cloud SLB. The annotations specify the load-balancing algorithm, the target SLB instance, and whether to override existing listeners. The externalTrafficPolicy parameter restricts the SLB backend list to only the hosts that actually run the pods. After deployment, the load can be inspected in the Alibaba Cloud SLB console.

 

4. Monitoring and Alerting

We deploy the monitoring stack in the cluster with prometheus-operator. There are two ways to configure the monitoring:

1) First approach: create a Service and a ServiceMonitor:
# create the Service
# cat slb-nginx-exporter-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: slb-nginx-exporter-svc
  labels:
    app: slb-nginx-exporter-svc
  namespace: slb-nginx
spec:
  type: ClusterIP
  ports:
    - name: exporter
      port: 9113
      targetPort: 9113
  selector:
    exporter: nginx  # this selector matches the pod label in the deployment

# create the ServiceMonitor
# cat nginx-exporter-serviceMonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: nginx-exporter
  name: nginx-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: slb-nginx-exporter-svc  # must match the service's label
  namespaceSelector:
    matchNames:
    - slb-nginx
  endpoints:
  - interval: 3s
    port: "exporter"  # the port name must also match the service
    scheme: http
    path: '/metrics'
  jobLabel: k8s-nginx-exporter

# After these two resources are created, prometheus automatically applies the following configuration:
# kubectl -n monitoring exec -ti  prometheus-k8s-0 -c prometheus -- cat /etc/prometheus/config_out/prometheus.env.yaml
...
scrape_configs:
- job_name: monitoring/nginx-exporter/0
  honor_labels: false
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - slb-nginx
  scrape_interval: 3s
  metrics_path: /metrics
  scheme: http
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    regex: slb-nginx-exporter-svc
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: exporter
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Node;(.*)
    replacement: ${1}
    target_label: node
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
  - source_labels:
    - __meta_kubernetes_namespace
    target_label: namespace
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: service
  - source_labels:
    - __meta_kubernetes_pod_name
    target_label: pod
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: job
    replacement: ${1}
  - source_labels:
    - __meta_kubernetes_service_label_k8s_nginx_exporter
    target_label: job
    regex: (.+)
    replacement: ${1}
  - target_label: endpoint
    replacement: exporter
...

          With that, the metrics are collected into prometheus and we can configure the corresponding alerting rules:

(screenshot: alerting rules)

2) Second approach: add the configuration to prometheus directly:

# add the following pod annotations in the deployment
annotations:
  prometheus.io/path: /metrics
  prometheus.io/port: "9113"
  prometheus.io/scrape: "true"

# add a role: pod job; prometheus will then scrape the pods automatically
- job_name: 'slb-nginx-pods'
  honor_labels: false
  kubernetes_sd_configs:
  - role: pod
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - target_label: dc
    replacement: guangzhou
  - target_label: cluster
    replacement: guangzhou-test2
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] # these three rules correspond to the annotations above
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name

# add the alerting rules
# cat slb-nginx-pods.rules
groups:
- name: "internal tier-1 Nginx pods monitoring"
  rules:
  - alert: InternalNginxPodInstanceDown
    expr: nginx_up{dc="guangzhou",namespace="slb-nginx"} == 0
    for: 5s
    labels:
      severity: 0
      key: "nginx-k8s"
    annotations:
      description: "Tier-1 Nginx {{ $labels.instance }} went down within 5 seconds."
      summary: "Pod {{ $labels.pod }} in the {{ $labels.namespace }} namespace of the internal k8s 1.18 cluster is down"
      hint: "Log in to the internal k8s 1.18 cluster and check whether pod {{ $labels.pod }} in the {{ $labels.namespace }} namespace is healthy, or contact the k8s administrator."

  

        A test alert looks like this:

(screenshot: test alert notifications)

5. Log Collection

    Log collection is implemented by deploying a DaemonSet in the K8S cluster that collects the Nginx and container logs on every node. We use Filebeat for collection, ship the logs to a Kafka cluster, have Logstash read them from Kafka, filter them, and send them to the ES cluster, and finally view the logs in Kibana.

    The pipeline:

            Filebeat --> Kafka --> Logstash --> ES --> Kibana

 

        1) Deployment

    The Filebeat configuration and DaemonSet manifests:
# cat filebeat.yml
filebeat.inputs:
  - type: container
    #enabled: true
    #ignore_older: 1h
    paths:
      - /var/log/containers/slb-nginx-*.log
    fields:
      nodeIp: ${_node_ip_}
      kafkaTopic: 'log-collect-filebeat'
    fields_under_root: true
    processors:
      - add_kubernetes_metadata:
          host: ${_node_name_}
          default_indexers.enabled: false
          default_matchers.enabled: false
          indexers:
            - container:
          matchers:
            - logs_path:
                logs_path: '/var/log/containers'
                resource_type: 'container'
          include_annotations: ['DISABLE_STDOUT_LOG_COLLECT']
      - rename:
          fields:
            - from: "kubernetes.pod.ip"
              to: "containerIp"
            - from: "host.name"
              to: "nodeName"
            - from: "kubernetes.pod.name"
              to: "podName"
          ignore_missing: true
          fail_on_error: true

  - type: log
    paths:
      - "/var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/nginx-logs/*access.log"
    fields:
      nodeIp: ${_node_ip_}
      kafkaTopic: 'nginx-access-log-filebeat'
      topic: 'slb-nginx-filebeat'
    fields_under_root: true

processors:
  - drop_fields:
      fields: ["ecs", "agent", "input", "host", "kubernetes", "log"]

output.kafka:
  hosts: ["kafka-svc.kafka.svc.cluster.local:9092"]
  topic: '%{[kafkaTopic]}'
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

filebeat.config:
  inputs:
    enabled: true

# cat filebeat-ds.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  generation: 1
  labels:
    k8s-app: slb-nginx-filebeat
  name: slb-nginx-filebeat
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: slb-nginx-filebeat
  template:
    metadata:
      labels:
        k8s-app: slb-nginx-filebeat
    spec:
      nodeSelector:
        app: slb-nginx-guangzhou
      serviceAccount: filebeat
      serviceAccountName: filebeat
      containers:
      - args:
        - -c
        - /etc/filebeat/filebeat.yml
        - -e
        env:
        - name: _node_name_
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: _node_ip_
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: www.test.com/library/filebeat:7.6.1
        imagePullPolicy: IfNotPresent
        name: slb-nginx-filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          procMount: Default
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/filebeat
          name: filebeat-config
          readOnly: true
        - mountPath: /var/lib/kubelet/pods
          name: kubeletpods
          readOnly: true
        - mountPath: /var/log/containers
          name: containerslogs
          readOnly: true
        - mountPath: /var/log/pods
          name: pods-logs
          readOnly: true
        - mountPath: /var/lib/docker/containers
          name: docker-logs
          readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      volumes:
      - configMap:
          defaultMode: 384
          name: slb-nginx-filebeat-ds-config
        name: filebeat-config
      - hostPath:
          path: /var/lib/kubelet/pods
          type: ""
        name: kubeletpods
      - hostPath:
          path: /var/log/containers
          type: ""
        name: containerslogs
      - hostPath:
          path: /var/log/pods
          type: ""
        name: pods-logs
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: docker-logs
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

  

        2) Viewing the logs:

(screenshot: logs in Kibana)

6. Testing

    We tested both the logic and the robustness of the pods: the nginx release logic; modifying nginx configuration files and running nginx -t, nginx -s reload, and so on; automatic recovery after a pod is killed; scaling in and out; etc. Below is the log of a release run:

(screenshot: release-flow log)

    As the log shows, during an Nginx release the updated configuration files are printed, a syntax check is run, memory usage is checked to decide whether memory needs to be reclaimed, and finally the reload is executed. If the changed files do not belong to this group, no reload is performed.

 

Summary:

    In the end, over the course of a month we migrated the tier-1 Nginx into containers, achieving a more automated and simpler way of managing Nginx, and becoming more familiar with K8S along the way. We share this record here as a reference for migrating traditional applications to K8S and other container platforms. And if you can write code, an Operator is of course the thing to embrace.

 

 

Appendix: you are welcome to follow my WeChat official account (it has other posts):
