An etcd Backup Plan for When etcd Fails to Start After a Kubernetes Cluster Power Outage

Posted by yuhaohao on 2020-12-09

1. Problem Description

This is a single-master v1.13.10 cluster deployed from binaries, with etcd 3.3.10 running on the master node. After an unexpected power outage, the Kubernetes cluster failed to come back up. Checking the Kubernetes and etcd service logs showed that the etcd service was broken and could not be restarted; the relevant log output is shown below:

Jun 29 09:39:37 k8s001 etcd[3348]: recovered store from snapshot at index 2600026
Jun 29 09:39:37 k8s001 etcd[3348]: recovering backend from snapshot error: database snapshot file path error: snap: snapsho
Jun 29 09:39:37 k8s001.wf etcd[3348]: panic: recovering backend from snapshot error: database snapshot file path error: snap:
Jun 29 09:39:37 k8s001 etcd[3348]: panic: runtime error: invalid memory address or nil pointer dereference
Jun 29 09:39:37 k8s001 etcd[3348]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xb8cb90]
Jun 29 09:39:37 k8s001 etcd[3348]: goroutine 1 [running]:
Jun 29 09:39:37 k8s001 etcd[3348]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.NewServer.func1(0xc4
Jun 29 09:39:37 k8s001 etcd[3348]: /tmp/etcd-release-3.3.10/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/
Jun 29 09:39:37 k8s001.wf etcd[3348]: panic(0xde0ce0, 0xc4200b10a0)
Jun 29 09:39:37 k8s001 etcd[3348]: /usr/local/go/src/runtime/panic.go:502 +0x229
Jun 29 09:39:37 k8s001 etcd[3348]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf
Jun 29 09:39:37 k8s001 etcd[3348]: /tmp/etcd-release-3.3.10/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/
Jun 29 09:39:37 k8s001.wf etcd[3348]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.NewServer(0x7ffe787e
Jun 29 09:39:37 k8s001.wf etcd[3348]: /tmp/etcd-release-3.3.10/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/
Jun 29 09:39:37 k8s001 etcd[3348]: github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed.StartEtcd(0xc42019d680, 0
Jun 29 09:39:37 k8s001 etcd[3348]: /tmp/etcd-release-3.3.10/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/

Judging from the error log, etcd attempted a recovery but could not restore its data from the existing snapshot. Looking into this further, the community has reported similar problems, and the issue has not yet been fixed:
https://github.com/etcd-io/etcd/issues/11949
https://github.com/kubernetes/kubernetes/issues/88574

2. Solution

For a single-master cluster, the master is the core of the whole cluster: if the etcd service goes down, the entire cluster becomes unusable. So we set up an etcd backup plan here, to be prepared for exactly this situation.

2.1 Backup Plan

We use a Kubernetes CronJob to back up the etcd data on a schedule. In other words, while the Kubernetes cluster is healthy the CronJob runs the scheduled backup task; if the cluster is down, the CronJob simply will not run.

  • The YAML file for backing up the etcd data
[root@k8s001 home]# cat etcd_cronjob.yaml
---
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: etcd-backup
spec:
  # Run a backup every 30 minutes
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: etcd-disaster-recovery
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/role
                    operator: In
                    values:
                    - master
          containers:
          - name: etcd
            image: etcd:backup
            command:
            - sh
            - -c
            # Run the cleanup script before backing up, keeping at most 6 snapshots
            # (a sketch of this script follows after the commands below)
            - "export ETCDCTL_API=3; \
               sh -x /usr/bin/delete_image_reserver_5.sh; \
               etcdctl --endpoints $ENDPOINT snapshot save /snapshot/$(date +%Y%m%d_%H%M%S)_snapshot.db; \
               echo etcd backup success"
            env:
            - name: ENDPOINT
              value: "127.0.0.1:2379"
            volumeMounts:
            - mountPath: "/snapshot"
              name: snapshot
              subPath: etcd-snapshot
            - mountPath: /etc/localtime
              name: lt-config
            - mountPath: /etc/timezone
              name: tz-config
          restartPolicy: OnFailure
          volumes:
          - name: snapshot
            hostPath:
              path: /var
          - name: lt-config
            hostPath:
              path: /etc/localtime
          - name: tz-config
            hostPath:
              path: /etc/timezone
          hostNetwork: true
# Make the master node schedulable
[root@k8s001 ~]# kubectl uncordon ${masterip}
# Create the scheduled etcd backup CronJob
[root@k8s001 home]# kubectl apply -f etcd_cronjob.yaml
# List the backed-up snapshots
[root@k8s001 ~]# ls /var/etcd-snapshot/ -alh
total 45M
drwxr-xr-x   2 root root  216 Jun 30 16:05 .
drwxr-xr-x. 20 root root  288 Jun 28 16:10 ..
-rw-r--r--   1 root root 7.5M Jun 30 15:45 20200630_154509_snapshot.db
-rw-r--r--   1 root root 7.5M Jun 30 15:50 20200630_155009_snapshot.db
-rw-r--r--   1 root root 7.5M Jun 30 15:55 20200630_155510_snapshot.db
-rw-r--r--   1 root root 7.5M Jun 30 16:00 20200630_160010_snapshot.db
-rw-r--r--   1 root root 7.5M Jun 30 16:03 20200630_160357_snapshot.db
-rw-r--r--   1 root root 7.5M Jun 30 16:05 20200630_160510_snapshot.db
# Watch the job execution status
[root@k8s001 ~]# kubectl get job --watch
NAME                                   COMPLETIONS   DURATION   AGE
etcd-backup-1593504000   1/1           1s         9m20s
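The CronJob command above calls /usr/bin/delete_image_reserver_5.sh, baked into the etcd:backup image, to prune old snapshots so that at most 6 are kept. The original script is not included in this post; the following is only a minimal sketch of what such a cleanup script might look like, assuming snapshots are written to /snapshot inside the container:

#!/bin/sh
# Hypothetical sketch of /usr/bin/delete_image_reserver_5.sh:
# keep only the KEEP newest *_snapshot.db files under SNAPSHOT_DIR.
SNAPSHOT_DIR=/snapshot
KEEP=6

cd "$SNAPSHOT_DIR" || exit 1
# List snapshots newest first, skip the first $KEEP, delete the rest.
ls -1t ./*_snapshot.db 2>/dev/null | tail -n +"$((KEEP + 1))" | while read -r f; do
    rm -f "$f"
done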

2.2 Verification

Now let's verify that the snapshots we created can actually be used to restore the data:

2.2.1 Stop the cluster's etcd service and delete the etcd data

# Stop the etcd service
[root@k8s001 ~]# systemctl stop etcd
# Delete the etcd data
[root@k8s001 ~]# rm -rf /var/lib/etcd
# Check whether the cluster is still working
[root@k8s001 ~]# kubectl get pod
The connection to the server 172.16.33.5:6443 was refused - did you specify the right host or port?
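Before restoring, it can be worth sanity-checking the snapshot file itself: etcdctl snapshot status (with ETCDCTL_API=3) prints the hash, revision, total keys and size, so a truncated or corrupt snapshot is caught before it is written into /var/lib/etcd. This step is not part of the original procedure, just an optional check:

# Optional: inspect the snapshot before restoring from it
[root@k8s001 ~]# export ETCDCTL_API=3
[root@k8s001 ~]# etcdctl snapshot status /var/etcd-snapshot/20200630_161001_snapshot.db --write-out=table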

2.2.2 Restore the data directory from an etcd snapshot

[root@k8s001 ~]# cd /var/etcd-snapshot/
# Use the most recent snapshot to restore the data directory
[root@k8s001 ~]# export ETCDCTL_API=3
[root@k8s001 ~]# etcdctl snapshot restore 20200630_161001_snapshot.db --data-dir /var/lib/etcd
2020-06-30 16:19:29.789757 I | mvcc: restore compact to 6142751
2020-06-30 16:19:29.807133 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
# Inspect the data directory produced by the snapshot restore
[root@k8s001 etcd-snapshot]# tree /var/lib/etcd/
/var/lib/etcd/
└── member
    ├── snap
    │   ├── 0000000000000001-0000000000000001.snap
    │   └── db
    └── wal
        └── 0000000000000000-0000000000000000.wal
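The restore above relies on etcdctl's defaults, which is why the log shows the member being added with http://localhost:2380. For a single node restarted from its existing systemd unit this worked fine, but etcdctl snapshot restore also accepts flags to stamp the restored member with the real member name and peer URL. A hedged alternative, assuming the member name k8s001 seen in the service configuration below and the default peer port 2380 on 172.16.33.5:

# Alternative restore that rewrites the member metadata explicitly
# (peer URL https://172.16.33.5:2380 is an assumption based on the default peer port)
[root@k8s001 ~]# ETCDCTL_API=3 etcdctl snapshot restore 20200630_161001_snapshot.db \
    --data-dir /var/lib/etcd \
    --name k8s001 \
    --initial-cluster k8s001=https://172.16.33.5:2380 \
    --initial-advertise-peer-urls https://172.16.33.5:2380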
# Start the etcd service
[root@k8s001 etcd-snapshot]# systemctl restart etcd
[root@k8s001 etcd-snapshot]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-06-30 16:20:27 CST; 6s ago
     Docs: https://github.com/coreos
 Main PID: 3069327 (etcd)
    Tasks: 15
   Memory: 17.5M
   CGroup: /system.slice/etcd.service
           └─3069327 /usr/bin/etcd --name=k8s001 --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem --peer-cert-file=/etc/etcd/ssl/etcd.pem --pe...
 
Jun 30 16:20:27 k8s001 etcd[3069327]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Jun 30 16:20:27 k8s001 etcd[3069327]: setting up the initial cluster version to 3.3
Jun 30 16:20:27 k8s001 etcd[3069327]: set the initial cluster version to 3.3
Jun 30 16:20:27 k8s001 etcd[3069327]: published {Name:k8s001 ClientURLs:[https://172.16.33.5:2379]} to cluster cdf818194e3a8c32
Jun 30 16:20:27 k8s001 etcd[3069327]: enabled capabilities for version 3.3
Jun 30 16:20:27 k8s001 etcd[3069327]: ready to serve client requests
Jun 30 16:20:27 k8s001 etcd[3069327]: ready to serve client requests
Jun 30 16:20:27 k8s001 systemd[1]: Started Etcd Server.
Jun 30 16:20:27 k8s001 etcd[3069327]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Jun 30 16:20:27 k8s001 etcd[3069327]: serving client requests on 172.16.33.5:2379
# Check whether any business services were lost; as shown below, they are back to normal
[root@k8s001 etcd-snapshot]# kubectl get pod -n business -o wide
NAME                          READY   STATUS      RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
redis-4ghyausyd-9hejh         1/1     Running     1          28d   172.20.0.30    172.16.33.5   <none>           <none>
mysql-c6994b67c-jx9rb         1/1     Running     0          28d   172.20.0.233   172.16.33.5   <none>           <none>
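Beyond spot-checking the business pods, a couple of quick commands can confirm the whole control plane came back consistently after the restore (not part of the original verification, just a suggested follow-up):

# Nodes should be Ready and the core components healthy
[root@k8s001 ~]# kubectl get nodes
[root@k8s001 ~]# kubectl get componentstatuses
[root@k8s001 ~]# kubectl get pod --all-namespaces -o wide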
