Upgrading a k8s cluster from v1.15.3 to v1.16.0
Upgrade procedure:
1. Check the current k8s version and the latest available version
[root@k8s01 ~]# kubectl get nodes    # check the current cluster nodes and their versions
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 41d v1.15.3
k8s02 Ready <none> 41d v1.15.3
k8s03 Ready <none> 41d v1.15.3
[root@k8s01 ~]# kubectl version    # check the client and server version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s01 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes    # list the kubeadm versions available in the yum repository
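If the repository lists many builds, filtering the output makes it easy to confirm that the 1.16.0 package is actually available. A minimal check, not part of the original session:
yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep 1.16.0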
2. Upgrade kubeadm and check whether the cluster meets the upgrade requirements
[root@k8s01 ~]# yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes    # upgrade the kubeadm package
[root@k8s01 ~]# kubeadm version    # verify the upgraded kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s01 ~]# kubeadm upgrade plan    # check whether the cluster can be upgraded and which component versions the upgrade will install
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.0
W1019 13:11:18.402833 66426 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL " Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:11:18.402860 66426 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest stable version: v1.16.0
W1019 13:11:28.427246 66426 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL " Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:11:28.427289 66426 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest version in the v1.15 series: v1.16.0
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT            CURRENT       AVAILABLE
Kubelet              3 x v1.15.3   v1.16.0
Upgrade to the latest version in the v1.15 series:
COMPONENT            CURRENT       AVAILABLE
API Server           v1.15.3       v1.16.0
Controller Manager   v1.15.3       v1.16.0
Scheduler            v1.15.3       v1.16.0
Kube Proxy           v1.15.3       v1.16.0
CoreDNS              1.3.1         1.6.2
Etcd                 3.3.10        3.3.15-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.16.0
_____________________________________________________________________
[root@k8s01 ~]#
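Optionally, kubeadm can simulate the upgrade without touching the cluster. A sketch using the --dry-run flag of kubeadm upgrade apply; this was not run in the original session:
kubeadm upgrade apply v1.16.0 --dry-run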
3. Download the upgrade images in advance (pre-pulling the base images from a mirror of the Google registry speeds up the upgrade)
[root@k8s01 ~]# cat 16.sh
#!/bin/bash
# download k8s v1.16.0 images
# get the image list with 'kubeadm config images list --kubernetes-version=v1.16.0'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
images=(
kube-apiserver:v1.16.0
kube-controller-manager:v1.16.0
kube-scheduler:v1.16.0
kube-proxy:v1.16.0
pause:3.1
etcd:3.3.15-0
coredns:1.6.2
)
for imageName in ${images[@]};do
  docker pull gcr.azk8s.cn/google-containers/$imageName
  docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
  docker rmi gcr.azk8s.cn/google-containers/$imageName
done
[root@k8s01 ~]# sh 16.sh
v1.16.0: Pulling from google-containers/kube-apiserver
39fafc05754f: Already exists
f7d981e9e2f5: Pull complete
Digest: sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0
gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-apiserver:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd
v1.16.0: Pulling from google-containers/kube-controller-manager
39fafc05754f: Already exists
9fc21167a2c9: Pull complete
Digest: sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0
gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-controller-manager:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e
v1.16.0: Pulling from google-containers/kube-scheduler
39fafc05754f: Already exists
c589747bc37c: Pull complete
Digest: sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0
gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-scheduler:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0
v1.16.0: Pulling from google-containers/kube-proxy
39fafc05754f: Already exists
db3f71d0eb90: Already exists
1531d95908fb: Pull complete
Digest: sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0
gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-proxy:v1.16.0
Untagged: gcr.azk8s.cn/google-containers/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794
3.1: Pulling from google-containers/pause
Digest: sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/pause:3.1
gcr.azk8s.cn/google-containers/pause:3.1
Untagged: gcr.azk8s.cn/google-containers/pause:3.1
Untagged: gcr.azk8s.cn/google-containers/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
3.3.15-0: Pulling from google-containers/etcd
39fafc05754f: Already exists
aee6f172d490: Pull complete
e6aae814a194: Pull complete
Digest: sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/etcd:3.3.15-0
gcr.azk8s.cn/google-containers/etcd:3.3.15-0
Untagged: gcr.azk8s.cn/google-containers/etcd:3.3.15-0
Untagged: gcr.azk8s.cn/google-containers/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa
1.6.2: Pulling from google-containers/coredns
c6568d217a00: Pull complete
3970bc7cbb16: Pull complete
Digest: sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5
Status: Downloaded newer image for gcr.azk8s.cn/google-containers/coredns:1.6.2
gcr.azk8s.cn/google-containers/coredns:1.6.2
Untagged: gcr.azk8s.cn/google-containers/coredns:1.6.2
Untagged: gcr.azk8s.cn/google-containers/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5
[root@k8s01 ~]#
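Before running the upgrade it is worth confirming that the images now carry the k8s.gcr.io tags the control plane expects. A minimal check, not part of the original session:
docker images | grep k8s.gcr.io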
4. Upgrade the k8s control plane (on the master node)
[root@k8s01 ~]# kubeadm upgrade apply v1.16.0 -v 5
I1019 13:37:55.767778 87227 apply.go:118] [upgrade/apply] verifying health of cluster
I1019 13:37:55.767819 87227 apply.go:119] [upgrade/apply] retrieving configuration from cluster
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1019 13:37:55.803144 87227 common.go:122] running preflight checks
[preflight] Running pre-flight checks.
I1019 13:37:55.803169 87227 preflight.go:78] validating if there are any unsupported CoreDNS plugins in the Corefile
I1019 13:37:55.820014 87227 preflight.go:103] validating if migration can be done for the current CoreDNS release.
[upgrade] Making sure the cluster is healthy:
I1019 13:37:55.837178 87227 apply.go:131] [upgrade/apply] validating requested and actual version
I1019 13:37:55.837241 87227 apply.go:147] [upgrade/version] enforcing version skew policies
[upgrade/version] You have chosen to change the cluster version to "v1.16.0"
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
I1019 13:37:58.228724 87227 apply.go:163] [upgrade/apply] creating prepuller
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
I1019 13:38:00.888210 87227 apply.go:174] [upgrade/apply] performing upgrade
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...
Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6
Static pod: kube-controller-manager-k8s01 hash: 9c5db0eef4ba8d433ced5874b5688886
Static pod: kube-scheduler-k8s01 hash: 7d5d3c0a6786e517a8973fa06754cb75
I1019 13:38:00.974269 87227 etcd.go:107] etcd endpoints read from pods: https://192.168.54.128:2379
I1019 13:38:01.033816 87227 etcd.go:156] etcd endpoints read from etcd: https://192.168.54.128:2379
I1019 13:38:01.033856 87227 etcd.go:125] update etcd endpoints: https://192.168.54.128:2379
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d
I1019 13:38:04.475950 87227 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/etcd.yaml"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d
Static pod: etcd-k8s01 hash: af0f40c2a1ce2695115431265406ca0d
Static pod: etcd-k8s01 hash: 8b854fdc3768d8f9aac3dfb09c123400
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
I1019 13:38:35.604122 87227 etcd.go:107] etcd endpoints read from pods: https://192.168.54.128:2379
I1019 13:38:35.618217 87227 etcd.go:156] etcd endpoints read from etcd: https://192.168.54.128:2379
I1019 13:38:35.618242 87227 etcd.go:125] update etcd endpoints: https://192.168.54.128:2379
[upgrade/etcd] Waiting for etcd to become available
I1019 13:38:35.618254 87227 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://192.168.54.128:2379]) are available 1/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650"
I1019 13:38:35.631290 87227 manifests.go:42] [control-plane] creating static Pod files
I1019 13:38:35.631334 87227 manifests.go:91] [control-plane] getting StaticPodSpecs
I1019 13:38:35.639202 87227 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-apiserver.yaml"
I1019 13:38:35.639809 87227 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-controller-manager.yaml"
I1019 13:38:35.640103 87227 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests460824650/kube-scheduler.yaml"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6
Static pod: kube-apiserver-k8s01 hash: 5bfb05e7cb17fe8298d61706cb2263b6
Static pod: kube-apiserver-k8s01 hash: c7a6a6cd079e4034a3258c4d94365d5a
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s01 hash: 9c5db0eef4ba8d433ced5874b5688886
Static pod: kube-controller-manager-k8s01 hash: a174e7fbc474c3449c0ee50ba7220e8e
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-19-13-38-00/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s01 hash: 7d5d3c0a6786e517a8973fa06754cb75
Static pod: kube-scheduler-k8s01 hash: b8e7c07b524b78e0b03577d5f61f79ef
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
I1019 13:38:54.548181 87227 apply.go:180] [upgrade/postupgrade] upgrading RBAC rules and addons
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1019 13:38:54.670273 87227 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s01" as an annotation
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1019 13:38:55.222748 87227 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1019 13:38:55.376508 87227 request.go:538] Throttling request took 145.385879ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/kubeadm:bootstrap-signer-clusterinfo
I1019 13:38:55.576272 87227 request.go:538] Throttling request took 197.037818ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings
I1019 13:38:55.776513 87227 request.go:538] Throttling request took 196.268563ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/kubeadm:bootstrap-signer-clusterinfo
I1019 13:38:55.976281 87227 request.go:538] Throttling request took 181.099838ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterroles
I1019 13:38:56.176460 87227 request.go:538] Throttling request took 184.088286ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:coredns
I1019 13:38:56.376299 87227 request.go:538] Throttling request took 196.69921ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings
I1019 13:38:56.576335 87227 request.go:538] Throttling request took 187.238443ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:coredns
[addons] Applied essential addon: CoreDNS
I1019 13:38:56.776530 87227 request.go:538] Throttling request took 122.128829ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings
I1019 13:38:56.976951 87227 request.go:538] Throttling request took 194.821608ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubeadm:node-proxier
I1019 13:38:57.176476 87227 request.go:538] Throttling request took 192.024614ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles
I1019 13:38:57.376234 87227 request.go:538] Throttling request took 180.872083ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kube-proxy
I1019 13:38:57.576309 87227 request.go:538] Throttling request took 197.323877ms, request: POST:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings
I1019 13:38:57.776253 87227 request.go:538] Throttling request took 190.156387ms, request: PUT:https://192.168.54.128:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kube-proxy
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@k8s01 ~]#
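To double-check that the static pods were switched to the 1.16.0 images, the manifests on the master can be inspected. A minimal sketch, not part of the original session:
grep 'image:' /etc/kubernetes/manifests/*.yaml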
5. If the cluster has more than one master node, upgrade the remaining masters (skip this step with a single master)
[root@k8s01 ~]# kubeadm upgrade plan    # re-check the upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.16.0
W1019 13:46:26.923622 92337 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL " Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:46:26.923660 92337 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest stable version: v1.16.0
W1019 13:46:36.952719 92337 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL " Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1019 13:46:36.952743 92337 version.go:102] falling back to the local client version: v1.16.0
[upgrade/versions] Latest version in the v1.16 series: v1.16.0
Awesome, you're up-to-date! Enjoy!
[root@k8s01 ~]# kubeadm upgrade node    # run on each additional master node; shown here on k8s01, which is already upgraded, so kubeadm reports the manifests as up to date
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.16.0"...
Static pod: kube-apiserver-k8s01 hash: c7a6a6cd079e4034a3258c4d94365d5a
Static pod: kube-controller-manager-k8s01 hash: a174e7fbc474c3449c0ee50ba7220e8e
Static pod: kube-scheduler-k8s01 hash: b8e7c07b524b78e0b03577d5f61f79ef
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests902864317"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@k8s01 ~]#
6. Upgrade kubelet and kubectl on every master node (with a single master this is just the one node)
[root@k8s01 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
[root@k8s01 ~]# systemctl daemon-reload
[root@k8s01 ~]# systemctl restart kubelet
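After restarting the kubelet, the master should report the new version. An optional check, assuming it is run on k8s01:
kubelet --version
kubectl get node k8s01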
7. Upgrade kubeadm on the worker nodes (run on every worker node)
[root@k8s02 ~]# yum install -y kubeadm-1.16.0 --disableexcludes=kubernetes
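A quick sanity check on each worker, not part of the original session, confirms the kubeadm binary was really updated before continuing:
kubeadm version -o short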
8. Mark the worker node as unschedulable and drain it for maintenance before upgrading (run on the master)
[root@k8s01 ~]# kubectl drain k8s02 --ignore-daemonsets    # upgrade k8s02 first; repeat this for every worker node. If the drain fails, add the --delete-local-data flag suggested by the error message (see the sketch after this step)
node/k8s02 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-888x6, kube-system/kube-proxy-xm9fn
evicting pod "tiller-deploy-8557598fbc-x96gq"
evicting pod "myapp-5f57d6857b-xgj8l"
evicting pod "coredns-5644d7b6d9-lmpd5"
evicting pod "metrics-server-6b445cb696-zp94w"
evicting pod "myapp-5f57d6857b-2g8ss"
pod/tiller-deploy-8557598fbc-x96gq evicted
pod/metrics-server-6b445cb696-zp94w evicted
pod/myapp-5f57d6857b-2g8ss evicted
pod/coredns-5644d7b6d9-lmpd5 evicted
pod/myapp-5f57d6857b-xgj8l evicted
node/k8s02 evicted
[root@k8s01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 41d v1.16.0
k8s02 Ready,SchedulingDisabled <none> 41d v1.15.3
k8s03 Ready <none> 41d v1.15.3
[root@k8s01 ~]#
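If the drain is blocked by pods that use emptyDir volumes, the error message suggests adding --delete-local-data. A hypothetical variant of the drain above (not needed in this session, since the eviction succeeded):
kubectl drain k8s02 --ignore-daemonsets --delete-local-data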
9. Upgrade kubelet and kubectl on node k8s02
[root@k8s02 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@k8s02 ~]# yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
[root@k8s02 ~]# systemctl daemon-reload
[root@k8s02 ~]# systemctl restart kubelet
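Before uncordoning, it can be verified from the master that k8s02 already reports the new kubelet version. An optional check, assuming the node name k8s02:
kubectl get node k8s02 -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'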
10. Uncordon node k8s02 from the master so it becomes schedulable again
[root@k8s01 ~]# kubectl uncordon k8s02
node/k8s02 uncordoned
[root@k8s01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 41d v1.16.0
k8s02 Ready <none> 41d v1.16.0
k8s03 Ready <none> 41d v1.15.3
[root@k8s01 ~]#
11. Repeat steps 8 through 10 to upgrade node k8s03
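For reference, the condensed command sequence for k8s03, mirroring steps 8 through 10 (kubeadm itself was already upgraded on all workers in step 7):
# on the master
kubectl drain k8s03 --ignore-daemonsets
# on k8s03
kubeadm upgrade node
yum install -y kubelet-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
# back on the master
kubectl uncordon k8s03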
12. Check the state of the whole cluster
[root@k8s01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-8wvgt 1/1 Running 0 11m
coredns-5644d7b6d9-pzr7g 1/1 Running 0 11m
coredns-5c98db65d4-rtktb 1/1 Running 0 11m
etcd-k8s01 1/1 Running 0 34m
kube-apiserver-k8s01 1/1 Running 0 34m
kube-controller-manager-k8s01 1/1 Running 0 34m
kube-flannel-ds-amd64-888x6 1/1 Running 5 41d
kube-flannel-ds-amd64-d648v 1/1 Running 15 41d
kube-flannel-ds-amd64-rc9bc 1/1 Running 2 46h
kube-proxy-d4rd5 1/1 Running 1 46h
kube-proxy-wtk2j 1/1 Running 11 41d
kube-proxy-xm9fn 1/1 Running 0 45m
kube-scheduler-k8s01 1/1 Running 0 34m
metrics-server-6b445cb696-65r5k 1/1 Running 0 11m
tiller-deploy-8557598fbc-6jfp7 1/1 Running 0 11m
[root@k8s01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 41d v1.16.0
k8s02 Ready <none> 41d v1.16.0
k8s03 Ready <none> 41d v1.16.0
[root@k8s01 ~]#
Troubleshooting:
Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.701679 653 pod_workers.go:191] Error syncing pod e641b551-7f22-40fa-b847-658f6c7696fa ("tiller-deploy-8557598fbc-6jfp7_kube-system(e641b551-7f22-40fa-b847-658f6c7696fa)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.702091 653 pod_workers.go:191] Error syncing pod bd45bbe0-8529-4ee4-9fcf-90528178dc0d ("coredns-5c98db65d4-rtktb_kube-system(bd45bbe0-8529-4ee4-9fcf-90528178dc0d)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Oct 19 14:46:20 k8s01 kubelet[653]: E1019 14:46:20.702396 653 pod_workers.go:191] Error syncing pod 87d24c8c-bba8-420b-8901-9e2b8bc339ac ("coredns-5644d7b6d9-8wvgt_kube-system(87d24c8c-bba8-420b-8901-9e2b8bc339ac)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Fix (if the CNI plugin reports errors like the above, reinstall flannel):
[root@k8s01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s01 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 configured
daemonset.apps/kube-flannel-ds-arm64 configured
daemonset.apps/kube-flannel-ds-arm configured
daemonset.apps/kube-flannel-ds-ppc64le configured
daemonset.apps/kube-flannel-ds-s390x configured
[root@k8s01 ~]#
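After reapplying the manifest, the flannel DaemonSet pods should return to Running and the affected nodes should become Ready again. A quick check, assuming flannel's standard app=flannel pod label:
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes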