Upgrading Kubernetes to 1.15

Mon Jul 1, 2019

1000 Words|Read in about 5 Min
Tags: Devops   k8s  

Highlights of the Kubernetes v1.15 release:
kubeadm certificate management becomes more robust in 1.15: kubeadm now seamlessly rotates all certificates (during an upgrade) before they expire. For details on managing certificates, see the kubeadm documentation.
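
If you want to see what that means in practice, you can check the expiration dates of the kubeadm-managed certificates before and after the upgrade. A minimal sketch, assuming kubeadm v1.15, where this command still lives under the alpha subcommand:

# run on the master node; lists every kubeadm-managed certificate and its expiry date
kubeadm alpha certs check-expiration

# a single certificate can also be inspected directly with openssl
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate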

Kubernetes 1.15 consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The main themes of this release are:

Continuous Improvement
Project sustainability is not just about features. Many SIGs have been working on improving test coverage, keeping the basics reliable, stabilizing the core feature set, maturing existing features, and cleaning up the backlog.
Extensibility
The community has been asking for continuing support of extensibility, so this cycle features more work around CRDs and API Machinery. Most of the enhancements in this cycle were from SIG API Machinery and related areas.
For full details, see:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#115-whats-new

master:

1. First, upgrade kubeadm, kubelet, and kubectl

# CentOS 7.x
yum install -y kubelet kubeadm kubectl

# or, on Ubuntu 16.04 / 18.04
apt install -y kubelet kubeadm kubectl
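
The commands above simply pull the newest packages in the repo. If you want to be explicit about the target release, you can pin the package version instead; a sketch assuming the official Kubernetes yum/apt repositories (the exact 1.15.0-0 / 1.15.0-00 package revisions are assumptions, check them with yum list --showduplicates or apt-cache madison first):

# CentOS 7.x: pin the exact release
yum install -y kubelet-1.15.0-0 kubeadm-1.15.0-0 kubectl-1.15.0-0

# Ubuntu 16.04 / 18.04: pin the exact release
apt-get install -y kubelet=1.15.0-00 kubeadm=1.15.0-00 kubectl=1.15.0-00

With the new kubeadm in place, check what the cluster can be upgraded to:
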
root@node20:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0

Make sure your master node can pull the images hosted outside China (k8s.gcr.io). If it cannot, pull them in advance on a machine that can reach those registries and load them onto the master node.
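
A minimal sketch of how that might look, assuming Docker as the container runtime and a jump host that can reach k8s.gcr.io (the hostnames and file names below are placeholders):

# on the master node: list / pre-pull the images this release needs
kubeadm config images list --kubernetes-version v1.15.0
kubeadm config images pull --kubernetes-version v1.15.0

# if pulling fails, do it on a host with access, then ship the images over
docker save k8s.gcr.io/kube-apiserver:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0 \
  k8s.gcr.io/kube-scheduler:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0 -o k8s-v1.15.0.tar
scp k8s-v1.15.0.tar root@node20:/tmp/

# back on the master node: load them into the local image cache
docker load -i /tmp/k8s-v1.15.0.tar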

COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.14.0   v1.15.0
            1 x v1.14.1   v1.15.0
            1 x v1.15.0   v1.15.0

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.0
Controller Manager   v1.14.0   v1.15.0
Scheduler            v1.14.0   v1.15.0
Kube Proxy           v1.14.0   v1.15.0
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.15.0
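
If you want a rehearsal first, kubeadm can show what it would do without changing anything; a quick check, assuming the --dry-run flag of kubeadm upgrade apply (present in recent kubeadm releases):

kubeadm upgrade apply v1.15.0 --dry-run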

Run:

kubeadm upgrade apply v1.15.0

You should then see output similar to the following:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.0"...
Static pod: kube-apiserver-node20 hash: abf03cd700e41a72b6ed17c8f22f2b1c
Static pod: kube-controller-manager-node20 hash: ab6c58bb7e8650fee56e97401fb72f03
Static pod: kube-scheduler-node20 hash: b9b98173c3f4bbf002d9b1d0d7e3328f
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests437209780"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-01-16-06-10/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node20 hash: abf03cd700e41a72b6ed17c8f22f2b1c
Static pod: kube-apiserver-node20 hash: abf03cd700e41a72b6ed17c8f22f2b1c
Static pod: kube-apiserver-node20 hash: abf03cd700e41a72b6ed17c8f22f2b1c
Static pod: kube-apiserver-node20 hash: 6cd4bd360d0cf02d1c0e07c586905d6a
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-01-16-06-10/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node20 hash: ab6c58bb7e8650fee56e97401fb72f03
Static pod: kube-controller-manager-node20 hash: 50fc181fd2b3cc384fd8c5b5e286c1c4
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-01-16-06-10/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node20 hash: b9b98173c3f4bbf002d9b1d0d7e3328f
Static pod: kube-scheduler-node20 hash: 31d9ee8b7fb12e797dc981a8686f6b2b
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

The master node has now been upgraded successfully!

You can restart the kubelet at this point (optional):
systemctl daemon-reload
systemctl restart kubelet
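
To sanity-check that the control plane came back on the new version, look at the static pods and the reported server version; a quick check (node20 is this cluster's master, adjust the name for yours):

# control-plane pods on the master should be Running with v1.15.0 images
kubectl -n kube-system get pods -o wide | grep node20

# the server version should now report v1.15.0
kubectl version --short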

If you have additional master (control-plane) nodes, upgrade each of them by running:

kubeadm upgrade node control-plane
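
On each additional control-plane node the full sequence mirrors the first master; a sketch following the same steps as above:

# upgrade the packages first
yum install -y kubelet kubeadm kubectl    # or: apt install -y kubelet kubeadm kubectl

# upgrade this node's control-plane static pods and kubelet config
kubeadm upgrade node control-plane

# restart the kubelet to pick up the new binary
systemctl daemon-reload
systemctl restart kubelet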

node:

# CentOS 7.x
yum install -y kubelet kubeadm kubectl

# or, on Ubuntu 16.04 / 18.04
apt install -y kubelet kubeadm kubectl

systemctl daemon-reload
systemctl restart kubelet

Now upgrade the worker nodes:

kubeadm upgrade node
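
If the workloads on a node are sensitive to disruption, consider draining it before the upgrade and uncordoning it afterwards; a sketch using node21 as an example, run from a machine with kubectl access:

# evict regular pods from the node before upgrading it
kubectl drain node21 --ignore-daemonsets --delete-local-data

# ... upgrade the packages, run kubeadm upgrade node, restart the kubelet ...

# let the node receive pods again
kubectl uncordon node21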

Verify the result of the upgrade:

➜  ~ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node20   Ready    master   84d   v1.15.0
node21   Ready    <none>   84d   v1.15.0
node22   Ready    <none>   82d   v1.15.0
node23   Ready    <none>   55d   v1.15.0
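
Beyond the node versions, it is worth confirming that the system components are healthy; a quick check (nothing here is cluster-specific):

# all kube-system pods should be Running after the upgrade
kubectl -n kube-system get pods

# scheduler, controller-manager and etcd should report Healthy
kubectl get componentstatuses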
