Upgrade the Control Plane

After installing a Kubernetes cluster, the natural next step is to learn how to upgrade it using the kubeadm installation and configuration tooling. Keep in mind that a cluster can only be upgraded one minor version at a time, from version N to N+1.

In the context of this blog, we assume that we have a minimal Kubernetes cluster at our disposal, with one master and two worker nodes.
Everything is hosted on a Debian distribution, with a rather minimalist sizing of 2 CPUs and 4 GB of RAM per node.

Our current Kubernetes version is 1.24, meaning we can only upgrade to version 1.25. In this blog, I will deliberately attempt to upgrade the cluster from version 1.24 straight to version 1.26, which lets us see how to correct the situation by downgrading kubeadm. Let’s go ahead and get started.
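
Before touching anything, it’s also prudent to back up etcd, since all of the cluster state lives there. A minimal sketch, assuming a stacked-etcd kubeadm setup with the default certificate paths and etcdctl installed on the master (adapt the snapshot path to your environment):

sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key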

Let’s start by upgrading our kubeadm version first!
As a reminder, kubeadm is the tool we use to install and configure our Kubernetes cluster.
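
If you want to see which kubeadm versions are available before pinning one (assuming the Kubernetes APT repository is already configured, as it is on our nodes), apt can list them:

apt-cache madison kubeadm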

sudo apt-get install -y --allow-change-held-packages kubeadm=1.26.0-00
dbinla@dbisbx-master01:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.26.0-00
[sudo] password for dbinla:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
  kubeadm
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
Need to get 9730 kB of archives.
After this operation, 2396 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.26.0-00 [9730 kB]
Fetched 9730 kB in 1s (12.5 MB/s)
(Reading database ... 195447 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.26.0-00_amd64.deb ...
Unpacking kubeadm (1.26.0-00) over (1.24.0-00) ...
Setting up kubeadm (1.26.0-00) ...
dbinla@dbisbx-master01:~$

Once the package is installed, double-check the tool version; it’s important to ensure that everything is up to date!

kubeadm version
dbinla@dbisbx-master01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:57:06Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
dbinla@dbisbx-master01:~$

Next, we’ll take our first node, the control plane, out of service by draining it. Draining evicts its workloads and cordons the node; in other words, it isolates the node from the cluster without removing it.

kubectl drain <ControlPlane> --ignore-daemonsets
dbinla@dbisbx-master01:~$ kubectl drain dbisbx-master01 --ignore-daemonsets
node/dbisbx-master01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2f8xk, kube-system/kube-proxy-jsjjc
evicting pod kube-system/calico-kube-controllers-84c476996d-px2xc
evicting pod kube-system/coredns-6d4b75cb6d-9jg7k
evicting pod kube-system/coredns-6d4b75cb6d-2zp45
pod/calico-kube-controllers-84c476996d-px2xc evicted
pod/coredns-6d4b75cb6d-2zp45 evicted
pod/coredns-6d4b75cb6d-9jg7k evicted
node/dbisbx-master01 drained
dbinla@dbisbx-master01:~$

Alright, now we’re going to plan the cluster upgrade.
The command below analyses our cluster, taking into account the current node version, configuration, and add-ons, and generates a detailed plan for safely upgrading to the chosen target version.
In our case, it will also point out that our kubeadm version is wrong, since a cluster upgrade can only go one minor version at a time.

sudo kubeadm upgrade plan v1.26.0
dbinla@dbisbx-master01:~$ sudo kubeadm upgrade plan v1.26.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version >= 1.25.0. Current version: v1.24.0
To see the stack trace of this error execute with --v=5 or higher

The upgrade plan command returns a FATAL error indicating that this version of kubeadm only supports a control plane version >= 1.25.0, while the cluster is running v1.24.0.
But don’t worry, downgrading the kubeadm package is the standard fix for this problem, and it’s a simple process that takes just one command:

sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00 --allow-downgrades
dbinla@dbisbx-master01:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00 --allow-downgrades
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be DOWNGRADED:
  kubeadm
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 14 not upgraded.
Need to get 9213 kB of archives.
After this operation, 2974 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.25.0-00 [9213 kB]
Fetched 9213 kB in 1s (7367 kB/s)
dpkg: warning: downgrading kubeadm from 1.26.0-00 to 1.25.0-00
(Reading database ... 195447 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.25.0-00_amd64.deb ...
Unpacking kubeadm (1.25.0-00) over (1.26.0-00) ...
Setting up kubeadm (1.25.0-00) ...
dbinla@dbisbx-master01:~$

Once we have successfully downgraded kubeadm, we re-run the “upgrade plan” command to perform the pre-checks and obtain the green light to begin the upgrade.
This step is crucial, as it ensures that the cluster is stable and that all the necessary prerequisites for the upgrade have been met. It also helps identify some potential issues before upgrading, saving us time and effort in the long run.

sudo kubeadm upgrade plan v1.25.0
dbinla@dbisbx-master01:~$ sudo kubeadm upgrade plan v1.25.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.0
[upgrade/versions] kubeadm version: v1.25.0
[upgrade/versions] Target version: v1.25.0
[upgrade/versions] Latest version in the v1.24 series: v1.25.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.24.0   v1.25.0

Upgrade to the latest version in the v1.24 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.0   v1.25.0
kube-controller-manager   v1.24.0   v1.25.0
kube-scheduler            v1.24.0   v1.25.0
kube-proxy                v1.24.0   v1.25.0
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.4-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.25.0

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

dbinla@dbisbx-master01:~$

It’s always a relief when everything checks out during the pre-upgrade verification.
Now we can move forward with the actual upgrade by running the following command, which upgrades all the components of the control plane, including the Kubernetes API server, the etcd database, and the controller manager:

sudo kubeadm upgrade apply v1.25.0
dbinla@dbisbx-master01:~$ sudo kubeadm upgrade apply v1.25.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.25.0"
[upgrade/versions] Cluster version: v1.24.0
[upgrade/versions] kubeadm version: v1.25.0
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.25.0" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-01-13-15-22-52/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
E0113 15:23:18.784912   45754 request.go:977] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3801252369"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-01-13-15-22-52/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-01-13-15-22-52/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-01-13-15-22-52/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the old taint &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} from all control plane Nodes. After this step only the &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} taint will be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
dbinla@dbisbx-master01:~$

We must see the “[upgrade/successful]” confirmation message indicating that the upgrade process completed successfully. We should also carefully review the output log for any errors that may have occurred along the way.

By checking these indicators, we can be confident that our Kubernetes cluster runs on the right version and is stable and secure. It’s also worth noting that keeping track of the upgrade process and reviewing the output log helps identify potential issues early so we can take corrective action as needed.
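
For an extra sanity check beyond the log, one quick way (a sketch, not the only one) is to list the control plane pod images and confirm they carry the new version tag:

kubectl -n kube-system get pods -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[0].image' | grep -E 'kube-(apiserver|controller-manager|scheduler)'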

Then we upgrade kubelet and kubectl on the control plane node through their APT packages:

sudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
dbinla@dbisbx-master01:~$ sudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
  kubectl kubelet
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.
Need to get 29.0 MB of archives.
After this operation, 2825 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.25.0-00 [9500 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.25.0-00 [19.5 MB]
Fetched 29.0 MB in 5s (5792 kB/s)
(Reading database ... 195447 files and directories currently installed.)
Preparing to unpack .../kubectl_1.25.0-00_amd64.deb ...
Unpacking kubectl (1.25.0-00) over (1.24.0-00) ...
Preparing to unpack .../kubelet_1.25.0-00_amd64.deb ...
Unpacking kubelet (1.25.0-00) over (1.24.0-00) ...
Setting up kubectl (1.25.0-00) ...
Setting up kubelet (1.25.0-00) ...
dbinla@dbisbx-master01:~$

After downloading and upgrading the packages, all that remains is to restart the kubelet service and check the node statuses:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
dbinla@dbisbx-master01:~$ sudo systemctl daemon-reload
dbinla@dbisbx-master01:~$ sudo systemctl restart kubelet
dbinla@dbisbx-master01:~$ kubectl get nodes
NAME              STATUS                     ROLES           AGE     VERSION
dbisbx-master01   Ready,SchedulingDisabled   control-plane   23h     v1.25.0
dbisbx-worker01   Ready                      <none>          5h58m   v1.24.0
dbisbx-worker02   Ready                      <none>          5h54m   v1.24.0

This should return a “Ready,SchedulingDisabled” status, as the node is still cordoned.
The next step, as you may have guessed, is to bring the node back into the scheduling rotation, simply by performing an “uncordon” of the control plane:

dbinla@dbisbx-master01:~$ kubectl uncordon dbisbx-master01
node/dbisbx-master01 uncordoned

Now you should see the control plane in “Ready” status:

kubectl get nodes
dbinla@dbisbx-master01:~$ kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
dbisbx-master01   Ready    control-plane   23h     v1.25.0
dbisbx-worker01   Ready    <none>          5h58m   v1.24.0
dbisbx-worker02   Ready    <none>          5h54m   v1.24.0
dbinla@dbisbx-master01:~$

If it shows a NotReady status, rerun the command after a minute or so; it should settle into the “Ready” status before long.
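
If you prefer not to rerun the command manually, kubectl can also watch for the transition:

kubectl get nodes -w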

Upgrade the Worker Nodes

Now that we’ve upgraded the control plane, the next step is to upgrade the worker nodes by following the same steps for each, which are as follows (condensed into a single sketch after this list):

  • Drain the node
  • Upgrade kubeadm
  • Upgrade the kubelet configuration
  • Restart the kubelet
  • Uncordon the node
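
For reference, here is the whole worker sequence condensed into one sketch. NODE and VERSION are placeholders to adapt; the drain and uncordon commands run from the control plane, while the package and kubelet steps run on the worker itself:

NODE=dbisbx-worker01
VERSION=1.25.0-00

# From the control plane: evict the workloads and cordon the node
kubectl drain "$NODE" --ignore-daemonsets --force

# On the worker: upgrade kubeadm, the kubelet configuration, then the packages
sudo apt-get update
sudo apt-get install -y --allow-change-held-packages kubeadm=$VERSION
sudo kubeadm upgrade node
sudo apt-get install -y --allow-change-held-packages kubelet=$VERSION kubectl=$VERSION
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Back on the control plane: make the node schedulable again
kubectl uncordon "$NODE"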

Worker Node 1

Run the following on the control plane node to drain worker node 1:

kubectl drain dbisbx-worker01 --ignore-daemonsets --force
dbinla@dbisbx-master01:~$ kubectl drain dbisbx-worker01 --ignore-daemonsets --force
node/dbisbx-worker01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-tv9rj, kube-system/kube-proxy-9889z
evicting pod kube-system/coredns-565d847f94-vfkxs
evicting pod kube-system/calico-kube-controllers-84c476996d-zgrp6
pod/calico-kube-controllers-84c476996d-zgrp6 evicted
pod/coredns-565d847f94-vfkxs evicted
node/dbisbx-worker01 drained
dbinla@dbisbx-master01:~$

You may get an error message that certain pods couldn’t be deleted, which is fine.
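
For instance, kubectl refuses by default to evict pods that use emptyDir volumes. If the drain is blocked by such pods, re-running it with the --delete-emptydir-data flag (which deletes that local data, so use it knowingly) usually unblocks it:

kubectl drain dbisbx-worker01 --ignore-daemonsets --force --delete-emptydir-data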

Upgrade kubeadm on worker node 1:

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
dbinla@dbisbx-worker01:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
[sudo] password for dbinla:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
  kubeadm
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
Need to get 9213 kB of archives.
After this operation, 578 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.25.0-00 [9213 kB]
Fetched 9213 kB in 2s (5799 kB/s)
(Reading database ... 195442 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.25.0-00_amd64.deb ...
Unpacking kubeadm (1.25.0-00) over (1.24.0-00) ...
Setting up kubeadm (1.25.0-00) ...
dbinla@dbisbx-worker01:~$

Make sure it is upgraded correctly:

kubeadm version
dbinla@dbisbx-worker01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:43:25Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
dbinla@dbisbx-worker01:~$

Upgrade the kubelet configuration on the worker node:

sudo kubeadm upgrade node
dbinla@dbisbx-worker01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
dbinla@dbisbx-worker01:~$

Mark the kubelet, kubeadm, and kubectl packages as “held” to prevent them from being automatically upgraded:

sudo apt-mark hold kubelet kubeadm kubectl
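
To verify the hold took effect, apt-mark can list the held packages:

apt-mark showhold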

Restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
dbinla@dbisbx-worker01:~$ sudo systemctl daemon-reload
dbinla@dbisbx-worker01:~$ sudo systemctl restart kubelet

From the control plane node, uncordon worker node 1:

kubectl uncordon dbisbx-worker01

kubectl get nodes
dbinla@dbisbx-master01:~$ kubectl uncordon dbisbx-worker01
node/dbisbx-worker01 uncordoned
dbinla@dbisbx-master01:~$ kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
dbisbx-master01   Ready    control-plane   23h     v1.25.0
dbisbx-worker01   Ready    <none>          6h13m   v1.25.0
dbisbx-worker02   Ready    <none>          6h9m    v1.24.0
dbinla@dbisbx-master01:~$

Worker Node 2

Here we go again. We will repeat the same steps as for worker01, isolating the node from the cluster and upgrading it safely.
From the control plane node, drain worker node 2:

kubectl drain dbisbx-worker02 --ignore-daemonsets --force

dbinla@dbisbx-master01:~$ kubectl drain dbisbx-worker02 --ignore-daemonsets --force
node/dbisbx-worker02 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-4vkp5, kube-system/kube-proxy-55nks
evicting pod kube-system/coredns-565d847f94-hmr7t
evicting pod kube-system/calico-kube-controllers-84c476996d-g4bqj
pod/calico-kube-controllers-84c476996d-g4bqj evicted
pod/coredns-565d847f94-hmr7t evicted
node/dbisbx-worker02 drained

Of course, we make sure that the node is correctly set to “SchedulingDisabled”. This is a critical step in the process of isolating the node from the cluster.

kubectl get nodes
dbinla@dbisbx-master01:~$ kubectl get nodes
NAME              STATUS                     ROLES           AGE     VERSION
dbisbx-master01   Ready                      control-plane   23h     v1.25.0
dbisbx-worker01   Ready                      <none>          6h16m   v1.25.0
dbisbx-worker02   Ready,SchedulingDisabled   <none>          6h12m   v1.24.0
dbinla@dbisbx-master01:~$

In a new terminal window, log into your worker node 2:
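
For example, assuming SSH access with the same user account:

ssh dbinla@dbisbx-worker02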

Now we install the kubeadm package at version 1.25.0-00, making sure to allow changes to held packages:

sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
dbinla@dbisbx-worker02:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
[sudo] password for dbinla:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
  kubeadm
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
Need to get 9213 kB of archives.
After this operation, 578 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.25.0-00 [9213 kB]
Fetched 9213 kB in 1s (11.9 MB/s)
(Reading database ... 195446 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.25.0-00_amd64.deb ...
Unpacking kubeadm (1.25.0-00) over (1.24.0-00) ...
Setting up kubeadm (1.25.0-00) ...
dbinla@dbisbx-worker02:~$

Once the package has been downloaded and installed, we check the version to ensure that the package upgrade was successful and that we are indeed running the desired version of kubeadm:

kubeadm version
dbinla@dbisbx-worker02:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:43:25Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
dbinla@dbisbx-worker02:~$

And now, all that’s left is to initiate the node upgrade:

sudo kubeadm upgrade node
dbinla@dbisbx-worker02:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
dbinla@dbisbx-worker02:~$

So far, so good! Everything is progressing smoothly with the upgrade process.
We have confirmation that the configuration for this node was successfully updated, and we can go ahead and upgrade the kubelet package.

sudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
dbinla@dbisbx-worker02:~$ sudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
  kubectl kubelet
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
Need to get 29.0 MB of archives.
After this operation, 2825 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.25.0-00 [9500 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.25.0-00 [19.5 MB]
Fetched 29.0 MB in 1s (23.2 MB/s)
(Reading database ... 195446 files and directories currently installed.)
Preparing to unpack .../kubectl_1.25.0-00_amd64.deb ...
Unpacking kubectl (1.25.0-00) over (1.24.0-00) ...
Preparing to unpack .../kubelet_1.25.0-00_amd64.deb ...
Unpacking kubelet (1.25.0-00) over (1.24.0-00) ...
Setting up kubectl (1.25.0-00) ...
Setting up kubelet (1.25.0-00) ...
dbinla@dbisbx-worker02:~$

Mark the kubelet, kubeadm, and kubectl packages as “held” to prevent them from being automatically upgraded:

sudo apt-mark hold kubelet kubeadm kubectl

After completing the upgrade process, we will reload the systemd manager configuration files and restart the kubelet service.

sudo systemctl daemon-reload
sudo systemctl restart kubelet
dbinla@dbisbx-worker02:~$ sudo systemctl daemon-reload
dbinla@dbisbx-worker02:~$ sudo systemctl restart kubelet

Now that we’ve completed the upgrade process and restarted the kubelet service, we’re ready to reintegrate our node into the cluster.
From the control plane node, uncordon worker node 2 to remove the SchedulingDisabled state and allow Kubernetes to schedule new pods onto it once again.

kubectl uncordon dbisbx-worker02
dbinla@dbisbx-master01:~$ kubectl uncordon dbisbx-worker02
node/dbisbx-worker02 uncordoned
dbinla@dbisbx-master01:~$

Still on the control plane node, verify the cluster is upgraded and working:

kubectl get nodes
dbinla@dbisbx-master01:~$ kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
dbisbx-master01   Ready    control-plane   23h     v1.25.0
dbisbx-worker01   Ready    <none>          6h23m   v1.25.0
dbisbx-worker02   Ready    <none>          6h19m   v1.25.0
dbinla@dbisbx-master01:~$

If they show a NotReady status, don’t worry; it takes a minute or two for a node to transition into the “Ready” status. Just rerun the command after a minute or so.
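
If a node stays NotReady for longer than that, two quick places to look are the node description from the control plane and the kubelet logs on the node itself (adapt the node name):

kubectl describe node dbisbx-worker02
sudo journalctl -u kubelet --since "10 minutes ago"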

Conclusion

We’ve completed the upgrade process for our cluster! As you may have noticed, the process itself is relatively simple. The kubeadm tooling guides us smoothly through the upgrade, and, as we saw, recovering from picking the wrong target version is as easy as downgrading the kubeadm package.