{"id":21603,"date":"2023-03-28T11:04:11","date_gmt":"2023-03-28T09:04:11","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=21603"},"modified":"2023-03-31T14:21:59","modified_gmt":"2023-03-31T12:21:59","slug":"lets-upgrade-kubernetes-with-kubeadm","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/lets-upgrade-kubernetes-with-kubeadm\/","title":{"rendered":"Let&#8217;s upgrade Kubernetes with kubeadm"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">Upgrade the Control Plane<\/h3>\n\n\n\n<p>After <a href=\"https:\/\/www.dbi-services.com\/blog\/how-to-build-a-kubernetes-cluster-with-kubeadm\/\" target=\"_blank\" rel=\"noreferrer noopener\">installing a Kubernetes cluster<\/a>, I suggest we continue by upgrading the cluster with the kubeadm installation and configuration tool. Keep in mind that a cluster is upgraded one minor version at a time (from version N to version N+1). <\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em>In the context of this blog, we assume that we have a minimal Kubernetes cluster, with one Master and two Worker nodes, at our disposal.\nEverything is hosted on a Debian distribution, with a rather minimalist sizing of 2 CPUs and 4GB of RAM each.<\/em>\n<\/pre>\n\n\n\n<p>Our current version of Kubernetes is 1.24, meaning we can only upgrade to version 1.25. In this blog, I will deliberately try to upgrade the cluster from version 1.24 straight to version 1.26, which will let us see how to correct the situation by downgrading kubeadm. 
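<\/p>\n\n\n\n<p>Since this skew rule is easy to overlook, here is a minimal shell sketch of the check (my own helper, not part of kubeadm); the version strings are just examples:<\/p>\n\n\n\n

```shell
# Hypothetical helper: enforce the one-minor-version upgrade rule.
minor() { echo "$1" | cut -d. -f2; }

can_upgrade() {
  local cur tgt
  cur=$(minor "$1")
  tgt=$(minor "$2")
  # Allow same minor or exactly one minor version ahead.
  [ $((tgt - cur)) -ge 0 ] && [ $((tgt - cur)) -le 1 ]
}

can_upgrade "v1.24.0" "v1.25.0" && echo "v1.24 -> v1.25: allowed"
can_upgrade "v1.24.0" "v1.26.0" || echo "v1.24 -> v1.26: refused (skips v1.25)"
```

\n\n\n\n<p>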
Let&#8217;s go ahead and get started.<br><\/p>\n\n\n\n<p>Let&#8217;s start by upgrading our kubeadm version first!<br>To remind you, kubeadm is a tool that allows us to install and configure our Kubernetes cluster.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt-get install -y --allow-change-held-packages kubeadm=1.26.0-00\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.26.0-00\n&#091;sudo] password for dbinla:\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following held packages will be changed:\n  kubeadm\nThe following packages will be upgraded:\n  kubeadm\n1 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.\nNeed to get 9730 kB of archives.\nAfter this operation, 2396 kB of additional disk space will be used.\nGet:1 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubeadm amd64 1.26.0-00 &#091;9730 kB]\nFetched 9730 kB in 1s (12.5 MB\/s)\n(Reading database ... 
195447 files and directories currently installed.)\nPreparing to unpack ...\/kubeadm_1.26.0-00_amd64.deb ...\nUnpacking kubeadm (1.26.0-00) over (1.24.0-00) ...\nSetting up kubeadm (1.26.0-00) ...\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>Once the package is installed, double-check the tool version; it&#8217;s important to ensure that everything is up to date!<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubeadm version\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ndbinla@dbisbx-master01:~$ kubeadm version\nkubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;26&quot;, GitVersion:&quot;v1.26.0&quot;, GitCommit:&quot;b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-12-08T19:57:06Z&quot;, GoVersion:&quot;go1.19.4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux\/amd64&quot;}\ndbinla@dbisbx-master01:~$\n<\/pre><\/div>\n\n\n<p>Next, we&#8217;ll drain our first node, the control plane, to evict its workloads and mark it unschedulable. In other words, we&#8217;ll isolate it from the cluster. 
<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl drain &lt;ControlPlane&gt; --ignore-daemonsets\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl drain dbisbx-master01 --ignore-daemonsets\nnode\/dbisbx-master01 cordoned\nWARNING: ignoring DaemonSet-managed Pods: kube-system\/calico-node-2f8xk, kube-system\/kube-proxy-jsjjc\nevicting pod kube-system\/calico-kube-controllers-84c476996d-px2xc\nevicting pod kube-system\/coredns-6d4b75cb6d-9jg7k\nevicting pod kube-system\/coredns-6d4b75cb6d-2zp45\npod\/calico-kube-controllers-84c476996d-px2xc evicted\npod\/coredns-6d4b75cb6d-2zp45 evicted\npod\/coredns-6d4b75cb6d-9jg7k evicted\nnode\/dbisbx-master01 drained\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>Alright, now we&#8217;re going to plan the cluster upgrade. <br>The command we run will analyse our cluster, considering the current node version, configuration, and add-ons, to generate a detailed plan for upgrading the cluster safely to the chosen target version. <br>In our case, it will point out that our version of kubeadm is incorrect, since a cluster upgrade can only be done one minor version at a time.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo kubeadm upgrade plan v1.26.0\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo kubeadm upgrade plan v1.26.0\n&#091;upgrade\/config] Making sure the configuration is correct:\n&#091;upgrade\/config] Reading configuration from the cluster...\n&#091;upgrade\/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n&#091;upgrade\/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version &gt;= 1.25.0. 
Current version: v1.24.0\nTo see the stack trace of this error execute with --v=5 or higher<\/code><\/pre>\n\n\n\n<p>The upgrade plan command returns a FATAL error indicating that our version of kubeadm is only compatible with a control plane version &gt;= 1.25.0 while the version being used is v1.24.0. <br>But don&#8217;t worry, downgrading the kubeadm version is a common solution to this problem, and it&#8217;s a simple process that can be accomplished with just a few commands:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00 --allow-downgrades\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00 --allow-downgrades\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following packages will be DOWNGRADED:\n  kubeadm\n0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 14 not upgraded.\nNeed to get 9213 kB of archives.\nAfter this operation, 2974 kB disk space will be freed.\nGet:1 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubeadm amd64 1.25.0-00 &#091;9213 kB]\nFetched 9213 kB in 1s (7367 kB\/s)\ndpkg: warning: downgrading kubeadm from 1.26.0-00 to 1.25.0-00\n(Reading database ... 195447 files and directories currently installed.)\nPreparing to unpack ...\/kubeadm_1.25.0-00_amd64.deb ...\nUnpacking kubeadm (1.25.0-00) over (1.26.0-00) ...\nSetting up kubeadm (1.25.0-00) ...\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>Once we have successfully downgraded kubeadm, we must re-run the &#8220;upgrade plan&#8221; command to perform pre-checks and obtain the green light to begin the upgrade.<br>This step is crucial as it ensures that the cluster is stable and that all the necessary prerequisites for the upgrade have been met. 
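<\/p>\n\n\n\n<p>As a side note, the plan output is stable enough to script against; for instance, the chosen target version can be extracted for reuse in automation. A small sketch (the function name is my own invention; the line format matches the kubeadm output):<\/p>\n\n\n\n

```shell
# Sketch: pull the target version out of saved "kubeadm upgrade plan"
# output read on stdin. The helper name is a made-up example.
plan_target() {
  awk -F': ' '/Target version/ { print $2 }'
}

# Typical use (requires a cluster):
#   sudo kubeadm upgrade plan v1.25.0 | plan_target
```

\n\n\n\n<p>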
It also helps identify some potential issues before upgrading, saving us time and effort in the long run.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo kubeadm upgrade plan v1.25.0\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo kubeadm upgrade plan v1.25.0\n&#091;upgrade\/config] Making sure the configuration is correct:\n&#091;upgrade\/config] Reading configuration from the cluster...\n&#091;upgrade\/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n&#091;upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n&#091;preflight] Running pre-flight checks.\n&#091;upgrade] Running cluster health checks\n&#091;upgrade] Fetching available versions to upgrade to\n&#091;upgrade\/versions] Cluster version: v1.24.0\n&#091;upgrade\/versions] kubeadm version: v1.25.0\n&#091;upgrade\/versions] Target version: v1.25.0\n&#091;upgrade\/versions] Latest version in the v1.24 series: v1.25.0\n\nComponents that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':\nCOMPONENT   CURRENT       TARGET\nkubelet     3 x v1.24.0   v1.25.0\n\nUpgrade to the latest version in the v1.24 series:\n\nCOMPONENT                 CURRENT   TARGET\nkube-apiserver            v1.24.0   v1.25.0\nkube-controller-manager   v1.24.0   v1.25.0\nkube-scheduler            v1.24.0   v1.25.0\nkube-proxy                v1.24.0   v1.25.0\nCoreDNS                   v1.8.6    v1.9.3\netcd                      3.5.3-0   3.5.4-0\n\nYou can now apply the upgrade by executing the following command:\n\n\tkubeadm upgrade apply v1.25.0\n\n_____________________________________________________________________\n\n\nThe table below shows the current state of component configs as understood by this version of kubeadm.\nConfigs that have a \"yes\" mark in 
the \"MANUAL UPGRADE REQUIRED\" column require manual config upgrade or\nresetting to kubeadm defaults before a successful upgrade can be performed. The version to manually\nupgrade to is denoted in the \"PREFERRED VERSION\" column.\n\nAPI GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED\nkubeproxy.config.k8s.io   v1alpha1          v1alpha1            no\nkubelet.config.k8s.io     v1beta1           v1beta1             no\n_____________________________________________________________________\n\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>It&#8217;s always a relief when everything checks out during the pre-upgrade verification. <br>Now we can move forward with the actual upgrade by running the following command, which upgrades all the components of the control plane, including the Kubernetes API server, the etcd database, and the controller manager:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo kubeadm upgrade apply v1.25.0\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo kubeadm upgrade apply v1.25.0\n&#091;upgrade\/config] Making sure the configuration is correct:\n&#091;upgrade\/config] Reading configuration from the cluster...\n&#091;upgrade\/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n&#091;preflight] Running pre-flight checks.\n&#091;upgrade] Running cluster health checks\n&#091;upgrade\/version] You have chosen to change the cluster version to \"v1.25.0\"\n&#091;upgrade\/versions] Cluster version: v1.24.0\n&#091;upgrade\/versions] kubeadm version: v1.25.0\n&#091;upgrade] Are you sure you want to proceed? 
&#091;y\/N]: y\n&#091;upgrade\/prepull] Pulling images required for setting up a Kubernetes cluster\n&#091;upgrade\/prepull] This might take a minute or two, depending on the speed of your internet connection\n&#091;upgrade\/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'\n&#091;upgrade\/apply] Upgrading your Static Pod-hosted control plane to version \"v1.25.0\" (timeout: 5m0s)...\n&#091;upgrade\/etcd] Upgrading to TLS for etcd\n&#091;upgrade\/staticpods] Preparing for \"etcd\" upgrade\n&#091;upgrade\/staticpods] Renewing etcd-server certificate\n&#091;upgrade\/staticpods] Renewing etcd-peer certificate\n&#091;upgrade\/staticpods] Renewing etcd-healthcheck-client certificate\n&#091;upgrade\/staticpods] Moved new manifest to \"\/etc\/kubernetes\/manifests\/etcd.yaml\" and backed up old manifest to \"\/etc\/kubernetes\/tmp\/kubeadm-backup-manifests-2023-01-13-15-22-52\/etcd.yaml\"\n&#091;upgrade\/staticpods] Waiting for the kubelet to restart the component\n&#091;upgrade\/staticpods] This might take a minute or longer depending on the component\/version gap (timeout 5m0s)\nE0113 15:23:18.784912   45754 request.go:977] Unexpected error when reading response body: net\/http: request canceled (Client.Timeout or context cancellation while reading body)\n&#091;apiclient] Found 1 Pods for label selector component=etcd\n&#091;upgrade\/staticpods] Component \"etcd\" upgraded successfully!\n&#091;upgrade\/etcd] Waiting for etcd to become available\n&#091;upgrade\/staticpods] Writing new Static Pod manifests to \"\/etc\/kubernetes\/tmp\/kubeadm-upgraded-manifests3801252369\"\n&#091;upgrade\/staticpods] Preparing for \"kube-apiserver\" upgrade\n&#091;upgrade\/staticpods] Renewing apiserver certificate\n&#091;upgrade\/staticpods] Renewing apiserver-kubelet-client certificate\n&#091;upgrade\/staticpods] Renewing front-proxy-client certificate\n&#091;upgrade\/staticpods] Renewing apiserver-etcd-client 
certificate\n&#091;upgrade\/staticpods] Moved new manifest to \"\/etc\/kubernetes\/manifests\/kube-apiserver.yaml\" and backed up old manifest to \"\/etc\/kubernetes\/tmp\/kubeadm-backup-manifests-2023-01-13-15-22-52\/kube-apiserver.yaml\"\n&#091;upgrade\/staticpods] Waiting for the kubelet to restart the component\n&#091;upgrade\/staticpods] This might take a minute or longer depending on the component\/version gap (timeout 5m0s)\n&#091;apiclient] Found 1 Pods for label selector component=kube-apiserver\n&#091;upgrade\/staticpods] Component \"kube-apiserver\" upgraded successfully!\n&#091;upgrade\/staticpods] Preparing for \"kube-controller-manager\" upgrade\n&#091;upgrade\/staticpods] Renewing controller-manager.conf certificate\n&#091;upgrade\/staticpods] Moved new manifest to \"\/etc\/kubernetes\/manifests\/kube-controller-manager.yaml\" and backed up old manifest to \"\/etc\/kubernetes\/tmp\/kubeadm-backup-manifests-2023-01-13-15-22-52\/kube-controller-manager.yaml\"\n&#091;upgrade\/staticpods] Waiting for the kubelet to restart the component\n&#091;upgrade\/staticpods] This might take a minute or longer depending on the component\/version gap (timeout 5m0s)\n&#091;apiclient] Found 1 Pods for label selector component=kube-controller-manager\n&#091;upgrade\/staticpods] Component \"kube-controller-manager\" upgraded successfully!\n&#091;upgrade\/staticpods] Preparing for \"kube-scheduler\" upgrade\n&#091;upgrade\/staticpods] Renewing scheduler.conf certificate\n&#091;upgrade\/staticpods] Moved new manifest to \"\/etc\/kubernetes\/manifests\/kube-scheduler.yaml\" and backed up old manifest to \"\/etc\/kubernetes\/tmp\/kubeadm-backup-manifests-2023-01-13-15-22-52\/kube-scheduler.yaml\"\n&#091;upgrade\/staticpods] Waiting for the kubelet to restart the component\n&#091;upgrade\/staticpods] This might take a minute or longer depending on the component\/version gap (timeout 5m0s)\n&#091;apiclient] Found 1 Pods for label selector 
component=kube-scheduler\n&#091;upgrade\/staticpods] Component \"kube-scheduler\" upgraded successfully!\n&#091;upgrade\/postupgrade] Removing the old taint &amp;Taint{Key:node-role.kubernetes.io\/master,Value:,Effect:NoSchedule,TimeAdded:&lt;nil&gt;,} from all control plane Nodes. After this step only the &amp;Taint{Key:node-role.kubernetes.io\/control-plane,Value:,Effect:NoSchedule,TimeAdded:&lt;nil&gt;,} taint will be present on control plane Nodes.\n&#091;upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n&#091;kubelet] Creating a ConfigMap \"kubelet-config\" in namespace kube-system with the configuration for the kubelets in the cluster\n&#091;kubelet-start] Writing kubelet configuration to file \"\/var\/lib\/kubelet\/config.yaml\"\n&#091;bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes\n&#091;bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n&#091;bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n&#091;bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n&#091;addons] Applied essential addon: CoreDNS\n&#091;addons] Applied essential addon: kube-proxy\n\n&#091;upgrade\/successful] SUCCESS! Your cluster was upgraded to \"v1.25.0\". Enjoy!\n\n&#091;upgrade\/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>We must receive the &#8220;[upgrade\/successful]&#8221; confirmation message indicating that the upgrade process was completed successfully. 
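<\/p>\n\n\n\n<p>If the command output was captured to a file, that verification can even be scripted. A small sketch (the helper names and the log file are my own; only the &#8220;&#091;upgrade\/successful]&#8221; marker comes from the kubeadm output above):<\/p>\n\n\n\n

```shell
# Sketch: inspect a saved "kubeadm upgrade apply" log. Helper names and
# the log file are invented for illustration.
upgrade_succeeded() {
  # True when the log contains kubeadm's success marker.
  grep -q '\[upgrade/successful\] SUCCESS' "$1"
}

error_lines() {
  # Count klog-style error lines ("E" followed by a timestamp).
  grep -c '^E[0-9]' "$1" || true
}

# Typical use: sudo kubeadm upgrade apply v1.25.0 | tee upgrade.log
```

\n\n\n\n<p>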
Also, we should carefully look over the output log to check for any errors that may have occurred during the upgrade process.<\/p>\n\n\n\n<p>By checking for these indicators, we can be confident that our Kubernetes cluster runs on the right version and is stable and secure. It\u2019s also worth noting that keeping track of the upgrade process and reviewing the output log can help&nbsp;identify potential issues early and take corrective action as needed.<\/p>\n\n\n\n<p>Then we will upgrade kubelet and kubectl on the control plane node through the APT packages:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following held packages will be changed:\n  kubectl kubelet\nThe following packages will be upgraded:\n  kubectl kubelet\n2 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.\nNeed to get 29.0 MB of archives.\nAfter this operation, 2825 kB disk space will be freed.\nGet:1 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubectl amd64 1.25.0-00 &#091;9500 kB]\nGet:2 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubelet amd64 1.25.0-00 &#091;19.5 MB]\nFetched 29.0 MB in 5s (5792 kB\/s)\n(Reading database ... 
195447 files and directories currently installed.)\nPreparing to unpack ...\/kubectl_1.25.0-00_amd64.deb ...\nUnpacking kubectl (1.25.0-00) over (1.24.0-00) ...\nPreparing to unpack ...\/kubelet_1.25.0-00_amd64.deb ...\nUnpacking kubelet (1.25.0-00) over (1.24.0-00) ...\nSetting up kubectl (1.25.0-00) ...\nSetting up kubelet (1.25.0-00) ...\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>After downloading and upgrading the packages, all that remains is to restart the kubelet service and check the node status:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo systemctl daemon-reload\nsudo systemctl restart kubelet\nkubectl get nodes<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ sudo systemctl daemon-reload\ndbinla@dbisbx-master01:~$ sudo systemctl restart kubelet\ndbinla@dbisbx-master01:~$ kubectl get nodes\nNAME              STATUS                     ROLES           AGE     VERSION\ndbisbx-master01   Ready,SchedulingDisabled   control-plane   23h     v1.25.0\ndbisbx-worker01   Ready                      &lt;none&gt;          5h58m   v1.24.0\ndbisbx-worker02   Ready                      &lt;none&gt;          5h54m   v1.24.0<\/code><\/pre>\n\n\n\n<p>This should return a status <code>\"Ready,SchedulingDisabled\"<\/code>, as the node is still cordoned.<br>The next step, as you may have guessed, is to reintegrate the node into the cluster simply by performing an &#8220;Uncordon&#8221; of the control plane:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl uncordon dbisbx-master01\nnode\/dbisbx-master01 uncordoned<\/code><\/pre>\n\n\n\n<p>Now you should see the control plane in &#8220;Ready&#8221; status:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>kubectl get node<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl get nodes\nNAME              STATUS   ROLES           AGE     VERSION\ndbisbx-master01   Ready    control-plane   23h     
v1.25.0\ndbisbx-worker01   Ready    &lt;none&gt;          5h58m   v1.24.0\ndbisbx-worker02   Ready    &lt;none&gt;          5h54m   v1.24.0\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>If it shows a&nbsp;NotReady&nbsp;status, rerun the command after a minute or so. It should be&nbsp;in &#8220;Ready&#8221; status after a while.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Upgrade the Worker Nodes<br><\/h3>\n\n\n\n<p>Now that we&#8217;ve upgraded the control plane, the next step is to upgrade the worker nodes, following the same sequence:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Drain the node<\/li>\n\n\n\n<li>Upgrade kubeadm<\/li>\n\n\n\n<li>Upgrade the kubelet configuration<\/li>\n\n\n\n<li>Restart the kubelet<\/li>\n\n\n\n<li>Uncordon the node<\/li>\n<\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">Worker Node 1<\/h4>\n\n\n\n<p>Run the following on the&nbsp;<em>control plane node<\/em>&nbsp;to drain worker node 1:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl drain dbisbx-worker01 --ignore-daemonsets --force\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl drain dbisbx-worker01 --ignore-daemonsets --force\nnode\/dbisbx-worker01 cordoned\nWarning: ignoring DaemonSet-managed Pods: kube-system\/calico-node-tv9rj, kube-system\/kube-proxy-9889z\nevicting pod kube-system\/coredns-565d847f94-vfkxs\nevicting pod kube-system\/calico-kube-controllers-84c476996d-zgrp6\npod\/calico-kube-controllers-84c476996d-zgrp6 evicted\npod\/coredns-565d847f94-vfkxs evicted\nnode\/dbisbx-worker01 drained\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>You may get an error message that certain pods couldn&#8217;t be deleted, which is fine.<\/p>\n\n\n\n<p>Upgrade kubeadm on worker node 1:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt-get 
update &amp;&amp; \\\nsudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker01:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00\n&#091;sudo] password for dbinla:\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following held packages will be changed:\n  kubeadm\nThe following packages will be upgraded:\n  kubeadm\n1 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.\nNeed to get 9213 kB of archives.\nAfter this operation, 578 kB disk space will be freed.\nGet:1 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubeadm amd64 1.25.0-00 &#091;9213 kB]\nFetched 9213 kB in 2s (5799 kB\/s)\n(Reading database ... 195442 files and directories currently installed.)\nPreparing to unpack ...\/kubeadm_1.25.0-00_amd64.deb ...\nUnpacking kubeadm (1.25.0-00) over (1.24.0-00) ...\nSetting up kubeadm (1.25.0-00) ...\ndbinla@dbisbx-worker01:~$<\/code><\/pre>\n\n\n\n<p>Make sure it is upgraded correctly:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubeadm version\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker01:~$ kubeadm version\nkubeadm version: &amp;version.Info{Major:\"1\", Minor:\"25\", GitVersion:\"v1.25.0\", GitCommit:\"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2\", GitTreeState:\"clean\", BuildDate:\"2022-08-23T17:43:25Z\", GoVersion:\"go1.19\", Compiler:\"gc\", Platform:\"linux\/amd64\"}\ndbinla@dbisbx-worker01:~$<\/code><\/pre>\n\n\n\n<p>Upgrade the kubelet configuration on the worker node:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo kubeadm upgrade node\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker01:~$ sudo kubeadm upgrade node\n&#091;upgrade] Reading 
configuration from the cluster...\n&#091;upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n&#091;preflight] Running pre-flight checks\n&#091;preflight] Skipping prepull. Not a control plane node.\n&#091;upgrade] Skipping phase. Not a control plane node.\n&#091;kubelet-start] Writing kubelet configuration to file \"\/var\/lib\/kubelet\/config.yaml\"\n&#091;upgrade] The configuration for this node was successfully updated!\n&#091;upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.\ndbinla@dbisbx-worker01:~$<\/code><\/pre>\n\n\n\n<p>Mark the kubelet, kubeadm, and kubectl packages as &#8220;held&#8221; to prevent them from being automatically upgraded:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt-mark hold kubelet kubeadm kubectl\n<\/pre><\/div>\n\n\n<p>Restart kubelet:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker01:~$ sudo systemctl daemon-reload\ndbinla@dbisbx-worker01:~$ sudo systemctl restart kubelet<\/code><\/pre>\n\n\n\n<p>From the&nbsp;<em>control plane node<\/em>, uncordon worker node 1:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl uncordon dbisbx-worker01\n\nkubectl get nodes\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl uncordon dbisbx-worker01\nnode\/dbisbx-worker01 uncordoned\ndbinla@dbisbx-master01:~$ kubectl get nodes\nNAME              STATUS   ROLES           AGE     VERSION\ndbisbx-master01   Ready    control-plane   23h     v1.25.0\ndbisbx-worker01   Ready    &lt;none&gt;          6h13m   v1.25.0\ndbisbx-worker02   Ready    &lt;none&gt; 
         6h9m    v1.24.0\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Worker Node 2<\/h4>\n\n\n\n<p>Here we go again. We will repeat the same steps as for worker01, which involves isolating the node from the cluster and upgrading it safely.<br>From the&nbsp;<em>control plane node<\/em>, drain worker node 2:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl drain dbisbx-worker02 --ignore-daemonsets --force\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl drain dbisbx-worker02 --ignore-daemonsets --force\nnode\/dbisbx-worker02 cordoned\nWarning: ignoring DaemonSet-managed Pods: kube-system\/calico-node-4vkp5, kube-system\/kube-proxy-55nks\nevicting pod kube-system\/coredns-565d847f94-hmr7t\nevicting pod kube-system\/calico-kube-controllers-84c476996d-g4bqj\npod\/calico-kube-controllers-84c476996d-g4bqj evicted\npod\/coredns-565d847f94-hmr7t evicted\nnode\/dbisbx-worker02 drained<\/code><\/pre>\n\n\n\n<p>Of course, we make sure that the node is correctly set to &#8220;SchedulingDisabled&#8221;. 
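<\/p>\n\n\n\n<p>On larger clusters, scanning the table by eye gets tedious; a short filter over the <code>kubectl get nodes<\/code> output can list every cordoned node. A sketch (the function name is my own):<\/p>\n\n\n\n

```shell
# Sketch: read "kubectl get nodes" output on stdin and print the names
# of nodes whose STATUS column contains SchedulingDisabled.
cordoned_nodes() {
  awk 'NR > 1 && $2 ~ /SchedulingDisabled/ { print $1 }'
}

# Typical use (requires a cluster): kubectl get nodes | cordoned_nodes
```

\n\n\n\n<p>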
This is a critical step in the process of isolating the node from the cluster.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl get nodes\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl get nodes\nNAME              STATUS                     ROLES           AGE     VERSION\ndbisbx-master01   Ready                      control-plane   23h     v1.25.0\ndbisbx-worker01   Ready                      &lt;none&gt;          6h16m   v1.25.0\ndbisbx-worker02   Ready,SchedulingDisabled   &lt;none&gt;          6h12m   v1.24.0\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>In a new terminal window, log into your worker node 2.<\/p>\n\n\n\n<p>Now, we&#8217;re installing the kubeadm package at version 1.25.0-00, making sure to allow changes to held packages.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker02:~$ sudo apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00\n&#091;sudo] password for dbinla:\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following held packages will be changed:\n  kubeadm\nThe following packages will be upgraded:\n  kubeadm\n1 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.\nNeed to get 9213 kB of archives.\nAfter this operation, 578 kB disk space will be freed.\nGet:1 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubeadm amd64 1.25.0-00 &#091;9213 kB]\nFetched 9213 kB in 1s (11.9 MB\/s)\n(Reading database ... 
195446 files and directories currently installed.)\nPreparing to unpack ...\/kubeadm_1.25.0-00_amd64.deb ...\nUnpacking kubeadm (1.25.0-00) over (1.24.0-00) ...\nSetting up kubeadm (1.25.0-00) ...\ndbinla@dbisbx-worker02:~$<\/code><\/pre>\n\n\n\n<p>Once the package has been downloaded and installed, we check the version to ensure that the package upgrade was successful and that we are indeed running the desired version of kubeadm:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubeadm version\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker02:~$ kubeadm version\nkubeadm version: &amp;version.Info{Major:\"1\", Minor:\"25\", GitVersion:\"v1.25.0\", GitCommit:\"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2\", GitTreeState:\"clean\", BuildDate:\"2022-08-23T17:43:25Z\", GoVersion:\"go1.19\", Compiler:\"gc\", Platform:\"linux\/amd64\"}\ndbinla@dbisbx-worker02:~$<\/code><\/pre>\n\n\n\n<p>And now, all that&#8217;s left is to initiate the upgrade process.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo kubeadm upgrade node\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker02:~$ sudo kubeadm upgrade node\n&#091;upgrade] Reading configuration from the cluster...\n&#091;upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n&#091;preflight] Running pre-flight checks\n&#091;preflight] Skipping prepull. Not a control plane node.\n&#091;upgrade] Skipping phase. Not a control plane node.\n&#091;kubelet-start] Writing kubelet configuration to file \"\/var\/lib\/kubelet\/config.yaml\"\n&#091;upgrade] The configuration for this node was successfully updated!\n&#091;upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.\ndbinla@dbisbx-worker02:~$<\/code><\/pre>\n\n\n\n<p>So far, so good! 
Everything is progressing smoothly with the upgrade process.<br>We have confirmation that the configuration for this node was successfully updated, and we can go ahead and upgrade the kubelet package.<br><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker02:~$ sudo apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following held packages will be changed:\n  kubectl kubelet\nThe following packages will be upgraded:\n  kubectl kubelet\n2 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.\nNeed to get 29.0 MB of archives.\nAfter this operation, 2825 kB disk space will be freed.\nGet:1 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubectl amd64 1.25.0-00 &#091;9500 kB]\nGet:2 https:\/\/packages.cloud.google.com\/apt kubernetes-xenial\/main amd64 kubelet amd64 1.25.0-00 &#091;19.5 MB]\nFetched 29.0 MB in 1s (23.2 MB\/s)\n(Reading database ... 
195446 files and directories currently installed.)\nPreparing to unpack ...\/kubectl_1.25.0-00_amd64.deb ...\nUnpacking kubectl (1.25.0-00) over (1.24.0-00) ...\nPreparing to unpack ...\/kubelet_1.25.0-00_amd64.deb ...\nUnpacking kubelet (1.25.0-00) over (1.24.0-00) ...\nSetting up kubectl (1.25.0-00) ...\nSetting up kubelet (1.25.0-00) ...\ndbinla@dbisbx-worker02:~$<\/code><\/pre>\n\n\n\n<p>Mark the kubelet, kubeadm, and kubectl packages as &#8220;held&#8221; to prevent them from being automatically upgraded:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt-mark hold kubelet kubeadm kubectl\n<\/pre><\/div>\n\n\n<p>After completing the upgrade process, we will reload the systemd manager configuration files and restart the kubelet service.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-worker02:~$ sudo systemctl daemon-reload\ndbinla@dbisbx-worker02:~$ sudo systemctl restart kubelet<\/code><\/pre>\n\n\n\n<p>Now that we&#8217;ve completed the upgrade process and restarted the kubelet service, we&#8217;re ready to reintegrate our node into the cluster.<br>From the&nbsp;<em>control plane node<\/em>, <strong>uncordon<\/strong> worker node 2 to remove the SchedulingDisabled state from the node and allow Kubernetes to schedule new pods onto it once again.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl uncordon dbisbx-worker02\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl uncordon dbisbx-worker02\nnode\/dbisbx-worker02 uncordoned\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>Still on the&nbsp;<em>control plane node<\/em>, verify the cluster is upgraded and 
working:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nkubectl get nodes\n<\/pre><\/div>\n\n\n<pre class=\"wp-block-code\"><code>dbinla@dbisbx-master01:~$ kubectl get nodes\nNAME              STATUS   ROLES           AGE     VERSION\ndbisbx-master01   Ready    control-plane   23h     v1.25.0\ndbisbx-worker01   Ready    &lt;none&gt;          6h23m   v1.25.0\ndbisbx-worker02   Ready    &lt;none&gt;          6h19m   v1.25.0\ndbinla@dbisbx-master01:~$<\/code><\/pre>\n\n\n\n<p>If a node shows a&nbsp;NotReady&nbsp;status, don&#8217;t worry: it can take a minute or two for a node to transition into the &#8220;Ready&#8221; status. Just rerun the command after a minute or so.<br> <\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>We&#8217;ve completed the upgrade process for our cluster! As you may have noticed, the process itself is relatively simple. The kubeadm tooling is designed to guide us smoothly through the upgrade process (as well as a downgrade if needed).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Upgrade the Control Plane After installing a Kubernetes cluster, I suggest we continue upgrading the cluster using kubeadm installation and configuration tools. So that you know, upgrading a cluster is done from version to version +1. 
In the context of this blog, we assume that we have a minimal version Kubernetes cluster, with one Master [&hellip;]<\/p>\n","protected":false},"author":40}