{"id":24079,"date":"2023-03-30T14:29:06","date_gmt":"2023-03-30T12:29:06","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=24079"},"modified":"2023-03-30T14:29:08","modified_gmt":"2023-03-30T12:29:08","slug":"kubernetes-extension-of-disk-in-rook-ceph","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/","title":{"rendered":"Kubernetes: Extension of disk in Rook Ceph"},"content":{"rendered":"\n<p>In my <a href=\"https:\/\/www.dbi-services.com\/blog\/introduction-to-rook-ceph-for-kubernetes\/\" target=\"_blank\" rel=\"noreferrer noopener\">previous blog<\/a>, I introduced the storage solution Rook Ceph for Kubernetes. We saw that in our architecture we had 3 workers dedicated to storage using the Ceph file system. Each worker had 2 disks of 100 GB, and we saw that from the storage point of view one OSD equals one disk. With these 6 disks we had a Ceph cluster with a total capacity of 600 GB. This disk size was fine for our initial tests, but we wanted to extend each disk to 1.5 TB to get closer to a production-like design.<\/p>\n\n\n\n<p>The pools of our Ceph cluster use the default configuration of 3 replicas for our placement groups (pgs) and tolerate the loss of 1 replica: the cluster will still operate with only 2 replicas. Knowing this, we can carry out the disk extension in a controlled way by working on one worker at a time. We will both keep our Ceph cluster available to users and avoid the risk of data corruption. 
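<\/p>\n\n\n\n<p>Before starting, you can double-check these replication settings from the toolbox pod (a quick sanity check, using the same namespace and toolbox deployment as in the rest of this post):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd pool ls detail\n<\/pre><\/div>\n\n\n<p>Each pool should report <strong>size 3 min_size 2<\/strong>, meaning the cluster keeps serving I\/O with one replica offline, but not with two. 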
Data corruption (or even loss) is what could happen if we performed this disk extension abruptly, without properly preparing our cluster for the operation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Preparing the Ceph cluster before the disk extension<\/h2>\n\n\n\n<p>First, let&#8217;s have a look at the status of our Ceph cluster:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [4,10,18]; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph status\n  cluster:\n    id:     eb11b571-65e7-480a-b15a-e3a200946d3a\n    health: HEALTH_OK\n\n  services:\n    mon: 3 daemons, quorum a,b,c (age 5d)\n    mgr: a(active, since 13d), standbys: b\n    mds: 1\/1 daemons up, 1 hot standby\n    osd: 6 osds: 6 up (since 5d), 6 in (since 5d)\n    rgw: 1 daemon active (1 hosts, 1 zones)\n\n  data:\n    volumes: 1\/1 healthy\n    pools:   12 pools, 337 pgs\n    objects: 87.92k objects, 4.9 GiB\n    usage:   19 GiB used, 481 GiB \/ 500 GiB avail\n    pgs:     337 active+clean\n\n  io:\n    client:   1.2 KiB\/s rd, 2 op\/s rd, 0 op\/s wr\n<\/pre><\/div>\n\n\n<p>Then let&#8217;s look at the configuration of the OSDs in this cluster:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [4,5,6]; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd df tree\nID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME\n-1         0.58612         -  600 GiB  122 GiB  117 GiB  153 MiB  5.1 GiB  478 GiB  20.35  1.00    -          root default\n-5         0.19537         -  200 GiB   40 GiB   39 GiB   57 MiB  1.4 GiB  160 GiB  20.22  0.99    -              host worker1\n 0    hdd  0.09769   1.00000  100 GiB   24 GiB   23 GiB  7.0 MiB  738 MiB   76 GiB  23.72  1.17  169      up          osd.0\n 4    hdd  0.09769   1.00000  100 GiB   
17 GiB   16 GiB   50 MiB  737 MiB   83 GiB  16.73  0.82  168      up          osd.3\n-3         0.19537         -  200 GiB   41 GiB   39 GiB   35 MiB  1.8 GiB  159 GiB  20.37  1.00    -              host worker2\n 1    hdd  0.09769   1.00000  100 GiB   18 GiB   17 GiB   23 MiB  836 MiB   82 GiB  17.74  0.87  189      up          osd.1\n 3    hdd  0.09769   1.00000  100 GiB   23 GiB   22 GiB   12 MiB  967 MiB   77 GiB  23.00  1.13  148      up          osd.4\n-7         0.19537         -  200 GiB   41 GiB   39 GiB   61 MiB  1.9 GiB  159 GiB  20.44  1.00    -              host worker3\n 6    hdd  0.09769   1.00000  100 GiB   22 GiB   21 GiB   29 MiB  848 MiB   78 GiB  21.78  1.07  163      up          osd.2\n 7    hdd  0.09769   1.00000  100 GiB   19 GiB   18 GiB   33 MiB  1.0 GiB   81 GiB  19.09  0.94  174      up          osd.5\n                       TOTAL  600 GiB  122 GiB  117 GiB  153 MiB  5.1 GiB  478 GiB  20.35\nMIN\/MAX VAR: 0.82\/1.17  STDDEV: 2.64\n<\/pre><\/div>\n\n\n<p>We will detail the disk extension procedure on worker1. The output above shows worker1 disks with the labels osd.0 and osd.3.<\/p>\n\n\n\n<p>The first thing to do is to stop the Ceph operator and delete the 2 OSD deployments related to worker1:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph scale deployment rook-ceph-operator --replicas=0\n\n&#x5B;benoit@master ~]$ kubectl delete deployment.apps\/rook-ceph-osd-0 deployment.apps\/rook-ceph-osd-3 -n rookceph\ndeployment.apps &quot;rook-ceph-osd-0&quot; deleted\ndeployment.apps &quot;rook-ceph-osd-3&quot; deleted\n<\/pre><\/div>\n\n\n<p>As the operator is stopped, these deployments will not be automatically re-created. 
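<\/p>\n\n\n\n<p>Before touching the OSDs, it is worth confirming that the operator really has no running replicas (a quick check, same namespace as above):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph get deployment rook-ceph-operator\n<\/pre><\/div>\n\n\n<p>The deployment should show 0\/0 ready replicas. 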
We can then continue by moving out osd.0 and osd.3 from this Ceph cluster:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd out 0\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd out 3\n\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd crush remove osd.0\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd crush remove osd.3\n\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph auth del osd.0\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph auth del osd.3\n\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd down osd.0\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd down osd.3\n\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd rm osd.0\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd rm osd.3\n<\/pre><\/div>\n\n\n<p>All references to these 2 OSDs have now been removed from this Ceph cluster. 
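<\/p>\n\n\n\n<p>To follow the rebalancing continuously, one option is to wrap the status command in the Linux <strong>watch<\/strong> command (the 10-second refresh below is just a suggestion):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ watch -n 10 &quot;kubectl -n rookceph exec deploy\/rook-ceph-tools -- ceph -s&quot;\n<\/pre><\/div>\n\n\n<p>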
We then wait for the cluster to recover from this removal and monitor its status:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [4,11,19,20,21,22,23,48]; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph -s\n  cluster:\n    id:     eb11b571-65e7-480a-b15a-e3a200946d3a\n    health: HEALTH_WARN\n            Degraded data redundancy: 55832\/263757 objects degraded (21.168%), 88 pgs degraded\n\n  services:\n    mon: 3 daemons, quorum a,b,c (age 5d)\n    mgr: a(active, since 13d), standbys: b\n    mds: 1\/1 daemons up, 1 hot standby\n    osd: 4 osds: 4 up (since 32s), 4 in (since 15m); 104 remapped pgs\n    rgw: 1 daemon active (1 hosts, 1 zones)\n\n  data:\n    volumes: 1\/1 healthy\n    pools:   12 pools, 337 pgs\n    objects: 87.92k objects, 4.9 GiB\n    usage:   15 GiB used, 385 GiB \/ 400 GiB avail\n    pgs:     55832\/263757 objects degraded (21.168%)\n             32087\/263757 objects misplaced (12.165%)\n             145 active+undersized\n             88  active+undersized+degraded\n             82  active+clean+remapped\n             22  active+clean\n\n  io:\n    client:   1023 B\/s rd, 1 op\/s rd, 0 op\/s wr\n...\n...\n...\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph -s\n  cluster:\n    id:     eb11b571-65e7-480a-b15a-e3a200946d3a\n    health: HEALTH_WARN\n\n  services:\n    mon: 3 daemons, quorum a,b,c (age 5d)\n    mgr: a(active, since 13d), standbys: b\n    mds: 1\/1 daemons up, 1 hot standby\n    osd: 6 osds: 6 up (since 5d), 4 in (since 102s)\n    rgw: 1 daemon active (1 hosts, 1 zones)\n\n  data:\n    volumes: 1\/1 healthy\n    pools:   12 pools, 337 pgs\n    objects: 87.92k objects, 4.9 GiB\n    usage:   19 GiB used, 481 GiB \/ 500 GiB avail\n    pgs:     337 active+clean\n\n  io:\n    client:   1.2 KiB\/s rd, 2 op\/s rd, 0 op\/s wr\n<\/pre><\/div>\n\n\n<p>You can use the command 
<strong>ceph<\/strong> with <strong>-s<\/strong> (same as <strong>status<\/strong>) to follow this redistribution (combine it with the Linux <strong>watch<\/strong> command to follow these changes live). Notice that only 4 OSDs are now &#8220;in&#8221;, because we removed 2 of them. The cluster is ready when all pgs are in the state <strong>active+clean<\/strong>.<\/p>\n\n\n\n<p>Let&#8217;s check the status of the OSDs:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [4]; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph osd df tree\nID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME\n-1         0.39075         -  400 GiB   15 GiB   12 GiB  154 MiB  2.3 GiB  385 GiB  3.65  1.00    -          root default\n-3               0         -      0 B      0 B      0 B      0 B      0 B      0 B     0     0    -              host worker1\n-7         0.19537         -  200 GiB  8.0 GiB  6.2 GiB   83 MiB  1.6 GiB  192 GiB  3.98  1.09    -              host worker2\n 1    hdd  0.09769   1.00000  100 GiB  3.7 GiB  3.1 GiB   57 MiB  563 MiB   96 GiB  3.68  1.01  200      up          osd.1\n 5    hdd  0.09769   1.00000  100 GiB  4.3 GiB  3.2 GiB   26 MiB  1.1 GiB   96 GiB  4.27  1.17  191      up          osd.5\n-5         0.19537         -  200 GiB  6.6 GiB  5.9 GiB   71 MiB  667 MiB  193 GiB  3.32  0.91    -              host worker3\n 2    hdd  0.09769   1.00000  100 GiB  3.3 GiB  2.9 GiB   51 MiB  367 MiB   97 GiB  3.35  0.92  208      up          osd.2\n 4    hdd  0.09769   1.00000  100 GiB  3.3 GiB  3.0 GiB   20 MiB  299 MiB   97 GiB  3.30  0.90  179      up          osd.4\n                       TOTAL  400 GiB   15 GiB   12 GiB  154 MiB  2.3 GiB  385 GiB  3.65\nMIN\/MAX VAR: 0.90\/1.17  STDDEV: 0.39\n<\/pre><\/div>\n\n\n<p>Both disk references have been properly 
removed from this Ceph cluster. We can now safely proceed with the extension of the disks.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Extension of disks<\/h2>\n\n\n\n<p>In our environment, the worker nodes used for storage are VMware virtual machines, so extending their disks is just a matter of changing a setting on each VM. Once done, we can wipe each disk (zap its partition table and zero its first blocks) as follows:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;root@worker1 ~]# lsblk | egrep &quot;sdb|sde&quot;\n\n&#x5B;root@worker1 ~]# sgdisk --zap-all \/dev\/sdb\nWarning: Partition table header claims that the size of partition table\nentries is 0 bytes, but this program  supports only 128-byte entries.\nAdjusting accordingly, but partition table may be garbage.\nWarning: Partition table header claims that the size of partition table\nentries is 0 bytes, but this program  supports only 128-byte entries.\nAdjusting accordingly, but partition table may be garbage.\nCreating new GPT entries.\nGPT data structures destroyed! 
You may now partition the disk using fdisk or\nother utilities.\n\n&#x5B;root@worker1 ~]# dd if=\/dev\/zero of=&quot;\/dev\/sdb&quot; bs=1M count=100 oflag=direct,dsync\n100+0 records in\n100+0 records out\n\n&#x5B;root@worker1 ~]# partprobe \/dev\/sdb\n\n&#x5B;root@worker1 ~]# sgdisk --zap-all \/dev\/sde\n\n&#x5B;root@worker1 ~]# dd if=\/dev\/zero of=&quot;\/dev\/sde&quot; bs=1M count=100 oflag=direct,dsync\n\n&#x5B;root@worker1 ~]# partprobe \/dev\/sde\n<\/pre><\/div>\n\n\n<p>We can now bring back those disks into our Ceph cluster.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Bringing back those extended disks<\/h2>\n\n\n\n<p>This step shows all the power of using Rook for operating Ceph:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n&#x5B;benoit@master ~]$ kubectl -n rookceph scale deployment rook-ceph-operator --replicas=1\n\n&#x5B;benoit@master ~]$ kubectl -n rookceph exec -it deploy\/rook-ceph-tools -- ceph -w\n...\n...\n...\n2023-02-16T14:16:06.419312+0000 mon.a &#x5B;WRN] Health check update: Degraded data redundancy: 3960\/263763 objects degraded (1.501%), 9 pgs degraded, 9 pgs undersized (PG_DEGRADED)\n2023-02-16T14:16:12.426883+0000 mon.a &#x5B;WRN] Health check update: Degraded data redundancy: 3078\/263763 objects degraded (1.167%), 9 pgs degraded, 9 pgs undersized (PG_DEGRADED)\n2023-02-16T14:16:18.260414+0000 mon.a &#x5B;WRN] Health check update: Degraded data redundancy: 2223\/263763 objects degraded (0.843%), 8 pgs degraded, 8 pgs undersized (PG_DEGRADED)\n2023-02-16T14:16:26.427226+0000 mon.a &#x5B;WRN] Health check update: Degraded data redundancy: 1459\/263763 objects degraded (0.553%), 6 pgs degraded, 6 pgs undersized (PG_DEGRADED)\n2023-02-16T14:16:31.429443+0000 mon.a &#x5B;WRN] Health check update: Degraded data redundancy: 848\/263763 objects degraded (0.322%), 4 pgs degraded, 4 pgs undersized (PG_DEGRADED)\n2023-02-16T14:16:36.432270+0000 mon.a &#x5B;WRN] Health check 
update: Degraded data redundancy: 260\/263763 objects degraded (0.099%), 2 pgs degraded, 2 pgs undersized (PG_DEGRADED)\n2023-02-16T14:16:38.557667+0000 mon.a &#x5B;INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 260\/263763 objects degraded (0.099%), 2 pgs degraded, 2 pgs undersized)\n2023-02-16T14:16:38.557705+0000 mon.a &#x5B;INF] Cluster is now healthy\n<\/pre><\/div>\n\n\n<p>You can use the command <strong>ceph<\/strong> with <strong>-w<\/strong> to watch the health of this Ceph cluster live. And just like that, by scaling the operator back up, the Ceph cluster is up and running again after a few minutes:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [4,10,18,26,27,28]; title: ; notranslate\" title=\"\">\n&#x5B;rook@rook-ceph-tools-9967d64b6-n6rnk \/]$ ceph status\n  cluster:\n    id:     eb11b571-65e7-480a-b15a-e3a200946d3a\n    health: HEALTH_OK\n\n  services:\n    mon: 3 daemons, quorum a,b,c (age 3h)\n    mgr: b(active, since 3h), standbys: a\n    mds: 1\/1 daemons up, 1 hot standby\n    osd: 6 osds: 6 up (since 3m), 6 in (since 6m)\n    rgw: 1 daemon active (1 hosts, 1 zones)\n\n  data:\n    volumes: 1\/1 healthy\n    pools:   12 pools, 337 pgs\n    objects: 87.92k objects, 4.9 GiB\n    usage:   19 GiB used, 481 GiB \/ 500 GiB avail\n    pgs:     337 active+clean\n\n  io:\n    client:   852 B\/s rd, 1 op\/s rd, 0 op\/s wr\n\n&#x5B;rook@rook-ceph-tools-9967d64b6-n6rnk \/]$ ceph osd df tree\nID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME\n-1         3.32034         -  3.3 TiB   18 GiB   16 GiB  180 MiB  2.7 GiB  3.3 TiB  0.54  1.00    -          root default\n-3         2.92960         -  2.9 TiB  6.0 GiB  5.2 GiB      0 B  805 MiB  2.9 TiB  0.20  0.37    -              host worker1\n 0    hdd  1.46480   1.00000  1.5 TiB  2.1 GiB  1.9 GiB      0 B  296 MiB  1.5 TiB  0.14  0.26  160      up          osd.0\n 3    hdd  
1.46480   1.00000  1.5 TiB  3.8 GiB  3.3 GiB      0 B  509 MiB  1.5 TiB  0.25  0.47  177      up          osd.3\n-7         0.19537         -  200 GiB  6.5 GiB  5.4 GiB  109 MiB  1.0 GiB  194 GiB  3.24  5.99    -              host worker2\n 1    hdd  0.09769   1.00000  100 GiB  3.6 GiB  2.9 GiB   57 MiB  639 MiB   96 GiB  3.60  6.65  179      up          osd.1\n 5    hdd  0.09769   1.00000  100 GiB  2.9 GiB  2.4 GiB   52 MiB  414 MiB   97 GiB  2.89  5.33  161      up          osd.5\n-5         0.19537         -  200 GiB  6.0 GiB  5.0 GiB   71 MiB  924 MiB  194 GiB  2.98  5.51    -              host worker3\n 2    hdd  0.09769   1.00000  100 GiB  2.9 GiB  2.4 GiB   51 MiB  478 MiB   97 GiB  2.90  5.36  171      up          osd.2\n 4    hdd  0.09769   1.00000  100 GiB  3.1 GiB  2.6 GiB   20 MiB  446 MiB   97 GiB  3.06  5.65  163      up          osd.4\n                       TOTAL  3.3 TiB   18 GiB   16 GiB  180 MiB  2.7 GiB  3.3 TiB  0.54\nMIN\/MAX VAR: 0.26\/6.65  STDDEV: 2.12\n<\/pre><\/div>\n\n\n<p>We can see the new size of both disks! All the steps we did to remove those disks have been automatically reverted. Brilliant!<\/p>\n\n\n\n<p>We then just had to repeat the same procedure on the 2 other workers used for storage. We now have a Ceph cluster with a capacity of 9 TB!<\/p>\n","protected":false,"excerpt":{"rendered":"<p>In my previous blog, I&#8217;ve introduced the storage solution Rook Ceph for Kubernetes. We saw that in our architecture we had 3 workers dedicated for storage using Ceph file system. Each worker had 2 disks of 100 GB and we saw that from the storage point of view one OSD equals one disk. 
With these [&hellip;]<\/p>\n","protected":false},"author":109,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1320,1522],"tags":[515,2891,2667,2634,2890,35],"type_dbi":[],"class_list":["post-24079","post","type-post","status-publish","format-standard","hentry","category-devops","category-kubernetes","tag-administration","tag-ceph","tag-devops-2","tag-kubernetes-2","tag-rook","tag-storage"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.5) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Kubernetes: Extension of disk in Rook Ceph - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Kubernetes: Extension of disk in Rook Ceph\" \/>\n<meta property=\"og:description\" content=\"In my previous blog, I&#8217;ve introduced the storage solution Rook Ceph for Kubernetes. We saw that in our architecture we had 3 workers dedicated for storage using Ceph file system. Each worker had 2 disks of 100 GB and we saw that from the storage point of view one OSD equals one disk. 
With these [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-30T12:29:06+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-30T12:29:08+00:00\" \/>\n<meta name=\"author\" content=\"DevOps\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"DevOps\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/\"},\"author\":{\"name\":\"DevOps\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/4cd1b5f8a3de93f05a16ab8d7d2b7735\"},\"headline\":\"Kubernetes: Extension of disk in Rook 
Ceph\",\"datePublished\":\"2023-03-30T12:29:06+00:00\",\"dateModified\":\"2023-03-30T12:29:08+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/\"},\"wordCount\":549,\"commentCount\":0,\"keywords\":[\"administration\",\"ceph\",\"devops\",\"kubernetes\",\"rook\",\"Storage\"],\"articleSection\":[\"DevOps\",\"Kubernetes\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/\",\"name\":\"Kubernetes: Extension of disk in Rook Ceph - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\"},\"datePublished\":\"2023-03-30T12:29:06+00:00\",\"dateModified\":\"2023-03-30T12:29:08+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/4cd1b5f8a3de93f05a16ab8d7d2b7735\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/kubernetes-extension-of-disk-in-rook-ceph\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Kubernetes: Extension of disk in Rook 
Ceph\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/4cd1b5f8a3de93f05a16ab8d7d2b7735\",\"name\":\"DevOps\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g\",\"caption\":\"DevOps\"},\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/author\\\/devops\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Kubernetes: Extension of disk in Rook Ceph - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/","og_locale":"en_US","og_type":"article","og_title":"Kubernetes: Extension of disk in Rook Ceph","og_description":"In my previous blog, I&#8217;ve introduced the storage solution Rook Ceph for Kubernetes. We saw that in our architecture we had 3 workers dedicated for storage using Ceph file system. 
Each worker had 2 disks of 100 GB and we saw that from the storage point of view one OSD equals one disk. With these [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/","og_site_name":"dbi Blog","article_published_time":"2023-03-30T12:29:06+00:00","article_modified_time":"2023-03-30T12:29:08+00:00","author":"DevOps","twitter_card":"summary_large_image","twitter_misc":{"Written by":"DevOps","Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/"},"author":{"name":"DevOps","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735"},"headline":"Kubernetes: Extension of disk in Rook Ceph","datePublished":"2023-03-30T12:29:06+00:00","dateModified":"2023-03-30T12:29:08+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/"},"wordCount":549,"commentCount":0,"keywords":["administration","ceph","devops","kubernetes","rook","Storage"],"articleSection":["DevOps","Kubernetes"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/","url":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/","name":"Kubernetes: Extension of disk in Rook Ceph - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2023-03-30T12:29:06+00:00","dateModified":"2023-03-30T12:29:08+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-extension-of-disk-in-rook-ceph\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Kubernetes: Extension of disk in Rook Ceph"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi 
Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735","name":"DevOps","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g","caption":"DevOps"},"url":"https:\/\/www.dbi-services.com\/blog\/author\/devops\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/24079","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/109"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=24079"}],"version-history":[{"count":39,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/24079\/revisions"}],"predecessor-version":[{"id":24200,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/24079\/revisions\/24200"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=24079"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=24079"},{"taxonomy":"po
st_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=24079"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=24079"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}