DISCLAIMER: I know other solutions exist to do this
Pre-requisites:
– a virtual machine (or not) with CentOS 7 installed
– a free disk or partition
I use a VirtualBox machine, to which I added a second 10 GiB hard disk (we will only carve a 5 GiB partition out of it)
We list the disks and partitions to check that our new hard disk is visible.
[root@deploy ~]$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda           8:0    0  20G  0 disk
├─sda1        8:1    0   1G  0 part /boot
└─sda2        8:2    0  19G  0 part
  ├─cl-root 253:0    0  21G  0 lvm  /
  └─cl-swap 253:1    0   2G  0 lvm  [SWAP]
sdb           8:16   0  10G  0 disk
Good, we can continue.
Let’s partition the disk using fdisk
[root@deploy ~]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x76a98fa2.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Now we need to inform the kernel that the partition table has changed. To do that, we either reboot the server or run partprobe
[root@deploy ~]$ partprobe /dev/sdb
[root@deploy ~]$
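As an optional sanity check, we can list the new disk again to confirm the kernel now sees /dev/sdb1 (output omitted, it should show the 5 GiB partition):

[root@deploy ~]$ lsblk /dev/sdb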
We create a physical volume
[root@deploy ~]$ pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.
[root@deploy ~]$ pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sda2  cl lvm2 a--  19.00g        0
  /dev/sdb1     lvm2 ---   5.00g    5.00g
  /dev/sdc2  cl lvm2 a--   5.00g 1020.00m
We create a volume group
[root@deploy ~]$ vgcreate vg_deploy /dev/sdb1
  Volume group "vg_deploy" successfully created
We check that the volume group was created properly
[root@deploy ~]$ vgdisplay vg_deploy
  --- Volume group ---
  VG Name               vg_deploy
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / 5.00 GiB
  VG UUID               5ZhlvC-lpor-Ti8x-mS9P-bnxW-Gdtw-Gynocl
Here, I set the size of the logical volume in physical extents (PE). One PE represents 4.00 MiB (the PE Size reported by vgdisplay above).
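So 1000 extents give 1000 × 4 MiB = 4000 MiB ≈ 3.91 GiB, which is the LV Size we will see in lvdisplay below. A quick sanity check of that arithmetic (a throwaway one-liner, assuming the bc package is installed):

[root@deploy ~]$ echo "scale=5; 1000 * 4 / 1024" | bc
3.90625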
We create a logical volume on our volume group
[root@deploy ~]$ lvcreate -l 1000 -n lv_deploy vg_deploy
  Logical volume "lv_deploy" created.
Let’s check what our new logical volume “lv_deploy” looks like
[root@deploy ~]$ lvdisplay /dev/vg_deploy/lv_deploy
  --- Logical volume ---
  LV Path                /dev/vg_deploy/lv_deploy
  LV Name                lv_deploy
  VG Name                vg_deploy
  LV UUID                2vxcDv-AHfB-7c2x-1PM8-nbn3-38M5-c1QoNS
  LV Write Access        read/write
  LV Creation host, time deploy.example.com, 2017-12-05 08:15:59 -0500
  LV Status              available
  # open                 0
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
Let’s create our file system on the new logical volume
[root@deploy ~]$ mkfs.xfs /dev/vg_deploy/lv_deploy
meta-data=/dev/vg_deploy/lv_deploy isize=512    agcount=4, agsize=256000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1024000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
We now create a mount point, the directory “/mysqldata” for example
[root@deploy ~]$ mkdir /mysqldata
We add an entry for our new logical volume to /etc/fstab
[root@deploy ~]$ echo "/dev/mapper/vg_deploy-lv_deploy /mysqldata xfs defaults 0 0" >> /etc/fstab
We mount it
[root@deploy ~]$ mount -a
We check that the filesystem is mounted properly
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.9G   33M  3.9G   1% /mysqldata
We add some files to the /mysqldata directory (a for loop will help us)
[root@deploy mysqldata]$ for i in 1 2 3 4 5; do dd if=/dev/zero of=/mysqldata/file0$i bs=1024 count=10; done
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000282978 s, 36.2 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000202232 s, 50.6 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000255617 s, 40.1 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000195752 s, 52.3 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000183672 s, 55.8 MB/s
[root@deploy mysqldata]$ ls -l
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05
NOW comes the interesting part: we are going to reduce our /mysqldata filesystem
But first, let’s make a backup of our current /mysqldata filesystem. We need the xfsdump package:
[root@deploy mysqldata]$ yum -y install xfsdump
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Bad news! We cannot shrink an XFS filesystem directly (growing is the only supported direction; see the sketch after this list), so we need to:
– back up our filesystem
– unmount the filesystem and delete the logical volume
– recreate a smaller logical volume and build a new XFS filesystem on it
– restore our data
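For contrast, a grow needs none of this and can even happen while the filesystem is mounted. A hypothetical grow of our volume (shown only to illustrate the asymmetry, we do not run it here) would be just:

[root@deploy ~]$ lvextend -l +100 /dev/vg_deploy/lv_deploy
[root@deploy ~]$ xfs_growfs /mysqldata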
Back up the filesystem
[root@deploy mysqldata]$ xfsdump -f /tmp/mysqldata.dump /mysqldata
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> test
session label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of deploy.example.com:/mysqldata
xfsdump: dump date: Tue Dec  5 08:36:20 2017
xfsdump: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsdump: session label: "test"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 83840 bytes

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> test
media label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 75656 bytes
xfsdump: dump size (non-dir files) : 51360 bytes
xfsdump: dump complete: 5 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /tmp/mysqldata.dump OK (success)
xfsdump: Dump Status: SUCCESS
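Note: the two interactive label dialogs above can be avoided by passing the session and media labels on the command line, with xfsdump’s -L and -M options:

[root@deploy mysqldata]$ xfsdump -L test -M test -f /tmp/mysqldata.dump /mysqldata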
Then, we unmount the filesystem and delete the logical volume
[root@deploy ~]$ umount /mysqldata/
[root@deploy ~]$ df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        21G  8.7G   13G  42% /
devtmpfs            devtmpfs  910M     0  910M   0% /dev
tmpfs               tmpfs     920M     0  920M   0% /dev/shm
tmpfs               tmpfs     920M  8.4M  912M   1% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  227M  788M  23% /boot
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2          iso9660   4.3G  4.3G     0 100% /media/iso
[root@deploy ~]$ lvremove /dev/vg_deploy/lv_deploy
Do you really want to remove active logical volume vg_deploy/lv_deploy? [y/n]: y
  Logical volume "lv_deploy" successfully removed
We recreate the logical volume with a smaller size (800 PE instead of 1000, i.e. 3200 MiB instead of 4000 MiB)
[root@deploy ~]$ lvcreate -l 800 -n lv_deploy vg_deploy
WARNING: xfs signature detected on /dev/vg_deploy/lv_deploy at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg_deploy/lv_deploy.
  Logical volume "lv_deploy" created.
We build the XFS filesystem
[root@deploy ~]$ mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
meta-data=/dev/mapper/vg_deploy-lv_deploy isize=512    agcount=4, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=819200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
We remount the filesystem
[root@deploy ~]$ mount -a
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.2G   33M  3.1G   2% /mysqldata
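We can also double-check the new geometry with xfs_info (output omitted): the data blocks count should now read 819200 instead of the 1024000 of the first mkfs.xfs run.

[root@deploy ~]$ xfs_info /mysqldata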
We list the contents of the /mysqldata directory
[root@deploy ~]$ ls -l /mysqldata
total 0
Let’s restore our data
[root@deploy ~]$ xfsrestore -f /tmp/mysqldata.dump /mysqldata
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: deploy.example.com
xfsrestore: mount point: /mysqldata
xfsrestore: volume: /dev/mapper/vg_deploy-lv_deploy
xfsrestore: session time: Tue Dec  5 08:36:20 2017
xfsrestore: level: 0
xfsrestore: session label: "test"
xfsrestore: media label: "test"
xfsrestore: file system id: 84832e04-e6b8-473a-beb4-f4d59ab9e73c
xfsrestore: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsrestore: media id: 8fda43c1-c7de-4331-b930-ebd88199d0e7
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 5 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /tmp/mysqldata.dump OK (success)
xfsrestore: Restore Status: SUCCESS
Our data are back
[root@deploy ~]$ ls -l /mysqldata/
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05
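With real data, a belt-and-braces check is to checksum the files before the dump and verify them after the restore. A minimal sketch (the /tmp/mysqldata.md5 path is just an example):

[root@deploy ~]$ md5sum /mysqldata/file0* > /tmp/mysqldata.md5    # run this before the xfsdump
[root@deploy ~]$ md5sum -c /tmp/mysqldata.md5                     # run this after the xfsrestore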
Hope this helps 🙂