In the last post we looked at the very basics of bcachefs, a new file system that was added to the Linux kernel starting with version 6.7. While we’ve already seen how easy it is to create a new file system on a single device, encrypt and/or compress it, and that checksumming of metadata and user data is enabled by default, there is much more you can do with bcachefs. In this post we’ll look at how to work with a file system that spans multiple devices, which is quite common in today’s infrastructures.
When we looked at the devices available to the system in the last post, it looked like this:
tumbleweed:~ $ lsblk | grep -w "4G"
└─vda3 254:3 0 1.4G 0 part [SWAP]
vdb 254:16 0 4G 0 disk
vdc 254:32 0 4G 0 disk
vdd 254:48 0 4G 0 disk
vde 254:64 0 4G 0 disk
vdf 254:80 0 4G 0 disk
vdg 254:96 0 4G 0 disk
This means we have six unused block devices to play with. Let’s start again with the simplest case: one device, one file system:
tumbleweed:~ $ bcachefs format --force /dev/vdb
tumbleweed:~ $ mount /dev/vdb /mnt/dummy/
tumbleweed:~ $ df -h | grep dummy
/dev/vdb 3.7G 2.0M 3.6G 1% /mnt/dummy
Assuming we’re running out of space on that file system and want to add another device, how does that work?
tumbleweed:~ $ bcachefs device add /mnt/dummy/ /dev/vdc
tumbleweed:~ $ df -h | grep dummy
/dev/vdb:/dev/vdc 7.3G 2.0M 7.2G 1% /mnt/dummy
Quite easy, and no separate step is required to extend the file system: this happens automatically, which is quite nice. You can even go a step further and specify how much space the file system should use on the new device (which doesn’t make much sense in this case):
tumbleweed:~ $ bcachefs device add --fs_size=4G /mnt/dummy/ /dev/vdd
tumbleweed:~ $ df -h | grep mnt
/dev/vdb:/dev/vdc:/dev/vdd 11G 2.0M 11G 1% /mnt/dummy
Let’s remove this configuration and then create a file system with multiple devices right from the beginning:
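Before the devices can be reformatted, the existing file system needs to be unmounted; a quick sketch of that step (the format with “--force” below will then overwrite what is currently on the devices):
tumbleweed:~ $ umount /mnt/dummy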
tumbleweed:~ $ bcachefs format --force /dev/vdb /dev/vdc
We’ve now formatted two devices at once, which is great, but how do we mount that? This will obviously not work:
tumbleweed:~ $ mount /dev/vdb /dev/vdc /mnt/dummy/
mount: bad usage
Try 'mount --help' for more information.
The syntax is a bit different, so either do it with “mount”:
tumbleweed:~ $ mount -t bcachefs /dev/vdb:/dev/vdc /mnt/dummy/
tumbleweed:~ $ df -h | grep dummy
/dev/vdb:/dev/vdc 7.3G 2.0M 7.2G 1% /mnt/dummy
… or use the “bcachefs” utility with the same syntax for the list of devices:
tumbleweed:~ $ umount /mnt/dummy
tumbleweed:~ $ bcachefs mount /dev/vdb:/dev/vdc /mnt/dummy/
tumbleweed:~ $ df -h | grep dummy
/dev/vdb:/dev/vdc 7.3G 2.0M 7.2G 1% /mnt/dummy
What is a bit annoying is that you need to know which devices you can still add, as you won’t see this in the “lsblk” output:
tumbleweed:~ $ lsblk | grep -w "4G"
└─vda3 254:3 0 1.4G 0 part [SWAP]
vdb 254:16 0 4G 0 disk /mnt/dummy
vdc 254:32 0 4G 0 disk
vdd 254:48 0 4G 0 disk
vde 254:64 0 4G 0 disk
vdf 254:80 0 4G 0 disk
vdg 254:96 0 4G 0 disk
You do see it, however, in the “df -h” output:
tumbleweed:~ $ df -h | grep dummy
/dev/vdb:/dev/vdc 7.3G 2.0M 7.2G 1% /mnt/dummy
Another way to get those details is once more to use the “bcachefs” utility:
tumbleweed:~ $ bcachefs fs usage /mnt/dummy/
Filesystem: d6f85f8f-dc12-4e83-8547-6fa8312c8eca
Size: 7902739968
Used: 76021760
Online reserved: 0
Data type Required/total Durability Devices
btree: 1/1 1 [vdb] 1048576
btree: 1/1 1 [vdc] 1048576
(no label) (device 0): vdb rw
data buckets fragmented
free: 4256956416 16239
sb: 3149824 13 258048
journal: 33554432 128
btree: 1048576 4
user: 0 0
cached: 0 0
parity: 0 0
stripe: 0 0
need_gc_gens: 0 0
need_discard: 0 0
capacity: 4294967296 16384
(no label) (device 1): vdc rw
data buckets fragmented
free: 4256956416 16239
sb: 3149824 13 258048
journal: 33554432 128
btree: 1048576 4
user: 0 0
cached: 0 0
parity: 0 0
stripe: 0 0
need_gc_gens: 0 0
need_discard: 0 0
capacity: 4294967296 16384
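If you just want to know which devices belong to a file system without mounting it, inspecting the superblock should work as well; a sketch (the output is quite verbose and, among other things, lists the member devices):
tumbleweed:~ $ bcachefs show-super /dev/vdb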
Note that shrinking a file system on a device is currently not supported; only growing is.
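Growing is done per device. As a sketch, assuming /dev/vdb itself has been enlarged (for example the virtual disk was resized on the hypervisor side), the bcachefs-tools provide a resize subcommand that extends the file system on that device; without an explicit size it should grow to use the whole device:
tumbleweed:~ $ bcachefs device resize /dev/vdb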
In the next post we’ll look at how you can mirror your data across multiple devices.