
btrfs Enhancements #4796

Closed
dcrdev opened this issue Jul 29, 2016 · 20 comments

Comments

@dcrdev

dcrdev commented Jul 29, 2016

Currently cockpit treats disks that are members of a btrfs array as individual disks, and there is no support for common btrfs tasks such as scrubbing, snapshots, and adding/removing members.

All of the above is already implemented for mdadm; can this be implemented for btrfs? Ideally, any btrfs RAID should also appear in the RAID devices box on the Storage tab.

@mvollmer mvollmer self-assigned this Sep 9, 2016
@Centril

Centril commented Feb 19, 2017

This addition would be neat!

@thunderstorm99

I would also very much welcome this coming to fruition

@DidierLmn

Would love to see this implemented if someone finds the time.

@Conan-Kudo

Long-time Cockpit user on Fedora with Btrfs and would love to see this functionality, too!

@mvollmer
Member

Could you guys get together and produce some concrete proposals? UI mockups, what commands the buttons should run, etc. We don't have any btrfs users on the Cockpit team, I'm afraid.

@Conan-Kudo

@mvollmer I can certainly try to write up something about how I would expect to be able to manage Btrfs. It's a shame no one on the team uses Btrfs, though. It's a nice filesystem. 🥇

@martinpitt
Member

I actually do use btrfs on my laptop (happy user for many years), but not across multiple disks. I'm using it for subvolumes mostly, which are so much better than partitions or even LVM LVs.

@mvollmer
Member

> I actually do use btrfs on my laptop (happy user for many years), but not across multiple disks. I'm using it for subvolumes mostly, which are so much better than partitions or even LVM LVs.

Oh, cool, that makes me want to try this also.

@mvollmer mvollmer removed their assignment Apr 18, 2019
@viktorstrate

Rockstor is a CentOS-based NAS operating system that uses btrfs.
It also has a web UI from which the btrfs disks can be managed.
Maybe we could take some inspiration from the way they have done it?

See their documentation for managing pools
http://rockstor.com/docs/pools-btrfs.html

@cmurf

cmurf commented Aug 18, 2019

Subvolumes and Snapshots
I know that libbtrfsutil provides support for subvolume operations (enumerate, create, delete, snapshot) and also has Python 3 bindings.
https://github.com/kdave/btrfs-progs/tree/master/libbtrfsutil
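
For context, the CLI equivalents of those subvolume operations look like this (the mount point and subvolume names are placeholders, not from the original comment):

```shell
# Create a subvolume, snapshot it read-only, list, and delete.
btrfs subvolume create /mnt/data/projects
btrfs subvolume snapshot -r /mnt/data/projects /mnt/data/.snapshots/projects-1
btrfs subvolume list /mnt/data
btrfs subvolume delete /mnt/data/projects
```

A Cockpit UI could drive the same operations through libbtrfsutil (or its Python bindings) rather than shelling out.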

But I don't see any scrub or device management functionality in there. I do see subvolume operations, as well as device add and remove, in
https://github.com/storaged-project/libblockdev/blob/master/src/plugins/btrfs.h

Scrubbing
It might be that libbtrfsutil and/or storaged needs some extension. But you could also just leverage systemd service and timer units, which themselves just execute user-space commands, and Cockpit could simply have an enable/disable toggle for the timer.
Examples of this:
https://aur.archlinux.org/packages/btrfs-progs-git/
https://git.archlinux.org/svntogit/packages.git/plain/trunk/[email protected]?h=packages/btrfs-progs
https://git.archlinux.org/svntogit/packages.git/plain/trunk/[email protected]?h=packages/btrfs-progs
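
The Arch approach linked above boils down to a oneshot service plus a timer, roughly along these lines (an illustrative sketch, not the exact packaged files):

```ini
# btrfs-scrub@.service — scrub the filesystem whose escaped mount
# point is the instance name (e.g. btrfs-scrub@home for /home).
[Unit]
Description=Btrfs scrub on %f

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs scrub start -B %f

# btrfs-scrub@.timer — run the matching service once a month.
[Unit]
Description=Monthly Btrfs scrub on %f

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

Cockpit's toggle would then map to something like `systemctl enable --now btrfs-scrub@-.timer` (the instance name is the systemd-escaped mount point; `-` means `/`).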

A more sophisticated UI might allow changing the scrub interval, manual start and stop, selection of specific file systems, progress and statistics, or even cancel/resume based on a schedule or a load threshold plus duration. It's not obvious, but btrfs scrub cancel leaves intact a tracking file recording what has been completed, and btrfs scrub resume will continue scrubbing from that point. The scrub IO priority class is idle, so in theory any other request is promoted above scrub.
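
The manual lifecycle described above maps to these commands (the mount point is a placeholder; these need a mounted btrfs filesystem and root):

```shell
btrfs scrub start /mnt     # kick off a background scrub
btrfs scrub status /mnt    # progress and error statistics
btrfs scrub cancel /mnt    # stop; progress is checkpointed on disk
btrfs scrub resume /mnt    # continue from the checkpoint
```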

If there is a way to detect an unclean shutdown (power failure or crash), it'd be cool if Cockpit could present a message to the user: Unclean shutdown? Perform file system scrub (if supported)? This can apply to md, LVM, Btrfs, ZFS, XFS, and (eventually) Stratis. As much as possible, I'd leverage or modify existing UI to match use cases regardless of the backend storage.

Device add and remove
Multiple-device handling would also be super cool, even just for the single-device case where a disk is about to fail and needs live replacement. I'm not seeing anything about it in libbtrfsutil or its Python bindings, but I do see device add/remove in libblockdev. Basic support is pretty simple: choose device and volume, and click go. The add command does all the work of sanity-checking the target device, formatting it, and resizing the volume. Delete is similar: it migrates block groups live in the background, much like pvmove, and when it completes, wipes the Btrfs signature from the removed device. A possible option is supporting the newer replace subcommand, which is a simpler and faster way of swapping a device with a same-sized or larger replacement.
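
Concretely, the operations in question are (device names and mount point are placeholders; these require real block devices and root):

```shell
# Grow the pool: add formats the device and resizes the volume.
btrfs device add -f /dev/sdX /mnt
# Shrink: migrate block groups off the device, then drop it.
btrfs device remove /dev/sdX /mnt
# Live-replace a failing device with an equal-or-larger one.
btrfs replace start /dev/sdX /dev/sdY /mnt
btrfs replace status /mnt
```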

Advanced device management to handle different replication strategies? Redundancy doesn't change just by adding a device, and the balancing of data across devices isn't altered either. Such handling probably requires a few extra conversations, because the user's expectations after adding a device aren't obvious: did they intend to make the pool bigger, or to add redundancy?
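
To illustrate why those expectations differ, here's a back-of-the-envelope sketch (my own illustration, ignoring metadata overhead and chunk granularity) of usable capacity under the raid1 profile, where adding a device grows the pool but still keeps exactly two copies of each chunk:

```shell
#!/bin/bash
# Rough usable capacity for a btrfs raid1 profile: every chunk is
# stored twice, on two different devices, so usable space is capped
# both by half the total and by the capacity outside the largest disk.
raid1_usable() {
    local total=0 largest=0 size
    for size in "$@"; do
        total=$((total + size))
        if (( size > largest )); then largest=$size; fi
    done
    local half=$((total / 2)) rest=$((total - largest))
    if (( half < rest )); then echo "$half"; else echo "$rest"; fi
}

raid1_usable 1000 1000        # two equal disks  -> 1000
raid1_usable 2000 1000        # mismatched pair  -> 1000 (1000 stranded)
raid1_usable 1000 1000 1000   # add a third disk -> 1500, not extra copies
```

This is exactly the kind of feedback a UI could show before the user commits to an add or a profile conversion.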

Resize
Btrfs supports live grow and shrink; there's a specific ioctl for it, all handled by the kernel. There is no locality optimization in the Btrfs allocator, so there's no downside to multiple resizes. If anything, any compaction of block groups resulting from a shrink will make the file system a bit more efficient.
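
For reference, the online resize operations look like this (the mount point is a placeholder):

```shell
btrfs filesystem resize +10G /mnt   # grow by 10 GiB, online
btrfs filesystem resize -5G /mnt    # shrink by 5 GiB, also online
btrfs filesystem resize max /mnt    # grow to fill the underlying device
```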

@ghost

ghost commented Apr 10, 2020

It would be great if btrfs were provided as an option in storaged.

add_fsys(true, { value: "empty", title: _("No Filesystem") });

vfat: 11,

@ghost

ghost commented Apr 10, 2020

Snapshots of subvolumes can be easily managed by snapper, and subvolumes can be created using "btrfs subvolume create /path-to-subvolume".
It would be great if Cockpit could build a GUI for btrfs.


@histeriks

The only thing that I'm really missing in Cockpit (which became my new home, btw). It would be bloody awesome to see this included...

@dacresni

Can btrfs be handled like volume grouping?

@Forza-tng

Forza-tng commented Oct 15, 2020

Hi, would it make sense to split the bug and the enhancement proposals into separate issues? IMHO, the issue of not handling multiple devices (i.e. listing all members of a RAID profile as separate, independent filesystems) should be considered a bug. Then we have the other enhancements, such as scrubbing, balancing, and resizing, as feature requests.

Here is a reference to the same issue in udisks: storaged-project/udisks#802

@tbzatek

tbzatek commented Jan 26, 2021

The btrfs support in UDisks is fairly basic at the moment and with limited (human) resources it evolves rather slowly. The current focus is to provide sane base functionality with respect to filesystem specifics rather than providing exhaustive feature support. Contributions are always welcome though, there's a separate btrfs module that could evolve somewhat independently from the UDisks daemon core:
http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Manager.BTRFS.html
http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Filesystem.BTRFS.html

With btrfs bringing new concepts, there are a couple of open questions with regard to the UDisks2 D-Bus object model. It would really help to know the use cases from the Cockpit side, the plans, and the big picture. For the moment, it looks like some tweaks will necessarily be needed in the layers above UDisks to provide a clean object/volume presentation.

@carlwgeorge

I see some good ideas in this issue, but rather than trying to solve everything at once, could cockpit start small and just fix how btrfs filesystem usage is displayed?

[screenshot: cockpit-btrfs]

On this system, xvda1 is an ext4 filesystem, and xvd{e,f,g,h}1 make up a four-device raid10 btrfs filesystem.
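
The per-filesystem view Cockpit would need is what the CLI already reports (the mount point is a placeholder):

```shell
btrfs filesystem show /mnt    # one volume with its member devices
btrfs filesystem usage /mnt   # allocated vs. used, per device
btrfs filesystem df /mnt      # data/metadata/system breakdown
```

Surfacing the output of `filesystem show`, rather than treating each member disk as an independent filesystem, would fix the display above.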

@MurzNN

MurzNN commented Aug 24, 2021

Yes, implementing this in small steps is a great idea! Here is my separate feature request, #16245, to simply add a "Format to BTRFS" option, but it was merged into this large issue instead of being implemented separately as a small and quick feature.

@mvollmer
Member

Please check out #16408.

@jelly
Member

jelly commented Feb 14, 2024

Basic btrfs support is now included in 309.

@jelly jelly closed this as completed Feb 14, 2024