man 5 btrfs - compress-force warnings missing #960
Comments
The limit is 128K, not 512K, and the limit comes from compression itself. Fragmentation in btrfs is not really related to extent size, which is a little counter-intuitive. On the other hand, the smaller the extent size is, the smaller the bookend problem is (the extra space that is no longer referenced, but can only be released when the whole original extent is released). It's not an easy task to educate all end users about this, and it would be more convincing if you could provide a real-world scenario where smaller extent sizes are really causing problems.
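To make the bookend effect concrete, here is a rough way to observe it (a sketch only, assuming an otherwise idle btrfs filesystem mounted at /mnt without compression; `compsize` comes from the btrfs-compsize package, and the paths and sizes are illustrative):

```sh
# Write one large file sequentially; it lands in a handful of big extents.
dd if=/dev/urandom of=/mnt/victim bs=1M count=128 conv=fsync

# Overwrite a single 4K block in the middle; CoW writes it as a new extent.
dd if=/dev/urandom of=/mnt/victim bs=4K count=1 seek=16000 conv=notrunc,fsync

# Disk usage now exceeds the referenced size: the overwritten 4K of the
# original extent is no longer referenced, but it stays allocated until
# the whole extent is released.
compsize /mnt/victim
```

With 512K extents the same overwrite can pin at most 512K of stale data, which is the point above about smaller extents shrinking the bookend problem.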
The 512K size limit applies to uncompressed extents when the `compress-force` mount option is in effect (the 128K limit applies to compressed extents).
The effect is trivial to reproduce:
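Something along these lines shows it (a sketch; the device, mount point, and sizes are illustrative):

```sh
# Scratch filesystem mounted with forced compression.
mkfs.btrfs -f /dev/vdb
mount -o compress-force=zstd /dev/vdb /mnt

# Incompressible data, written sequentially in one pass.
dd if=/dev/urandom of=/mnt/incompressible bs=1M count=64 conv=fsync

# The extent list should show nothing larger than 512K.
filefrag -v /mnt/incompressible

# Same data without forced compression for comparison: a few large extents.
mount -o remount,compress=no /mnt
dd if=/dev/urandom of=/mnt/plain bs=1M count=64 conv=fsync
filefrag -v /mnt/plain
```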
Changing that 512K size in the kernel and repeating the test moves the limit accordingly. As you can see, whatever size is used there becomes the upper bound on uncompressed extent size. This problem is specific to `compress-force`.
The kernel checks for the force-compress option in the function that decides whether a range should be compressed. For my experiment above I simply changed the 512K size to a different value.

If the force option is set, compression is always attempted, no matter how badly previous attempts went. If neither the compress mount option nor a per-file compression flag is set, no compression is attempted at all. If we get all the way to the end of the function, and it returns 1, and compression succeeds, then the (logical) extent size is limited to 128K, because that's the size limit for compressed extents.

There's another effect of `compress-force` as well.
Easy:
I say it's an easy test case because I have already run into this several times accidentally, before getting the hint and removing `compress-force` from my mount options.
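For anyone wanting to check whether an existing filesystem is hitting this, something like the following works (a sketch; the path is a placeholder):

```sh
# A large file written sequentially on a compress-force mount: thousands of
# extents for a multi-gigabyte incompressible file is the telltale sign.
filefrag /srv/images/disk.img

# compsize (btrfs-compsize) shows whether the data actually compressed;
# close to 100% with a huge extent count means the force option bought
# nothing except fragmentation.
compsize /srv/images/disk.img
```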
Not really relevant for this kind of use case. If a user is writing files with 128M extents, and they want to read them quickly, it's very common that they're not overwriting the files, so they won't encounter the bookending problem. Bookending usually only comes up when people use prealloc or indiscriminate defrag. This issue is about getting sensible extent sizes from sequential writes.
But only for uncompressed extents during compress-force? Would it be possible to increase compressed extent sizes too, or is 128K assumed in too many places? Bigger extents would probably make more sense now that media is so much faster. At 1 GiB/s, the latency per KiB is only about 1 µs, or 1 ms per MiB.
For compress without force, the kernel gives up compressing a file once it sees the data doesn't compress, so subsequent writes take the normal path and get full-sized extents.
Check the kernel's 128K limit on compressed extent size. I think it's mostly a matter of changing that one limit...
...but I haven't tested it. There's a lot of drama around that change: there would need to be an incompat flag, and maybe some bytes used up in the superblock or a new tree item to record what the size is, if it becomes configurable. Performance for some workloads would improve, while other workloads would suffer. The filesystem would be unmountable on small-memory machines, although the criteria for being considered "small" probably also rule out using the higher supported compression levels anyway. To pay for the drama, there would have to be a large, provable benefit for making the change. On low-latency NVMe devices and modern CPUs, the metadata processing gains from reducing the number of compressed extents for large files may be negligible. On the other hand, the cost of unnecessary decompression for seeky workloads might also be negligible. This is all easy for someone with some spare time to test, and hopefully post the results.
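For what it's worth, a crude baseline for the two costs being weighed here can be measured today by comparing many small compressed extents against a few large uncompressed ones (not the hypothetical larger compressed extents; paths and sizes below are illustrative):

```sh
# Store the same compressible data both ways.
mount -o compress-force=zstd /dev/vdb /mnt
cp /path/to/big-compressible-file /mnt/compressed
mount -o remount,compress=no /mnt
cp /path/to/big-compressible-file /mnt/plain
sync

# Cold-cache sequential reads of each copy (run as root).
echo 3 > /proc/sys/vm/drop_caches
time cat /mnt/compressed >/dev/null
echo 3 > /proc/sys/vm/drop_caches
time cat /mnt/plain >/dev/null
```

Repeating the reads at scattered offsets (dd with skip=) would give the seeky-workload side of the comparison.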
According to users on #btrfs, `compress-force` has a side effect which limits uncompressed extent size to 512KB instead of 128MB, leading to gratuitous fragmentation for incompressible data.

This should be documented in the manpage for mount options (usually `man 5 btrfs`) under `compress, compress=<type[:level]>, compress-force, compress-force=<type[:level]>`.

Fragmentation is a far more serious issue in many hardware configurations than a small amount of additional compute load.
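For the manpage itself, wording along these lines would cover it (just a sketch of the idea, not proposed final text):

```
Note: compress-force also changes how data is split into extents. Data that
does not compress is still written in small chunks, so uncompressed extents
are limited to 512KiB (and compressed extents to 128KiB) instead of the
usual 128MiB maximum. Incompressible files written under compress-force can
therefore become heavily fragmented.
```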