slow disk read speed even though a first read pass was 20x faster #796
Comments
Perhaps btrfs is doing something while you run your tests.
Why don't you use btrfs raid1 instead of md? Btrfs balances reads across the different devices based on PID (not sure about thread ID).
Re: balancing reads based on TID, it seems it does, for example:
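The read-balancing behavior discussed here can be illustrated in miniature. Historically, btrfs raid1 picked which mirror to read from using the submitting task's PID modulo the number of copies; `pick_mirror` below is a hypothetical helper that sketches that heuristic, not the actual kernel function:

```c
#include <assert.h>
#include <sys/types.h>

/* Sketch of btrfs's historical raid1 read policy: the submitting task's
   PID, modulo the number of mirrors, selects the copy to read from.
   A consequence: every read issued by one process lands on one device. */
static int pick_mirror(pid_t pid, int num_mirrors)
{
    return (int)(pid % num_mirrors);
}
```

One consequence worth noting: a single-threaded copy always reads from the same mirror, while a pool of processes with mixed PID parities spreads reads across both devices.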
Hello,
I have a btrfs filesystem on top of an md RAID1 of 2 NVMe drives, mounted with these options: compress=zstd:1,nobarrier,commit=150,noatime. I copied 20 large files to this filesystem. They are easily compressible text data, so they take up only about 10% of their original size on disk.
I then concatenated all 20 files into one; the disk usage didn't change much. I then ran a multithreaded C program that read the file in 256 MB chunks, and according to its logs I got 200-300 MB/s read speeds. The large file is 8 TB, and each small file is 400 GB.
After this initial run, I re-ran the program, and now I'm barely getting 20-30 MB/s read speeds. What's happening here?
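The original test program isn't shown in the issue, but the setup described above (several threads each reading large fixed-size chunks of one file) can be sketched as follows. This is a hypothetical reconstruction, not the reporter's code; `read_file_chunked`, its slicing scheme, and the thread count are all assumptions:

```c
#include <assert.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

struct task {
    int fd;        /* shared read-only descriptor */
    off_t offset;  /* where this thread's slice starts */
    off_t length;  /* how many bytes this thread covers */
    size_t chunk;  /* read granularity (256 MB in the report) */
    off_t done;    /* bytes actually read by this thread */
};

static void *reader(void *arg)
{
    struct task *t = arg;
    char *buf = malloc(t->chunk);
    off_t pos = t->offset, end = t->offset + t->length;
    t->done = 0;
    while (pos < end) {
        size_t want = t->chunk;
        if ((off_t)want > end - pos)
            want = (size_t)(end - pos);
        /* pread() is thread-safe: no shared file offset to race on */
        ssize_t n = pread(t->fd, buf, want, pos);
        if (n <= 0)
            break;
        t->done += n;
        pos += n;
    }
    free(buf);
    return NULL;
}

/* Read `path` with `nthreads` threads, each covering one contiguous
   slice of the file in `chunk`-byte reads; returns total bytes read. */
static off_t read_file_chunked(const char *path, size_t chunk, int nthreads)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    struct stat st;
    fstat(fd, &st);
    off_t slice = (st.st_size + nthreads - 1) / nthreads;
    pthread_t tid[nthreads];
    struct task tasks[nthreads];
    for (int i = 0; i < nthreads; i++) {
        tasks[i].fd = fd;
        tasks[i].offset = (off_t)i * slice;
        tasks[i].length = tasks[i].offset >= st.st_size ? 0
            : (tasks[i].offset + slice > st.st_size
                   ? st.st_size - tasks[i].offset : slice);
        tasks[i].chunk = chunk;
        pthread_create(&tid[i], NULL, reader, &tasks[i]);
    }
    off_t total = 0;
    for (int i = 0; i < nthreads; i++) {
        pthread_join(tid[i], NULL);
        total += tasks[i].done;
    }
    close(fd);
    return total;
}
```

Note that with this pattern every read goes through the page cache; on a first pass over a freshly written, compressed file the data may still be partially cached, which is one reason a first and second pass can behave very differently.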
I plan to remove the 20 files, defragment the huge file, and then reboot, to see which of these affects the read speed.
My kernel is
6.1.0-21-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.90-1 (2024-05-03) x86_64 GNU/Linux
and the version I'm running is btrfs-progs v6.2.