Timeline for How to defragment compressed btrfs files?
Current License: CC BY-SA 4.0
6 events
| when | what | by | license | comment |
|---|---|---|---|---|
| May 13 at 7:00 | comment added | grawity | | It's generic, from the unitless "32" in OP's output. NTFS compression uses 64k. I have no idea what ZFS uses. |
| May 13 at 6:59 | comment added | Stephen Kitt | | I take it the “32kB” reference in your answer is generic, and is fine as-is — on Btrfs, compression uses 128KiB extents (see the compression documentation). |
| May 12 at 18:29 | vote accept | peterh | | |
| May 12 at 18:22 | comment added | peterh | | I use btrfs on hdd and it defrags well. After defrag, it is fine. In the meantime I actually did a full fs defrag (note: online, while my production processes were running on it!); it showed progress, and this time it did a lot of work on the newly created files. So btrfs on hdd is not so bad at all, we only need to know how to handle it :-) Online shrink support and online defrag support used to be unthinkable... |
| May 12 at 18:19 | comment added | peterh | | Afaik it is still an issue even on SSDs, but to a much lesser extent: 1) block device-level readahead still performs badly if it does not know about the fs; 2) an ssd can only trim and reflash whole blocks, and every block has a wear count (maybe 100000 or so). If the FS blocks are smaller than the ssd blocks and fragmentation is extreme, it results in more wear (as the average number of block-level trims per fs-level block write will be higher). |
| May 12 at 17:01 | history answered | grawity | CC BY-SA 4.0 | |
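For reference, a minimal sketch of the kind of online full-filesystem defragment run mentioned in the comments above, using the standard btrfs-progs tooling. The mount point `/mnt/data`, the example file path, and the choice of zstd compression are assumptions for illustration, not taken from the original question.

```sh
# Assumed mount point of the btrfs filesystem; substitute your own.
MOUNTPOINT=/mnt/data

# Recursively defragment all files, recompressing extents with zstd
# (-r: recurse into the directory, -v: print each file processed,
#  -czstd: compress file contents while defragmenting).
# This can be run online, while the filesystem is in use.
sudo btrfs filesystem defragment -rv -czstd "$MOUNTPOINT"

# Optionally inspect per-file fragmentation before/after (filefrag is
# from e2fsprogs; compressed btrfs files typically report many small
# extents because compression works in ~128KiB chunks).
filefrag -v "$MOUNTPOINT"/path/to/some/large/file
```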