Timeline for "Command to delete directories whose contents are less than a given size"
Current License: CC BY-SA 4.0
4 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jan 17, 2022 at 19:04 | comment added | BoeroBoy | | In practice it works much better for me anyway. Once the metadata is cached, other threads that might re-use it speed up by a significant margin. 56 threads on this box and it's about 16x faster in most of my experience. In my case I needed to purge small or empty garbage dirs from a web crawler, so I left the full min/max depth. |
| Jan 17, 2022 at 17:28 | comment added | Ole Tange | | @StéphaneChazelas "Parallelizing tasks that are I/O bound tasks is counter productive." Not always. The answer is really: "it depends, so measure instead of assume". oletange.wordpress.com/2015/07/04/parallel-disk-io-is-it-faster |
| Jan 17, 2022 at 14:15 | comment added | Stéphane Chazelas | | Parallelizing tasks that are I/O bound is counterproductive. Also, running `du` for each dir means you're going to get the disk usage of the same files several times: `du -s dir` includes the disk usage reported by `du -s dir/subdir`. Run `du` without `-s` and without `find` instead. You'll need `-h` for `du` if you want human suffixes. So here just `du -lh \| sort -rh` (all those `-l`, `-h` being GNU extensions, and here assuming dir paths don't contain newline characters). |
| Jan 17, 2022 at 12:57 | answered | BoeroBoy | CC BY-SA 4.0 | |
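
A minimal sketch of the single-pass approach Stéphane Chazelas describes above, assuming GNU coreutils (`-l` and `-h` are GNU extensions to `du`) and directory paths that contain no newline characters:

```sh
# One du invocation walks the tree once, printing the disk usage of
# every directory (no -s), with -l also counting hard-linked files in
# each directory where they appear.  sort -rh orders the
# human-readable sizes (-h) largest first.
du -lh | sort -rh
```

By contrast, invoking `du -s` once per directory from `find` stats the same files repeatedly and double-counts nested directories, since `du -s dir` already includes everything under `dir/subdir`.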
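And a hedged sketch of the kind of measurement Ole Tange's "measure instead of assume" comment calls for, assuming GNU `parallel` is installed and a bash shell (where the `time` keyword can wrap a whole pipeline). Whether the parallel variant wins depends entirely on the storage underneath; note BoeroBoy's caveat that a later run benefits from metadata already cached by an earlier one, so repeat or alternate the runs before drawing conclusions:

```sh
# Sequential baseline: one du -s per top-level directory.
time find . -mindepth 1 -maxdepth 1 -type d -exec du -s {} \;

# Parallel variant: the same per-directory work fanned out by GNU
# parallel.  -0 matches find's -print0, so directory names containing
# whitespace or other odd characters survive intact.
time { find . -mindepth 1 -maxdepth 1 -type d -print0 | parallel -0 du -s {}; }
```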