Timeline for How to parallelize dd?
Current License: CC BY-SA 3.0
16 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Sep 30 at 12:58 | comment added | ron | | "Well in my case the CPU is the bottleneck - don't ask me why, because I don't know." That kind of shakes my confidence in your claim that the CPU is the bottleneck. |
| Sep 30 at 5:03 | answer added | Snorch | | timeline score: 0 |
| Sep 8, 2017 at 2:05 | comment added | sudo | | @CristianCiupitu CPU-bound filesystem copy isn't as rare as you suggest. Besides faster SSDs, the file might actually be in the system's disk cache. I'm looking at a cp task taking 100% CPU on my server right now, I think because I have a 20 GiB file that Ubuntu's disk caching seems to have loaded into RAM. |
| Oct 11, 2014 at 8:32 | answer added | Ole Tange | | timeline score: 8 |
| Oct 10, 2014 at 23:49 | comment added | frostschutz | | dd hogs the CPU by default due to its small block size; make it larger, like bs=1M (see the sketch after the table). |
| Oct 10, 2014 at 22:55 | history edited | Kalle Richter | CC BY-SA 3.0 | explained CPU usage |
| Oct 10, 2014 at 21:58 | answer added | G-Man Says 'Reinstate Monica' | | timeline score: 3 |
| Oct 10, 2014 at 21:22 | history edited | Kalle Richter | CC BY-SA 3.0 | added 65 characters in body |
| Oct 10, 2014 at 20:57 | history edited | Kalle Richter | CC BY-SA 3.0 | explained background |
| Oct 10, 2014 at 20:47 | comment added | Cristian Ciupitu | | What CPU and filesystem do you have? How big is the file (length & blocks)? |
| Oct 10, 2014 at 20:44 | history edited | Kalle Richter | CC BY-SA 3.0 | added 112 characters in body |
| Oct 10, 2014 at 20:44 | comment added | Kalle Richter | | @CristianCiupitu Well, in my case the CPU is the bottleneck - don't ask me why, because I don't know. Your answer made me realize that the solution should support multiple filesystems (i.e. be able to handle sparse files). (edited) |
| Oct 10, 2014 at 20:42 | comment added | Kalle Richter | | @mikeserv I don't understand your comment... |
| Oct 10, 2014 at 20:40 | comment added | Cristian Ciupitu | | I don't see what you could gain from this, as this operation is I/O-bound except in extreme cases. In my opinion the best option would be a program that is sparse-aware, e.g. something like xfs_copy. Its man page mentions: "However, if the file is created on an XFS filesystem, the file consumes roughly the amount of space actually used in the source filesystem by the filesystem and the XFS log. The space saving is because xfs_copy seeks over free blocks instead of copying them and the XFS filesystem supports sparse files efficiently." |
| Oct 10, 2014 at 20:38 | history edited | Kalle Richter | CC BY-SA 3.0 | added 43 characters in body |
| Oct 10, 2014 at 20:31 | history asked | Kalle Richter | CC BY-SA 3.0 | |
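
The frostschutz comment above points at the usual first step before reaching for parallelism: give dd a larger block size so it spends less time on per-block syscall overhead. A minimal sketch of that tip, assuming hypothetical file names in.img and out.img:

```sh
# dd defaults to a 512-byte block size, so copying a large file issues
# millions of tiny read()/write() calls and the CPU becomes the bottleneck.
# A larger block size (frostschutz suggests bs=1M) cuts the syscall count.
dd if=in.img of=out.img bs=1M
```

This is only the single-process tuning suggested in the comment, not an answer to the parallelization question itself; the answers in the timeline (e.g. Ole Tange's) address splitting the copy across processes.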