  • I don't see what you could gain from this, as this operation is I/O bound except in extreme cases. In my opinion the best option would be a program that is sparse-aware, e.g. something like xfs_copy (see the sketch after this thread). Its man page mentions: "However, if the file is created on an XFS filesystem, the file consumes roughly the amount of space actually used in the source filesystem by the filesystem and the XFS log. The space saving is because xfs_copy seeks over free blocks instead of copying them and the XFS filesystem supports sparse files efficiently." Commented Oct 10, 2014 at 20:40
  • @mikeserv I don't understand your comment... Commented Oct 10, 2014 at 20:42
  • @CristianCiupitu Well, in my case the CPU is the bottleneck; don't ask me why, because I don't know. Your answer made me realize that the solution should work across multiple filesystems that can handle sparse files. (edited) Commented Oct 10, 2014 at 20:44
  • What CPU and filesystem do you have? How big is the file (length & blocks)? Commented Oct 10, 2014 at 20:47
  • dd hogs the CPU by default because of its small default block size. Make it larger, e.g. bs=1M (see the sketch after this thread). Commented Oct 10, 2014 at 23:49
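
A minimal sketch of the sparse-aware approach mentioned in the first comment, assuming the source is an unmounted (or read-only mounted) XFS partition; the device name and image path below are hypothetical placeholders. xfs_copy seeks over free blocks and writes the target as a sparse file, so only allocated data consumes space:

    # Copy only the allocated blocks of an XFS filesystem into an image file.
    # /dev/sdb1 and the image path are placeholders for illustration.
    xfs_copy /dev/sdb1 /mnt/backup/sdb1.img

    # Compare apparent size vs. actual disk usage of the sparse image.
    ls -lh /mnt/backup/sdb1.img
    du -h /mnt/backup/sdb1.img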
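
A minimal sketch of the larger-block-size suggestion from the last comment, with hypothetical source and destination paths. bs=1M makes dd read and write in 1 MiB chunks instead of the 512-byte default, which cuts the per-block syscall overhead that drives up CPU usage; conv=sparse (a GNU dd option) seeks over all-zero blocks so the destination file stays sparse:

    # 1 MiB blocks reduce syscall overhead; conv=sparse skips writing
    # all-zero blocks so the copy stays sparse. Paths are placeholders.
    dd if=/path/to/source.img of=/path/to/dest.img bs=1M conv=sparse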