  • 2
    For the second example, is dd even required, or can pv/nc handle /dev/sda just fine on their own? (I have noticed some commands "throw up" when trying to read special files like that one, or files containing 0x00 bytes.) Commented Sep 7, 2015 at 4:07
  • 5
    @user1794469 Will the compression help? I'm thinking the network is not where the bottleneck is. Commented Sep 7, 2015 at 6:40
  • 17
    Don’t forget that in bash one can use > /dev/tcp/IP/port and < /dev/tcp/IP/port redirections instead of piping to and from netcat respectively. Commented Sep 7, 2015 at 16:41
  • 6
    Good answer. Gigabit Ethernet is often faster than hard drive speed, so compression is useless. To transfer several files consider tar cv sourcedir | pv | nc dest_host_or_ip 9999 and cd destdir ; nc -l 9999 | pv | tar xv. Many variations are possible; you might, e.g., want to keep a .tar.gz on the destination side rather than an unpacked copy. If you copy a directory to a directory, for extra safety you can run an rsync afterwards, e.g. from the destination: rsync --inplace -avP [email protected]:/path/to/source/. /path/to/destination/. This will guarantee that all files are indeed exact copies. Commented Sep 8, 2015 at 9:08
  • 3
    Instead of using IPv4 you can achieve better throughput by using IPv6, because it has a bigger payload. You don't even need to configure it: if the machines are IPv6-capable, they probably already have an IPv6 link-local address. Commented Sep 8, 2015 at 13:07
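The first comment asks whether dd is needed at all. On Linux a block device like /dev/sda is just a file, so any tool that reads standard input or a path can consume it; dd mainly buys you block-size control. A quick local sketch, using a regular file as a stand-in for /dev/sda (run against a real device only as root, and with great care), shows the three read styles produce identical bytes:

```shell
# A regular file stands in for /dev/sda here (assumption for a safe demo).
tmpfile=$(mktemp)
head -c 1048576 /dev/urandom > "$tmpfile"   # 1 MiB of test data

# Read the same bytes three ways and checksum each stream.
sum_dd=$(dd if="$tmpfile" bs=64K 2>/dev/null | sha256sum | cut -d' ' -f1)
sum_cat=$(cat "$tmpfile" | sha256sum | cut -d' ' -f1)
sum_redir=$(sha256sum < "$tmpfile" | cut -d' ' -f1)

# All three should match: dd adds no magic, only buffering control.
[ "$sum_dd" = "$sum_cat" ] && [ "$sum_cat" = "$sum_redir" ] && echo "identical"
rm -f "$tmpfile"
```

So `pv < /dev/sda | nc …` works without dd; dd is still handy when you want a specific block size (e.g. `bs=16M`) for throughput on slow devices.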
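The /dev/tcp comment can be sketched end-to-end on one machine. Bash (not plain sh) treats a redirection to /dev/tcp/HOST/PORT as "open a TCP connection"; port 9999 and 127.0.0.1 below are placeholders, and a small Python listener stands in for `nc -l` so the sketch is self-contained:

```shell
# Stand-in listener (replaces `nc -l 9999` so the demo needs no netcat):
recvfile=$(mktemp)
python3 -c 'import socket,sys
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 9999)); s.listen(1)
c, _ = s.accept()
open(sys.argv[1], "wb").write(c.recv(65536))' "$recvfile" &
sleep 1   # give the listener time to bind

# The bash-only part: writing to /dev/tcp/HOST/PORT opens a TCP connection.
echo "hello over /dev/tcp" > /dev/tcp/127.0.0.1/9999

wait                           # let the listener finish writing
received=$(cat "$recvfile")
rm -f "$recvfile"
```

The same trick works for reading (`< /dev/tcp/…`), which is why the comment notes you can drop netcat from either end of the pipeline when bash is available.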
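The tar-over-nc pattern from the comment above can be exercised locally by replacing the network hop (`| nc … 9999` / `nc -l 9999 |`) with a plain pipe; pv is omitted since it only reports progress. The directory names are made up for the demo:

```shell
# Build a small source tree to transfer.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
echo "data" > "$src/file1"
echo "more" > "$src/sub/file2"

# Sender would be:   tar cv . | pv | nc dest_host 9999
# Receiver would be: nc -l 9999 | pv | tar xv
# Here both ends are joined by a local pipe instead of the network:
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)

# Verify the trees are identical, as rsync --inplace -avP would afterwards.
match=$(diff -r "$src" "$dst" > /dev/null && echo yes || echo no)
echo "$match"
rm -rf "$src" "$dst"
```

Streaming tar this way preserves the directory structure and avoids creating an intermediate archive file, which is the main appeal over scp-then-extract for many small files.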