OK, I have attempted to answer this question for two computers with "very large pipes" (10 GbE) that are "close" to each other.

The problem you run into here is that most compression will bottleneck at the CPU, since the pipes are so large.

Performance transferring a 10 GB file (6 Gb/s network connection [Linode], incompressible data):

    $  time bbcp 10G root@$dest_ip:/dev/null
    0m16.5s 
    
    iperf:

    server: $ iperf3 -s -F /dev/null
    client:
    $ time iperf3 -c $dest_ip -F 10G -t 20 # -t needs to be greater than time to transfer complete file
    0m13.44s
    (30% cpu)
    
    netcat (1.187 openbsd):

    server: $ nc -l 1234 > /dev/null
    client: $ time nc $dest_ip 1234 -q 0 < 10G 
    0m13.311s
    (58% cpu)
    
    scp:

    $ time /usr/local/bin/scp 10G root@$dest_ip:/dev/null
    1m31.616s
    scp with the HPN-SSH patch (applied on the client only, so possibly not a fair test):
    1m32.707s
    
    socat:

    server:
    $ socat -u TCP-LISTEN:9876,reuseaddr OPEN:/dev/null,creat,trunc
    client:
    $ time socat -u FILE:10G TCP:$dest_ip:9876
    0m15.989s
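
For reference, the incompressible `10G` test file above can be generated from a random source; I'm assuming something along these lines (the exact method doesn't matter much, as long as the data doesn't compress):

    # ~10 GB of pseudo-random (effectively incompressible) data
    $ dd if=/dev/urandom of=10G bs=1M count=10240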

And between two boxes on 10 GbE, with slightly older versions of netcat (CentOS 6.7), transferring a 10 GB file:

    nc: 0m18.706s (100% cpu, v1.84, no -q option; without -q, netcat can leave truncated files...)
    iperf3: 0m10.013s (100% cpu)
    socat: 0m10.293s (88% cpu, possibly maxed out)

So on one setup netcat used less CPU, on the other socat did, so YMMV.

What is happening in almost all of these cases is that the CPU is being maxed out, not the network.  `scp` maxes out at about 230 MB/s, pegging one core at 100% utilization.
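
You can watch this happen during a transfer; for example (`pidstat` comes from the sysstat package, and the `scp` pattern is just an example):

    # per-process CPU usage of the sending scp, sampled every second
    $ pidstat -p $(pgrep -n scp) 1
    # or run top and press 1 to see per-core utilization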

iperf3 unfortunately creates [corrupted][1] files.  Some versions of netcat, especially older ones, seem not to transfer the entire file, which is very weird.
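
Whichever tool you end up with, it's worth checksumming the file on both ends to catch truncation or corruption:

    # run on both source and destination and compare the output
    $ md5sum 10G        # or sha256sum if you prefer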

Various incantations of "gzip as a pipe to netcat" or "mbuffer" also seemed to max out the CPU in the gzip or mbuffer process, so they didn't result in faster transfers with pipes this large.  lz4 might help.  In addition, some of the gzip-pipe variants I attempted resulted in corrupted transfers for very large (> 4 GB) files, so be careful out there :)
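
For reference, the kind of pipeline I mean looks roughly like this, shown here with lz4 instead of gzip since it is much lighter on the CPU (port and filenames are placeholders; I haven't benchmarked this exact variant):

    server: $ nc -l 1234 | lz4 -d > 10G.out
    client: $ lz4 -c 10G | nc -q 0 $dest_ip 1234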

Another thing that might help, especially on higher-latency links, is tuning TCP settings.  Here are guides that mention suggested values:

http://pcbunn.cithep.caltech.edu/bbcp/using_bbcp.htm and https://fasterdata.es.net/host-tuning/linux/ (from another answer)

It's worth double-checking after tweaking to make sure the changes didn't hurt anything.
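
As a concrete illustration, that tuning mostly comes down to raising the socket buffer limits via sysctl; the values below are examples only, not a recommendation -- see the guides above for numbers appropriate to your bandwidth-delay product:

    # /etc/sysctl.d/90-net-tuning.conf  (example values only)
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.tcp_rmem = 4096 87380 33554432
    net.ipv4.tcp_wmem = 4096 65536 33554432

    # apply without rebooting:
    $ sudo sysctl -p /etc/sysctl.d/90-net-tuning.conf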

It may also be worth tuning the TCP window size: https://iperf.fr/iperf-doc.php#tuningtcp
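
With iperf3 that's the -w flag (the 4M here is just an example value):

    $ iperf3 -c $dest_ip -w 4M -t 20   # request a 4 MB socket buffer / TCP window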

With slow(er) connections, though, compression can definitely help.  If you have big pipes, very fast compression *might* help with readily compressible data; I haven't tried it.
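
A quick way to tell in advance whether compression can keep up is to time the compressor locally against the raw file; if its throughput is below your network speed, it will be the bottleneck (assumes `pv` is installed, with lz4 as the example fast compressor):

    $ pv 10G | lz4 > /dev/null     # pv shows how fast lz4 can consume the data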

The standard answer for "syncing hard drives" is to rsync the files; that avoids transferring data that is already on the destination.
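
Something along these lines (the paths are placeholders):

    # -a preserves metadata, -P shows progress and keeps partial transfers;
    # on a fast local link, -W (whole-file) skips the delta algorithm and can be faster
    $ rsync -aP /data/ root@$dest_ip:/data/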

  [1]: https://github.com/esnet/iperf/issues/798