OK, I have attempted to answer this question for two computers with "very large pipes" (10 GbE) that are "close" to each other.

The problem you run into here is that most compression will bottleneck at the CPU, since the pipes are so large.
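
One quick sanity check before putting a compressor in the pipeline is to benchmark it by itself against /dev/null (a sketch; assumes pigz and pv are installed, with pv only there to report throughput):

    # shows how fast pigz can chew through the source data; if this rate
    # is lower than what the raw network transfer achieves, adding
    # compression will slow things down rather than speed them up
    $ pv 10G | pigz > /dev/null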

Performance transferring a 10 GB file of incompressible data over a 6 Gbit/s network connection (Linode):

    $  time bbcp 10G root@dest_ip:/dev/null
    0m16.5s 
    
    $ time iperf3 -c dest_ip -F 10G -t 20 # -t needs to be greater than time to transfer one file
    0m13.502s
    
    netcat:
    server: $ nc -l 1234 > /dev/null
    client: $ time nc dest_ip 1234 -q 0 < 10G 
    0m13.311s
    
    scp:
    $ time /usr/local/bin/scp 10G root@dest_ip:/dev/null
    1m31.616s

    scp with the HPN-SSH patch (patch applied on the client only, so not a good test):
    1m32.707s
    
    socat:

    server:
    $ socat -u TCP-LISTEN:9876,reuseaddr OPEN:/dev/null,creat,trunc
    client:
    $ time socat -u FILE:10G TCP:dest_ip:9876
    0m15.989s

    netcat with "parallel gzip" pigz (but this is uncompressible so of course this won't save anything, poor test example)
    client: $  time cat 10G | pigz | nc 23.92.27.31 1234 -q 0
    server: $ nc -l 1234 | pigz > /dev/null
    1m24.886s

What is happening in almost all of these cases is that the CPU is being maxed out, not the network (except with iperf3, although iperf3 presently creates [corrupted][1] files).  So the "fastest" option, though still not quite fast enough to fill a 10 GbE line, is netcat (it fills the 6 Gbit/s line well).
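
If you want to see this for yourself, watching CPU and network at the same time during a transfer makes it obvious (a sketch; assumes dstat is installed, though a per-core view in top or htop works too):

    # run on either end while the transfer is going; a single core pinned
    # near 100% while the NIC is below line rate means you are CPU-bound
    $ dstat -cn 1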

Various incantations of "gzip as a pipe to netcat" or "mbuffer" also seemed to max out the CPU on the gzip or mbuffer side, so they did not end up any faster.  In addition, some of the gzip-pipe variants I attempted resulted in corrupted transfers for very large (> 4 GB) files, so be careful out there :)
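
For reference, the kind of "mbuffer in the pipe" variant I mean looks roughly like this (a sketch rather than the exact command I ran; the -m buffer size is arbitrary and flags can differ between mbuffer versions):

    server: $ nc -l 1234 | mbuffer -m 1G > /dev/null
    client: $ mbuffer -m 1G -i 10G | nc dest_ip 1234 -q 0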

Another thing that might help, especially on higher-latency links, is to tune the TCP settings. Here is a guide that mentions suggested values:

http://pcbunn.cithep.caltech.edu/bbcp/using_bbcp.htm
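
On Linux that mostly comes down to raising the socket buffer limits with sysctl, along these lines (illustrative values only, not the guide's recommendations; check the link above for properly tuned numbers):

    # allow larger TCP send/receive buffers (values are examples)
    $ sudo sysctl -w net.core.rmem_max=67108864
    $ sudo sysctl -w net.core.wmem_max=67108864
    $ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    $ sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"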

  [1]: https://github.com/esnet/iperf/issues/798