
There is a shell command that measures how fast data passes through it, so you can gauge the output throughput of commands in a pipe. So instead of:

$ somecommand | anothercommand

you can do something like:

$ somecommand | ??? | anothercommand

And throughput stats (bytes/sec) are printed to stderr, I think. But I can't for the life of me remember what that command was.

7 Answers


The go-to program nowadays for this kind of scenario would be pv (Pipe Viewer):

[Screenshot of pv from the pv homepage]

If you give it the --rate flag, it will show the transfer rate.
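As a sketch (assuming pv is installed; dd here just stands in for the real producer), the asker's pipeline would look like:

```shell
# Push 100 MiB of zeroes through the pipe; pv copies stdin to stdout
# unchanged and prints the current transfer rate on stderr.
dd if=/dev/zero bs=1M count=100 2>/dev/null | pv --rate > /dev/null
```

Without --rate, pv shows its full display: bytes transferred, elapsed time, rate, and a progress bar when it can determine the input size.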


You need a utility called cpipe.

Usage:

tar cCf / - usr | cpipe -vr -vw -vt > /dev/null

Output:

...
  in:  19.541ms at    6.4MB/s (   4.7MB/s avg)    2.0MB
 out:   0.004ms at   30.5GB/s (  27.1GB/s avg)    2.0MB
thru:  19.865ms at    6.3MB/s (   4.6MB/s avg)    2.0MB
... 
Comments:
  • No longer found any valid reference to cpipe... but pv is equivalent. (Jan 7, 2015)
  • Is there a similar command for Windows, or is there a pv that is Windows-compatible? (Oct 27, 2020)

As noted at https://askubuntu.com/a/620234, pv itself can slow your throughput significantly. The linked answer covers dd, but the point stands: pv adds overhead, which matters if, for example, you are transferring terabytes of data.
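One rough way to check that overhead on your own machine (a sketch, assuming pv is installed; the sizes are arbitrary) is to time the same transfer with and without pv in the middle:

```shell
# Baseline: pipe 1 GiB of zeroes through cat only
time dd if=/dev/zero bs=1M count=1024 2>/dev/null | cat > /dev/null
# Same transfer through pv (-q suppresses its display); compare elapsed times
time dd if=/dev/zero bs=1M count=1024 2>/dev/null | pv -q > /dev/null
```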


If you have Python 2 or 3 and pip (sudo apt-get install python-pip), you can install tqdm:

    python -m pip install tqdm

Then simply:

    somecommand | tqdm | anothercommand

If you need help, run tqdm --help. It has a lot of options. Feel free to read more and make suggestions at https://github.com/tqdm/tqdm
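For instance (a hypothetical pipeline; seq just provides data to count), tqdm passes the data through untouched and draws its progress meter on stderr:

```shell
# Count a million lines while tqdm reports the line rate on stderr
seq 1000000 | tqdm | wc -l
```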


Depending on the scenario, I can think of two different approaches:

  1. For a progress bar showing the progress/throughput of something, use pv.
  • e.g. pv my-huge-nix-iso.iso > /dev/sdh
  • e.g. echo | b3sum -l $((1024*1024*1024*10)) --raw | pv >/dev/null
  • Install via apt install pv
  • For a fixed-size transfer, add -Ss <bytes>, e.g. pv -Ss $((1024*1024*1024)) /dev/urandom >/dev/null
  • Make sure to add --no-splice if outputting to a block device such as a USB drive or hard disk; otherwise pv will load gigabytes of the input file into a kernel dirty writeback buffer.
  2. For benchmarking the stdout throughput of a program generating endless output, I recommend the following Bash one-liner. It divides the number of bytes generated within two seconds by the user time measured by time (which excludes system time spent in syscalls) and uses kill signal 9 to destroy the process instantly, so no user time is wasted on cleanup. Its precision seems to be about ±10%.
    dc -e "4 k $(export TIMEFORMAT=%U; { time timeout -s9 --foreground 2 <COMMAND> | wc -c; } 2>&1 | tail -n2) / p" | numfmt --to=iec --format="%.3fiB/s"
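A simpler wall-clock variant of the same idea (a sketch using only coreutils; yes stands in for the command under test, and 2 is the measurement window in seconds): count the bytes emitted before timeout kills the generator, then divide by the window.

```shell
# Let the generator run for 2 seconds, then kill it; wc sees EOF
# and prints the total byte count.
bytes=$(timeout 2 yes | wc -c)
# Divide by the window and render the rate human-readably
echo "$((bytes / 2))" | numfmt --to=iec --format="%.3fiB/s"
```

Unlike the one-liner above, this measures wall-clock throughput, so it includes time spent in syscalls and in the rest of the pipeline.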

pipemeter is an alternative to the pv tool mentioned above.

man page

It is available on Linux (at least Debian & Ubuntu) as the pipemeter package.
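Usage mirrors pv (a sketch, assuming the pipemeter package is installed; dd stands in for the real producer): it copies stdin to stdout and reports the transfer speed.

```shell
# pipemeter sits in the middle of the pipe and displays throughput
dd if=/dev/zero bs=1M count=100 2>/dev/null | pipemeter > /dev/null
```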


For inspecting (non-pipe) data transfers of (esp. coreutils) processes there's also this tool, with size counter and throughput estimation: https://github.com/Xfennec/progress

If your command is one it already knows (cp, dd, tar, cat, …), it's as easy to use as progress -M; otherwise use the -c option to specify your process.
