scp itself has no such feature. With GNU parallel you can use the sem command (short for semaphore) to limit the number of concurrent processes to an arbitrary value:

sem --id scp -j 50 scp ...

For all processes started with the same --id, this applies a limit of 50 concurrent instances. An attempt to start a 51st process waits (indefinitely) until one of the others exits. Add --fg to keep the process in the foreground (the default is to run it in the background, though this does not behave quite the same as a shell background process).
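
For example, to push a batch of files with at most 50 copies in flight at once (the paths and hostname below are only placeholders), something along these lines should work:

for f in /data/*.tar; do
    sem --id scp -j 50 scp "$f" backuphost:/backups/   # blocks until a slot is free, then starts the copy in the background
done
sem --id scp --wait                                    # block until every queued scp has finished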

Note that the state is stored in ${HOME}/.parallel/, so this won't work quite as hoped if multiple users are running scp; you may need a lower limit for each user. (It should also be possible to override the HOME environment variable when invoking sem, make sure the umask permits group write, and adjust the permissions so the users share state. I have not tested this heavily though, YMMV.)
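
An untested sketch of that multi-user approach (the directory name, group and paths are made up, adjust to taste):

sudo install -d -m 2775 -g scpusers /var/lib/scp-sem   # shared, setgid, group-writable state directory
( umask 002; HOME=/var/lib/scp-sem sem --id scp -j 50 scp /data/somefile backuphost:/backups/ )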

GNU parallel requires only Perl and a few standard modules.

You might also consider using scp -l N, where N is a bandwidth limit in Kbit/s, selecting a specific cipher (for speed, depending on your security requirements), or disabling compression (especially if the data is already compressed) to further reduce the CPU impact.
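
For instance (the rate, cipher and filenames are illustrative only, and the available ciphers depend on your OpenSSH version):

scp -l 8192 -c aes128-ctr -o Compression=no /data/somefile backuphost:/backups/   # ~1 MB/s cap, cheaper cipher, compression off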

(Other options such as ionice and cpulimit may cause scp sessions to hang for long periods, causing more problems than they solve.)

The old-school way of doing something similar is atd and batch, but that doesn't offer tuning of concurrency: it queues jobs and starts them only when the load is below a specific threshold.
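
If you do go that route, it looks roughly like this (the load threshold depends on how atd was started; paths and hostname are placeholders):

echo "scp /data/somefile backuphost:/backups/" | batch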
