Is it possible to limit the memory usage of all processes started by GNU parallel? I realize there are ways to limit the number of jobs, but in cases where it isn't easy to predict the memory usage ahead of time, this parameter can be difficult to tune.
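For context, the job-count approach I mean is something like the following, where `-j` caps how many jobs run at once but says nothing about how much memory they use (`./my_program` and the input glob are just placeholders for my actual workload):

    # Run at most 8 jobs simultaneously; total memory use is still unbounded
    # if individual jobs happen to be large.
    parallel -j 8 ./my_program {} ::: inputs/*.dat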
In my particular case I'm running programs on an HPC system where there are hard limits on process memory. E.g. if there is 72 GB of RAM available on a node, the batch system will kill jobs that exceed 70 GB. I'm also not able to spawn jobs into swap and hold them there.
The GNU parallel package ships with niceload, which seems to allow the current memory usage to be checked before a process runs, but I'm not sure how to use it here.
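Is it something along these lines? This is only a guess at how niceload is meant to be combined with parallel; I'm assuming its `--mem` option suspends the wrapped command while free memory is below the given threshold, and the `8G` value and `./my_program` are placeholders:

    # Wrap each job in niceload so it is (presumably) suspended whenever
    # free memory on the node drops below 8G.
    parallel -j 8 "niceload --mem 8G ./my_program {}" ::: inputs/*.dat

If that's roughly right, I'd still like to know whether it actually prevents the node from hitting the 70 GB kill limit, or whether there's a better-suited option in parallel itself.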