I was wondering if there is a "canonical" way to do this.
### Background & description

I have to install a program on a live server. Although I do trust the vendor (FOSS, GitHub, multiple authors...), I would rather guard against the not entirely impossible scenario of the script running into trouble, exhausting system resources and leaving the server unresponsive. I once installed amavis, which was started right after installation, and because of some messy configuration it produced a load average above 4 and left the system barely responsive.
My first thought was `nice` - `nice -n 19 thatscript.sh`. This may or may not help, so I was thinking it would be best to write and activate a script that would do the following:
- run as a daemon, waking every (for example) 500 ms to 2 s
- check for labeled processes with `ps` and `grep`
- if a labeled process (or any other process) takes too much CPU (threshold yet to be defined), kill it with `SIGKILL`
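The loop above could be sketched roughly like this. It is only a sketch of my idea, not tested in production; `LABEL`, `CPU_LIMIT`, `INTERVAL` and `check_once` are names I made up for illustration:

```shell
#!/bin/sh
# Watchdog sketch: find processes whose command line contains LABEL
# and SIGKILL any whose CPU usage (as reported by ps) exceeds CPU_LIMIT.
LABEL="thatscript.sh"   # hypothetical label to match in the command line
CPU_LIMIT=80            # percent of one CPU, integer threshold
INTERVAL=1              # seconds between checks when running as a loop

check_once() {
    # ps -eo pid,pcpu,args prints PID, %CPU and full command for every process;
    # grep -v grep drops our own grep from the match list
    ps -eo pid,pcpu,args | grep "$LABEL" | grep -v grep |
    while read -r pid cpu _; do
        # ${cpu%.*} strips the fractional part so we can compare as integers
        if [ "${cpu%.*}" -ge "$CPU_LIMIT" ]; then
            echo "killing $pid (CPU ${cpu}%)"
            kill -KILL "$pid"
        fi
    done
}

# the daemon loop would be: while true; do check_once; sleep "$INTERVAL"; done
# here we run a single pass so the sketch terminates
check_once
```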
My second thought was that it would not be the first time I'm reinventing the wheel.
So, is there any good way to "jail" the program and the processes it spawns into some predefined, limited amount of system resources, or to kill them automatically if they exceed some threshold?
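To show the kind of limiting I have in mind, here is a minimal sketch using the shell's `ulimit` builtin in a subshell, so the caps apply only to the wrapped command and its children. `run_limited` is a name I made up, and `sleep 0` stands in for the actual installer:

```shell
#!/bin/sh
# Run an untrusted command under hard resource caps. The subshell keeps the
# ulimit settings from leaking into the calling shell.
run_limited() {
    (
        ulimit -t 300       # CPU-time cap in seconds; the kernel kills the
                            # process group when it is exceeded
        ulimit -v 1048576   # address-space cap in KiB (here 1 GiB)
        nice -n 19 "$@"     # lower scheduling priority on top of the caps
    )
}

run_limited sleep 0   # stand-in for: run_limited sh thatscript.sh
```

This only limits per-process resources though; it does not group the installer's children into one accountable unit the way a real jail would.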