Prevent a script from exhausting system resources and crashing the entire system

I was wondering whether there is a "canonical" way to do this.

### Background & description

I have to install a program on a live server. Although I trust the vendor (FOSS, GitHub, multiple authors...), I would rather guard against the not entirely impossible scenario of the script running into trouble, exhausting system resources, and leaving the server unresponsive. I once installed amavis, which was started right after installation and, because of some messy configuration, produced a load average of >4 and left the system barely responsive.

My first thought was nice - `nice -n 19 thatscript.sh`. That may or may not help, so I was thinking it would be best to write and activate a script that would do the following (a rough sketch follows the list):

  • run as a daemon, polling every (for example) 500 ms-2 s

  • check for the labeled process with ps and grep

  • if the labeled process(es) (or any other process) take too much CPU (threshold yet to be defined), kill them with SIGKILL
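A minimal sketch of such a watchdog, assuming the processes are "labeled" by a distinctive string in their command line (here the hypothetical `thatscript.sh`), and using ps's lifetime-averaged `%cpu` column; the 80% threshold and 2 s interval are placeholders:

```bash
#!/bin/bash
# watchdog.sh -- kill any process matching LABEL whose CPU usage
# exceeds THRESHOLD percent. All values below are illustrative.
LABEL="thatscript.sh"   # string identifying the "labeled" processes
THRESHOLD=80            # %CPU above which a process is killed
INTERVAL=2              # polling interval in seconds

while true; do
    # List PID, %CPU and command line of every matching process,
    # excluding the grep itself and this watchdog.
    ps -eo pid=,pcpu=,args= | grep -- "$LABEL" | grep -v grep | grep -v "$0" |
    while read -r pid cpu _; do
        # ps prints %CPU with a decimal point; compare the integer part.
        if [ "${cpu%.*}" -ge "$THRESHOLD" ]; then
            kill -KILL "$pid"
        fi
    done
    sleep "$INTERVAL"
done
```

One caveat: ps reports %CPU averaged over the process's whole lifetime, not instantaneous usage, so a long-running process that suddenly starts spinning may take a while to cross the threshold.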

My second thought was that it would not be the first time I'm reinventing the wheel.

So, is there a good way to "jail" the program and the processes it spawns within some predefined, limited amount of system resources, or to kill them automatically once they exceed some threshold?
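For context, this is the kind of confinement I have in mind; a sketch assuming a systemd-based distribution, where `systemd-run` places the command in its own cgroup (the 50% CPU quota and 512M memory cap are arbitrary examples):

```bash
# Run the script in a transient scope unit with hard resource caps
# (typically requires root). CPUQuota throttles CPU time; MemoryMax
# lets the kernel OOM-kill the cgroup if it exceeds the cap
# (on older systemd / cgroup v1, MemoryLimit instead of MemoryMax).
systemd-run --scope \
    -p CPUQuota=50% \
    -p MemoryMax=512M \
    ./thatscript.sh
```

On non-systemd setups, I suppose cgcreate/cgexec from libcgroup, or a plain ulimit in the invoking shell, would be the rough equivalents.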