I currently have a CoreOS cluster on AWS. The cluster runs multiple containers, mainly for Rails applications. However, one of these containers is a pure Ruby app that processes a lot of data from specific external APIs.
Right now, the Docker container for that application is run every day at 04:00 UTC.
myapp.service:
[Unit]
Description=MyApp service
Requires=docker.service
[Service]
ExecStart=/home/core/sc/myapp_start.sh
User=core
myapp.timer:
[Unit]
Description=MyApp Timer
Requires=docker.service
[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true
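For reference, this is roughly how I load the units and activate the timer on a host (a sketch; in practice the deployment steps go through CircleCI, so the exact paths may differ):
sudo cp myapp.service myapp.timer /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable myapp.timer
sudo systemctl start myapp.timer
systemctl list-timers myapp.timer    # shows the next scheduled run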
The shell script executed by the service essentially boils down to:
/usr/bin/docker run --rm --name=myapp omg/myapp:$tag
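To trigger the job once outside the 4 AM schedule and watch what it does, I can start the service by hand (standard systemd commands, shown here as a sketch):
sudo systemctl start myapp.service    # run the job immediately
journalctl -u myapp.service -f        # follow the container's output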
Since the container is deployed to the cluster via CircleCI, it can end up on any of the servers in the cluster. However, if the server hosting the container runs out of memory because another container on the same machine uses too much RAM, or if there is no free disk space left, etc., then the container is stopped and started again on another server of the cluster.
That is problematic for this Ruby application, which should run only once a day and must not be run again after a server failure.
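To make the constraint concrete: what I effectively need is a "run at most once per day" guard like the sketch below, except that a local marker file obviously doesn't survive a reschedule to another machine, which is exactly my problem (the marker path is made up):
# Hypothetical guard inside myapp_start.sh; local state like this is lost
# when the job is started again on a different server.
marker=/home/core/sc/myapp_last_run
today=$(date -u +%F)
if [ "$(cat "$marker" 2>/dev/null)" = "$today" ]; then
  echo "myapp already ran today, skipping"
  exit 0
fi
/usr/bin/docker run --rm --name=myapp omg/myapp:$tag
echo "$today" > "$marker"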
In that situation, how could I proceed?
Thank you.