
As titled, I am using CentOS 6.10

Here is part of my shell script; it runs fine manually.

#!/bin/sh
backup_dir="/mnt/backup/website"
all_web="$(</mnt/backup/list.txt)"

for web in $all_web
do
    IFS=',' read -ra arraydata <<< "$web"
    tar zcvf $backup_dir/${arraydata[0]}_$(date +%Y%m%d).tar.gz ${arraydata[1]}
done
exit 0;

The settings in /etc/crontab

0 22 * * * root sh /mnt/backup/backup.sh

I have tried to print the script's logs, but no error was found; the log just shows that it was stopped:

/mnt/website/corey/public/files/111.pdf
/mnt/website/corey/public/files/222.pdf
/mnt/website/corey/public/files/333.pdf
/mnt/website/corey/pub
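
For debugging, something along these lines (the log path is only an example, not my actual setup) would capture the job's stdout/stderr and record any catchable termination signal from inside backup.sh; note that SIGKILL cannot be trapped:

0 22 * * * root sh /mnt/backup/backup.sh >>/var/log/backup_cron.log 2>&1

# for debugging only, near the top of backup.sh
trap 'echo "$(date) caught SIGTERM" >>/var/log/backup_cron.log' TERM
trap 'echo "$(date) caught SIGHUP" >>/var/log/backup_cron.log' HUP
trap 'echo "$(date) exiting with status $?" >>/var/log/backup_cron.log' EXIT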

I looked through the limits, but I don't know which one could cause the issue.

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 65536
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
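
In case the limits crond applies differ from the ones shown above, one way to check the limits the running job actually has is to read them from /proc while it runs (the pgrep pattern is just an example):

# while the backup job is running
pid=$(pgrep -f backup.sh | head -n1)
cat /proc/"$pid"/limits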

I inspected the system using these commands; all of them showed nothing.

dmesg | grep -i "killed process"
dmesg | grep -i "out of memory"
dmesg | grep -i "corn"
dmesg | grep -i "backup"
grep -i "sigkill" /var/log/messages
grep -i "limit" /var/log/messages

Didn't find any error message in /var/log/cron either.

2025/03/31 Updated
I tried to execute the tar command directly from crond; it still got stopped after five minutes. Below is the relevant part of my /etc/crontab file:

50 3 * * * root /bin/tar zcvf /mnt/backup/website/corey_20250331.tar.gz /mnt/webdisk/corey/

ls -lh

-rw-r--r-- 1 root root 6.1G Mar 31 03:55 corey_20250331.tar.gz

Workaround?
I moved the script from /etc/crontab into the /etc/cron.daily directory, and the script no longer gets killed. I have no idea what the difference is.

2025/04/07 Updated
As mentioned above, the workaround of moving the script to /etc/cron.daily worked well, but when I tried to specify the execution time in /etc/crontab like this:

30 21 * * * root run-parts /etc/cron.daily

The script GETS KILLED AFTER FIVE MINUTES AGAIN!
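
To isolate whether this is about cron itself rather than tar, a test entry along these lines (purely illustrative, not something I have added yet) should show whether any long-running command started from /etc/crontab gets killed around the five-minute mark:

# sleep for ten minutes and log the start and end times
35 21 * * * root sh -c 'date >> /tmp/cron_sleep_test.log; sleep 600; date >> /tmp/cron_sleep_test.log'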

Any idea would be really appreciated.

  • I would quote the args on the tar line. Maybe it takes five minutes to get to the first file that has whitespace in its name. Or maybe a comma, which could mess with the read. Commented Mar 28 at 10:05
  • Have you tried running it from the command line to see if it behaves the same way, or outputs something more useful? What about experimenting with a cron job using the same parameters that does a sleep 600 (followed by a log write) and seeing if it gets killed, too? Commented Mar 28 at 17:57
  • When you say stopped, do you mean stopped as in suspended, as if with the SIGSTOP signal? Or terminated/killed, as if with the SIGTERM signal? Commented Mar 31 at 6:23
  • With the v option of tar, the list of files being archived will be printed on stdout, and that will end up in an email sent to root. Is that what you want? Beware there's usually a limit on the size of emails being sent. Commented Mar 31 at 6:25
  • The archive that you produce seems to be quite big. Do you have enough space on /mnt/backup/website? Do you have local mail delivery set up? Does the cron daemon send any messages to the owner of the crontab? Commented Mar 31 at 6:27

1 Answer


While you did check for the OOM killer, may I suggest that you are using a lot of unnecessary resources. I'd write this loop as:

#!/bin/bash

backup_dir=/mnt/backup/website
all_web=/mnt/backup/list.txt

# get today's date once, at shell invocation time, using the printf builtin
printf -v today '%(%Y%m%d)T' -2
ret=0
# read one comma-separated line at a time from fd 3
while IFS=, read -ru3 -a arraydata
do
    tar -zcvf "$backup_dir/${arraydata[0]}_$today.tar.gz" -- "${arraydata[1]}" ||
      ret=$?
done 3<"$all_web" || ret=$?
exit "$ret"

This reads the list file line by line and processes one entry at a time. Previously you were loading the entire file, which I assume could be large, into memory at once; that can use a lot of resources and isn't necessary (unless $all_web can change during the backup). I also quoted the expansions to handle any space, tab, or wildcard characters they contain, added the missing -- to handle file names starting with -, retrieved today's date only once and with the printf builtin (using -2 to get the date at the time of shell invocation), and reported any failure of the tar commands in the script's exit status.
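
For illustration, assuming list.txt holds one comma-separated name,path pair per line (the entries below are made up), the loop produces one archive per line:

$ cat /mnt/backup/list.txt
corey,/mnt/website/corey
alice,/mnt/website/alice

$ bash /mnt/backup/backup.sh
# for a run on 2025-03-31, this creates
#   /mnt/backup/website/corey_20250331.tar.gz
#   /mnt/backup/website/alice_20250331.tar.gz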
