As the title says, I am using CentOS 6.10.
Here is part of my shell script; it runs fine when I execute it manually.
#!/bin/sh
backup_dir="/mnt/backup/website"
# each line of list.txt holds one site as "name,path"
all_web="$(</mnt/backup/list.txt)"
for web in $all_web
do
    IFS=',' read -ra arraydata <<< "$web"
    # archive the site directory into name_YYYYMMDD.tar.gz
    tar zcvf "$backup_dir/${arraydata[0]}_$(date +%Y%m%d).tar.gz" "${arraydata[1]}"
done
exit 0
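For reference, each line of list.txt is expected to hold one site in the form name,path (the entries below are only placeholders, not my real list):

corey,/mnt/website/corey
blog,/mnt/website/blog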
The entry in /etc/crontab:
0 22 * * * root sh /mnt/backup/backup.sh
I tried capturing the script's output in a log file, but there is no error message; the tar output simply stops in the middle of a file:
/mnt/website/corey/public/files/111.pdf
/mnt/website/corey/public/files/222.pdf
/mnt/website/corey/public/files/333.pdf
/mnt/website/corey/pub
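For the next run I also want to record the exit status right in the cron entry: if tar is killed by a signal, the shell should report 128 plus the signal number (e.g. 143 for SIGTERM, 137 for SIGKILL). This is only a sketch, and /var/log/backup_debug.log is an arbitrary path I picked:

0 22 * * * root sh /mnt/backup/backup.sh >> /var/log/backup_debug.log 2>&1; echo "backup exit=$?" >> /var/log/backup_debug.log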
I also looked through the limits (ulimit -a from my login shell), but I don't know which one, if any, would cause this; a check of crond's own limits is sketched after the list.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 65536
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
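The list above comes from my login shell, so the limits that apply to crond itself could be different. Assuming a single crond process, this should show its actual limits (pgrep -o picks the oldest matching PID):

cat /proc/$(pgrep -o crond)/limits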
I inspected the system using these commands; all of them showed nothing.
dmesg | grep -i "killed process"
dmesg | grep -i "out of memory"
dmesg | grep -i "corn"
dmesg | grep -i "backup"
grep -i "sigkill" /var/log/messages
grep -i "limit" /var/log/messages
Update 2025/03/31
I tried running the tar command directly from crond; it still got stopped after five minutes. Below is the relevant part of my /etc/crontab:
50 3 * * * root /bin/tar zcvf /mnt/backup/website/corey_20250331.tar.gz /mnt/webdisk/corey/
ls -lh
-rw-r--r-- 1 root root 6.1G Mar 31 03:55 corey_20250331.tar.gz
I didn't find any error message in /var/log/cron either.
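One more test I have in mind is running the same tar outside of cron but detached from my terminal, with a minimal environment close to what crond provides, to see whether it still stops after five minutes. The PATH and HOME values mimic cron's defaults, and the file names are just for this test:

env -i PATH=/usr/bin:/bin HOME=/ setsid /bin/sh -c '
    /bin/tar zcvf /mnt/backup/website/corey_test.tar.gz /mnt/webdisk/corey/ > /tmp/tar_test.log 2>&1
    echo "tar exit=$?" >> /tmp/tar_test.log
' &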
Any idea would be really appreciated.