I am doing incremental backups according to the following example found here: https://linuxconfig.org/how-to-create-incremental-backups-using-rsync-on-linux
#!/bin/bash
# A script to perform incremental backups using rsync
set -o errexit
set -o nounset
set -o pipefail
readonly SOURCE_DIR="${HOME}"
readonly BACKUP_DIR="/mnt/data/backups"
readonly DATETIME="$(date '+%Y-%m-%d_%H:%M:%S')"
readonly BACKUP_PATH="${BACKUP_DIR}/${DATETIME}"
readonly LATEST_LINK="${BACKUP_DIR}/latest"
mkdir -p "${BACKUP_DIR}"
# Snapshot SOURCE_DIR, hard-linking unchanged files to the previous snapshot
rsync -av --delete \
  "${SOURCE_DIR}/" \
  --link-dest "${LATEST_LINK}" \
  --exclude=".cache" \
  "${BACKUP_PATH}"
# Point the "latest" symlink at the snapshot that was just taken
rm -rf "${LATEST_LINK}"
ln -s "${BACKUP_PATH}" "${LATEST_LINK}"
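For context, --link-dest makes rsync hard-link files that have not changed since the previous snapshot instead of copying them again, so each new snapshot only consumes space for new or modified data. A quick way to see this in the layout above (a sketch, assuming GNU find; it just lists files shared between snapshots):
# Files with a link count above 1 are shared between snapshots rather than duplicated
find /mnt/data/backups -type f -links +1 | head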
After a few incremental backups, this gives me a list of folders like this:
dir
dir_2024_06_21T18_17_40
dir_2024_06_21T18_18_14
dir_2024_06_21T18_18_32
dir_2024_06_21T18_18_50
dir_latest
After a while, with enough changes, the disk will eventually become full.
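As a side note on measuring that: because unchanged files are hard-linked, GNU du counts each file only once per invocation, so something like this gives a rough view of how much new data each snapshot actually added (a sketch, assuming the layout above):
# The first directory shows its full size; later ones only show what they add on top.
# The "latest" symlink comes last and adds almost nothing, since its contents were already counted.
du -shc /mnt/data/backups/*/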
I have the following questions:
If a file thefile was created between dir_2024_06_21T18_18_14 and dir_2024_06_21T18_18_32, and I then delete dir_2024_06_21T18_18_32, is thefile still going to be found in dir_2024_06_21T18_18_50, or not (because there is some kind of recursive reference through time)? In other words, can I safely delete dir_2024_06_21T18_18_32 and still find thefile in dir_2024_06_21T18_18_50?
More generally, is there a better strategy for erasing incremental backups when the backup disk becomes full?
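For what it's worth, the scenario can be tested end to end on throwaway data before touching the real backups; a minimal sketch (all paths are scratch placeholders created under mktemp):
#!/bin/bash
# Scratch test: snapshot A contains thefile, snapshot B is taken with --link-dest=A,
# then A is deleted and thefile is read back from B.
set -o errexit -o nounset -o pipefail
work="$(mktemp -d)"
mkdir -p "${work}/src" "${work}/backups"
echo "hello" > "${work}/src/thefile"
rsync -a "${work}/src/" "${work}/backups/A"
rsync -a --link-dest="${work}/backups/A" "${work}/src/" "${work}/backups/B"
rm -rf "${work}/backups/A"
cat "${work}/backups/B/thefile"   # still prints "hello": B has its own hard link to the data
rm -rf "${work}"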
rsnapshot. If you've not come across it, take a good look at it, as I think it will do almost exactly what you are describing here.
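For reference, a minimal rsnapshot configuration for a setup like the one above might look roughly like this; the retention names and counts are only illustrative, fields in rsnapshot.conf must be separated by tabs rather than spaces, and older versions use interval where newer ones use retain:
# /etc/rsnapshot.conf (sketch; separate fields with tabs)
snapshot_root   /mnt/data/backups/
retain          daily   7
retain          weekly  4
backup          /home/  localhost/
exclude         .cache
Runs are then scheduled from cron, e.g. rsnapshot daily once a day and rsnapshot weekly once a week; rsnapshot rotates the snapshots itself and deletes the oldest one at each level, which takes care of the disk filling up.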
thefile should be a hard link and share the same inode number; if you list the file with ls -i or ls -li, the number should be the same, and you can safely delete the file: the data itself stays on disk until the last hard link to it is removed.
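More generally, because every snapshot directory holds its own hard links to the data, deleting an old snapshot never breaks a newer one. A minimal prune sketch along those lines, assuming GNU find and head, and that the snapshot names sort chronologically (as the timestamped names above do); KEEP is just a placeholder:
#!/bin/bash
# Keep only the newest ${KEEP} snapshots; the "latest" symlink is left alone (-type d skips it).
set -o errexit -o nounset -o pipefail
readonly BACKUP_DIR="/mnt/data/backups"
readonly KEEP=5
cd "${BACKUP_DIR}"
find . -maxdepth 1 -mindepth 1 -type d -printf '%f\n' | sort | head -n -"${KEEP}" |
while read -r old; do
    echo "Removing old snapshot: ${old}"
    rm -rf -- "${old}"
done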