Keeping a Linux server’s disk space healthy is a critical, yet often overlooked, sysadmin task. Over time, logs pile up, Docker storage balloons, and application caches accumulate — eating into your precious disk space and causing performance or stability issues.
I recently faced this exact problem on a production server, so I wrote a comprehensive Bash cleanup script that automates the tedious investigation and cleanup tasks — saving hours and keeping my servers lean and performant.
The Problem: Disk Space Slowly Runs Out
Here’s what commonly fills up disk space on Linux servers:
- Large and old log files in /var/log (including system logs and application logs like PM2's)
- Systemd journal logs that grow indefinitely unless managed
- Docker storage layers and volumes that accumulate unused data
- Application cache folders (e.g., Node.js or Next.js caches)
- PM2 process logs that grow unbounded without rotation
Manually hunting down these culprits is time-consuming and error-prone.
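Before automating anything, it helps to confirm where the space is actually going. A quick investigation sketch (the default target /var is just an example; pass any path as the first argument):

```shell
#!/bin/sh
# Rank a directory's immediate children by size (KiB) and list its biggest files.
TARGET="${1:-/var}"

# Largest immediate subdirectories, biggest first
du -xk --max-depth=1 "$TARGET" 2>/dev/null | sort -rn | head -n 10

# Individual files over 100MB on the same filesystem
find "$TARGET" -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | head -n 10
```

Running this a couple of levels deep usually points straight at the culprit before you reach for a cleanup script.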
The Solution: A One-Stop Cleanup and Management Script
I created a single Bash script that:
- Shows a disk usage summary and the largest folders in /var
- Safely truncates all .log files without deleting them, to avoid breaking processes
- Deletes old rotated logs (*.gz and *.1)
- Shrinks systemd journal logs to a configurable maximum size (default: 500MB)
- Prunes unused Docker volumes to reclaim space
- Safely truncates PM2 logs to avoid stopping running apps
- Installs and configures PM2's pm2-logrotate module to automate log rotation going forward
- Optionally clears common caches inside /var/www folders (can be enabled easily)
- Shows a final report with the largest remaining files
Why This Script Is Safe and Effective
- Log truncation vs. deletion: Truncating log files keeps file handles open so running services like systemd or PM2 don’t crash or restart unexpectedly.
- Systemd journal vacuuming: Manages binary logs used by the system without data loss, reducing size intelligently.
- Docker volume pruning: Frees up space used by dangling volumes without touching active containers.
- PM2 logrotate integration: Prevents log files from growing uncontrollably in the future by automatically rotating and compressing logs.
- Final checks: The script finishes with summaries so you know exactly what was cleaned and what still takes space.
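The first point is worth seeing in action. A minimal demo (assumes a POSIX shell plus coreutils truncate) of why the script truncates rather than deletes: a writer that holds a log open keeps the same file descriptor after truncation, so new lines keep landing in the same file. Delete the file instead and the writer keeps filling an unlinked inode whose space is only reclaimed when the process restarts.

```shell
#!/bin/sh
log=$(mktemp)

exec 3>>"$log"          # simulate a long-running service holding the log open
echo "before" >&3

truncate -s 0 "$log"    # what the cleanup script does

echo "after" >&3        # the writer never noticed; O_APPEND writes at the new end
cat "$log"              # prints only: after

exec 3>&-
rm -f "$log"
```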
How to Use the Script
- Save the script as cleanup_disk.sh on your server.
- Make it executable: chmod +x cleanup_disk.sh
- Run it with root privileges: sudo ./cleanup_disk.sh
It will print step-by-step progress and disk usage reports.
The Script (Full Version)
#!/bin/bash
set -euo pipefail

# Maximum size to keep for the systemd journal (configurable)
JOURNAL_MAX="500M"

echo "=== Disk Usage Summary ==="
df -h /

echo -e "\n=== Top /var Subdirectories by Size ==="
du -h --max-depth=1 /var 2>/dev/null | sort -hr | head -n 20 || true

echo -e "\n=== Truncating *.log files in /var/log ==="
find /var/log -type f -name "*.log" -exec truncate -s 0 {} \;

echo -e "\n=== Deleting rotated log files (*.gz, *.1) in /var/log ==="
find /var/log -type f \( -name "*.gz" -o -name "*.1" \) -delete

echo -e "\n=== Cleaning systemd journal logs to max $JOURNAL_MAX ==="
journalctl --disk-usage
journalctl --vacuum-size="$JOURNAL_MAX"
journalctl --disk-usage

echo -e "\n=== Pruning unused Docker volumes ==="
if command -v docker &> /dev/null; then
    docker volume prune -f || echo "Docker volume prune failed or no volumes to prune"
else
    echo "Docker not installed or not in PATH"
fi

echo -e "\n=== Truncating PM2 logs safely ==="
PM2_LOG_DIR="${HOME:-/root}/.pm2/logs"
if [ -d "$PM2_LOG_DIR" ]; then
    find "$PM2_LOG_DIR" -type f -name "*.log" -exec truncate -s 0 {} \;
else
    echo "PM2 log directory not found at $PM2_LOG_DIR"
fi

echo -e "\n=== Installing and configuring PM2 logrotate ==="
if command -v pm2 &> /dev/null; then
    pm2 install pm2-logrotate || echo "pm2-logrotate module already installed or failed to install"
    pm2 set pm2-logrotate:max_size 10M
    pm2 set pm2-logrotate:retain 5
    pm2 set pm2-logrotate:compress true
    pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
else
    echo "PM2 not installed or not in PATH"
fi

# Uncomment the lines below to clear caches inside /var/www
# echo -e "\n=== Clearing common caches in /var/www ==="
# find /var/www -type d -name "node_modules" -exec rm -rf {}/.cache \; 2>/dev/null || true
# find /var/www -type d -name ".next" -exec rm -rf {}/cache \; 2>/dev/null || true

echo -e "\n=== Final Disk Usage ==="
df -h /

echo -e "\n=== Large files >100MB ==="
# -xdev keeps find on the root filesystem (avoids /proc pseudo-files like kcore)
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -n 20 || true

echo -e "\n=== Top /var/log after cleanup ==="
du -h /var/log 2>/dev/null | sort -hr | head -n 20 || true

echo -e "\nCleanup complete."
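One note on the journal step: --vacuum-size only trims the journal once per run. If you'd rather have journald enforce the same cap permanently, a hedged sketch (the 500M value mirrors the script's default; the drop-in filename is just a convention, and any file under journald.conf.d/ works):

```shell
# Persistent alternative to per-run vacuuming: cap the journal in a
# journald.conf drop-in, then restart journald so the limit takes effect.
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nSystemMaxUse=500M\n' | sudo tee /etc/systemd/journald.conf.d/size.conf
sudo systemctl restart systemd-journald
```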
Final Thoughts
Disk space issues happen to every Linux server admin eventually — but with a little automation, you can prevent surprises and keep your systems running smoothly. This script has saved me hours of manual cleanup and helped avoid unexpected downtime.
If you find it useful, feel free to copy, customize, and share! And let me know if you want enhancements like scheduled cron jobs or integrations with monitoring tools.
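On the scheduling point, a hedged crontab sketch (the install path and the Sunday-3am schedule are assumptions; adjust both to your setup):

```shell
# Run the cleanup every Sunday at 03:00 as root, appending output to a log.
# /usr/local/bin/cleanup_disk.sh is an assumed location.
( sudo crontab -l 2>/dev/null; \
  echo '0 3 * * 0 /usr/local/bin/cleanup_disk.sh >> /var/log/cleanup_disk.out 2>&1' ) \
  | sudo crontab -
```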
Happy cleaning! 🚀
Top comments (1)
This is "nice" but also dangerous.
If your logs are getting so big they cause your system to run out of space, then you should be looking at WHY they are getting so big. [eg. my 15 year old server has some really bad system logs... because of reasons :D]
Turn off logging for warnings and notices that don't affect anything, clean up your debug logs and turn them off when not needed, and keep logs for at least 6 months: at some point you'll have to dig up log data from a while ago to help debug a problem, and if your script runs frequently you won't have any logs to look at.