
Server Out of Disk Space? Here's Exactly What to Do

Your server is full and nothing works. Here's how to find what's eating your disk, free up space fast, and prevent it from happening again.

By Dino Bartolome

You log in to your server and something's wrong. Services are failing, databases can't write, logs are erroring — and then you see it: disk full. Here's how to find what's eating your space and fix it fast, without breaking your site.

Step 1: Confirm the problem

Run:

`` df -h ``

Look at the "Use%" column. If any filesystem shows 100% (or 95%+), that's your culprit. Note *which* filesystem (/, /var, /home).
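
If one mount in particular looks suspicious, you can also point df at a path to confirm which filesystem it lives on:

`` df -h /var ``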

Step 2: Find the biggest offenders

The most common causes of sudden disk fill-ups:

1. Runaway log files

Log files grow forever unless rotation is configured. The usual suspects:

`` sudo du -sh /var/log/* | sort -h ``

Watch out for:

  • /var/log/journal/ — systemd journal can grow huge
  • /var/log/nginx/ or /var/log/apache2/ — web server access logs
  • /var/log/syslog — if something is crash-looping
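
The systemd journal has its own disk-usage report, which is a quick way to rule it in or out:

`` sudo journalctl --disk-usage ``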

Quick fix: Truncate (don't delete — files may be held open):

`` sudo truncate -s 0 /var/log/syslog ``

Or vacuum the systemd journal:

`` sudo journalctl --vacuum-size=500M ``
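
If you'd rather trim by age than by size, the journal can also be vacuumed by time:

`` sudo journalctl --vacuum-time=7d ``

And if you've already deleted a log and the space never came back, the file is almost certainly still held open by a running process; lsof can list deleted-but-still-open files:

`` sudo lsof +L1 ``

Restart or reload the offending service to release the handle and free the space.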

2. Database binary logs

MySQL and MariaDB can fill disks with binlog files if binary logging is enabled but never purged.

`` sudo ls -lh /var/lib/mysql/ | grep -i bin ``

In MySQL, set a reasonable expire_logs_days (binlog_expire_logs_seconds on MySQL 8+) so old binlogs are purged automatically, and clear the existing backlog with PURGE BINARY LOGS.
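
A rough sketch from the shell, assuming root can connect over the local socket (the default on most Debian/Ubuntu installs):

`` sudo mysql -e "SHOW BINARY LOGS;" ``

`` sudo mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;" ``

Don't just delete the binlog files from disk; MySQL tracks them in its index file and gets confused if they vanish underneath it.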

3. Old backups

Full-site backups in /var/backups/, /home/, or /opt/backups/ that nobody cleaned up.

`` sudo du -sh /var/backups/* 2>/dev/null ``
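
A find one-liner can list candidates older than, say, 30 days before you commit to deleting anything (the path and age here are just examples; point it at wherever your backups actually live):

`` sudo find /var/backups -type f -mtime +30 -exec ls -lh {} + ``

Swap the -exec ls -lh {} + for -delete once you're sure the list is right.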

4. Docker images and volumes

If this is a Docker host:

`` docker system df ``

`` docker system prune -a --volumes ``

That last command will delete unused images, containers, and volumes — be careful if you have data you care about.
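
If that feels too aggressive, a gentler middle ground is to prune without -a and without --volumes, which only removes stopped containers, dangling images, unused networks, and dangling build cache:

`` docker system prune ``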

5. Temp files

`` sudo du -sh /tmp /var/tmp ``

Most things in /tmp that are more than a few days old are safe to clean.
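
To see what you'd actually be removing before you remove it (the three-day cutoff is just an example):

`` sudo find /tmp -type f -mtime +3 -exec ls -lh {} + ``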

Step 3: Find any large file anywhere

If nothing obvious shows up:

`` sudo du -ahx / | sort -rh | head -20 ``

This walks the entire root filesystem and shows the 20 largest files or directories. You'll almost always spot the problem immediately.
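
If you'd rather see individual huge files than directory totals, find can filter by size on the same filesystem (-xdev stops it wandering into other mounts; the 500M cutoff is just an example):

`` sudo find / -xdev -type f -size +500M -exec ls -lh {} + 2>/dev/null ``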

Step 4: Don't just delete — fix the cause

Freeing space is the emergency fix. The long-term fix:

  • Configure log rotation (logrotate) so logs don't grow unbounded (examples follow this list)
  • Set up alerts at 80% disk usage so you get warned early
  • Automate backups to rotate — keep 7 or 14 daily, not infinite
  • Enable journal size limits in /etc/systemd/journald.conf
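
For the journal, cap it by setting SystemMaxUse=500M (pick a size that suits you) under the [Journal] section of /etc/systemd/journald.conf, then restart the journal service:

`` sudo systemctl restart systemd-journald ``

For logrotate, most packages already ship a config in /etc/logrotate.d/; you can dry-run the whole setup to confirm it rotates what you expect:

`` sudo logrotate --debug /etc/logrotate.conf ``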

Step 5: If you truly need more space

If your server is legitimately using its disk for real work:

  • Add a volume (cloud providers make this easy; see the sketch after this list)
  • Move databases or media to dedicated storage
  • Upgrade your plan to one with more disk
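
The exact steps for adding a volume vary by provider, but the on-server side is roughly the same. A sketch, assuming the new disk shows up as /dev/sdb and you want it formatted ext4 and mounted at /mnt/data (run lsblk first and substitute your real device name):

`` lsblk ``

`` sudo mkfs.ext4 /dev/sdb ``

`` sudo mkdir -p /mnt/data ``

`` sudo mount /dev/sdb /mnt/data ``

Add a matching /etc/fstab entry (ideally by UUID, which blkid will show you) so the mount survives a reboot.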

Common mistakes

  • Deleting `/var/log/` files while services have them open — truncate instead
  • Deleting package cache files by hand and breaking package installs (`apt clean` is safer)
  • Forgetting that a deleted-but-open file keeps its space until the service holding it is restarted or reloaded
  • Deleting `node_modules` in production — yes, people do this

Need help?

If your server is full and things are on fire, I can jump in, free space safely, fix the root cause, and set up rotation/alerts so it doesn't happen again. Send me a message.
