
High CPU or Memory on Your Server? How to Find the Cause

Server grinding to a halt with high CPU or memory usage? Here's how to find what's eating your resources and fix it in minutes, not hours.

By Dino Bartolome

Your server is at 100% CPU. Or 100% memory. Or both. Things are grinding to a halt. Here's how to find the culprit quickly without guessing.

Step 1: See what's running

```bash
top
```

Or the nicer version:

```bash
htop
```

The top processes in the CPU or MEM column are your suspects.
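If you'd rather have a one-shot snapshot than an interactive view, ps can do the sorting for you:

```bash
# Top 10 processes by CPU right now (swap in --sort=-%mem for memory)
ps aux --sort=-%cpu | head -10
```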

Common high-CPU causes

1. A runaway PHP / Node / Python worker

One request got stuck in an infinite loop and is pegging a whole core. Kill it:

```bash
kill -9 <PID>
```
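If you want to be gentler, send a plain SIGTERM first so the process gets a chance to clean up, and only escalate if it hangs around:

```bash
# Ask nicely first (SIGTERM), then force-kill only if it's still alive
kill <PID>
sleep 5
kill -0 <PID> 2>/dev/null && kill -9 <PID>
```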

Then figure out *why* it happened — usually a bug in the app.

2. Bot traffic or scraping

Check your access logs:

```bash
sudo tail -f /var/log/nginx/access.log
```

If you see a flood of requests from the same IP or user-agent, you're being scraped or crawled (the one-liner after this list gives a quick per-IP count). Quick fixes:

  • Block the IP: `sudo ufw deny from <ip>`
  • Put Cloudflare in front of your site (free tier)
  • Add rate limiting in Nginx
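To find which IPs to block, summarize the log. This assumes the default combined log format, where the client IP is the first field:

```bash
# Top 10 client IPs by request count in the current access log
sudo awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10
```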

3. Cron jobs running too often

A cron job scheduled every minute that takes two minutes to run means overlapping jobs pile up. Check:

```bash
ps aux | grep cron
crontab -l
```
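If the job can't be made faster, a common guard is flock: a new run exits immediately instead of stacking up behind the old one. A sketch, with placeholder paths:

```bash
# In the crontab: -n makes flock skip this run if the lock is already held
* * * * * /usr/bin/flock -n /tmp/backup.lock /usr/local/bin/backup.sh
```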

4. Heavy database query

```sql
SHOW FULL PROCESSLIST;
```

Look for queries that have been running for a long time. Kill the query, then optimize it (missing index is usually the culprit).
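Assuming MySQL or MariaDB, you can kill just the statement (keeping the connection alive) using the Id column from the process list, then inspect the plan. The Id, table, and column below are hypothetical:

```bash
# Kill only the query, not the connection (Id from SHOW FULL PROCESSLIST)
mysql -e 'KILL QUERY 12345;'

# A full table scan in the EXPLAIN output usually means a missing index
mysql -e 'EXPLAIN SELECT * FROM orders WHERE customer_id = 42;'
```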

Common high-memory causes

1. Memory leak in your app

Your app's memory grows over time until the OOM killer takes it down. Common in Node/Python apps. Diagnosis:

```bash
ps aux --sort=-%mem | head -10
```

If your app is growing: restart it as an immediate fix, then profile to find the leak.
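To confirm it's really a leak and not a one-off spike, log the process's memory over time. A minimal sketch (replace `<PID>`; the log path is arbitrary):

```bash
# Record the app's resident memory (RSS, in KB) once a minute
while true; do
  echo "$(date +%T) $(ps -o rss= -p <PID>)" >> /tmp/rss.log
  sleep 60
done
```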

2. Database caches too large

Databases (especially MySQL and Postgres) will use available RAM for caching. That's fine *until* it crowds out everything else. Check:

```bash
free -h
```

If you see a tiny "free" number but huge "buff/cache", that's database and filesystem caching at work; the "available" column shows what apps can actually still claim. Usually fine; it only becomes a problem if apps are getting OOM-killed.
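If MySQL is the one holding the memory, you can check how much it's allowed to claim for caching. Assuming MySQL/MariaDB with InnoDB:

```bash
# InnoDB's configured cache ceiling, in bytes (lower it in my.cnf if needed)
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
```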

3. Too many app workers

If you have 10 PHP-FPM workers each using 100MB, you're using 1GB just for workers. Reduce `pm.max_children` or its equivalent in your stack.
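To pick a sane `pm.max_children`, measure what a worker actually costs. A sketch, assuming the process is named php-fpm (yours may be versioned, e.g. php-fpm8.2):

```bash
# Worker count and average resident memory per PHP-FPM worker
ps -C php-fpm -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d workers, avg %.0f MB each\n", n, sum/n/1024}'
```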

4. Docker containers with no memory limits

```bash
docker stats
```

If a container is using far more memory than you'd expect, set limits in your compose file:

```yaml
deploy:
  resources:
    limits:
      memory: 512M
```
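If you start containers with docker run instead of compose, the equivalent flag is --memory (the image and container name here are placeholders):

```bash
# Cap the container at 512MB without a compose file
docker run -d --name myapp --memory=512m myimage:latest
```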

Is it actually too much?

A server running at 80% CPU or 80% memory isn't necessarily a problem; it means your resources are being used. It becomes a problem when:

  • Response times degrade
  • Requests start queueing
  • OOM killer starts killing processes
  • Disk I/O gets saturated (see the iostat example below)
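iostat ships with the sysstat package; the -x flag adds the utilization columns:

```bash
# Extended device stats every 2 seconds; sustained %util near 100 means the disk is the bottleneck
iostat -x 2
```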

Quick wins

Add swap if you don't have it

A little swap prevents OOM killer from nuking your app when memory briefly spikes:

```bash
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```
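Note that this swapfile disappears on reboot; one extra line in /etc/fstab makes it permanent:

```bash
# Persist the swapfile across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```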

Don't lean on swap for *sustained* load, but for brief spikes it's a safety net.

Enable caching

Most sites don't need to re-render every page on every request. If you're running WordPress, install a caching plugin. If you have a custom app, use Redis or Memcached (see the sketch below).
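The pattern is cache-aside: check the cache first, fall back to the expensive work, store the result with a TTL. A sketch using redis-cli (Redis on localhost assumed; expensive_render is a placeholder for your real render step):

```bash
# Cache-aside: serve from Redis if present, otherwise render and cache for 60s
key="page:home"
page=$(redis-cli GET "$key")
if [ -z "$page" ]; then
  page=$(expensive_render)                        # placeholder for the real work
  redis-cli SET "$key" "$page" EX 60 > /dev/null  # cache with a 60s TTL
fi
echo "$page"
```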

Upgrade your plan

If the server is genuinely too small for your traffic, throwing hardware at it is cheaper than engineering time.

Need help?

If your server is on fire right now, I can jump in, identify the resource hog, and get things back to normal, usually within an hour. Send me a message with what you're seeing.
