Domain 08: Logging & Troubleshooting — My Server Is Under Attack and I Can Prove It

Mar 22, 2026

I always knew running a public server meant dealing with attackers. But knowing it and actually seeing it in your logs are two completely different things.

This week I finished Domain 8 — Logging and Troubleshooting. And for the first time, I didn’t just learn commands. I watched my own server get attacked in real time through the logs. Let me show you what I mean.


Two Logging Systems, Not One

The first thing that confused me was why Linux has two logging systems running at the same time.

Here’s the simple version: journald collects everything — service output, kernel messages, boot events — and stores it in a binary format. You can’t just cat it. You query it with journalctl. The other system is rsyslog, which reads from journald and writes human-readable text files into /var/log. Both are running on your server right now, working together.

journald is the collector. rsyslog is the writer.


journalctl — Your First Tool When Something Breaks

Before I touch anything else on a broken server, I run this:

journalctl -p err -b

This shows every error-level message since the last boot. If something is broken, the answer is almost always in here.

A few more I use constantly:

# See logs for a specific service
journalctl -u nginx -b

# Follow logs live
journalctl -u nginx -f

# Kernel messages only
journalctl -k -b

# Logs from the previous boot (very useful after a crash)
journalctl -b -1

That last one saved me from a lot of confusion. When your server crashes and reboots, the logs from before the crash are still accessible — but only if journald is configured to persist logs across reboots. Check yours:

journalctl --list-boots

If you see multiple boots listed, you’re good. If you only see one, your logs are volatile: gone after every reboot. Fix that by setting Storage=persistent in /etc/systemd/journald.conf (or keeping the default Storage=auto and making sure the /var/log/journal/ directory exists).
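A minimal sketch of the persistent setup. The paths are the standard systemd ones, but check your distro’s defaults before copying this:

```shell
# In /etc/systemd/journald.conf (the [Journal] section):
#
#   Storage=persistent    # or Storage=auto, which persists only if
#                         # /var/log/journal/ already exists
#
# Then create the directory with the right ownership/ACLs and restart:
sudo mkdir -p /var/log/journal
sudo systemd-tmpfiles --create --prefix /var/log/journal
sudo systemctl restart systemd-journald
```

After the restart, journalctl --list-boots should start accumulating entries across reboots.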


Reading auth.log — Where the Real Action Is

I moved my server from my home laptop to a Hetzner VPS this week. Within minutes of the migration, this is what I saw in /var/log/auth.log:

Connection reset by authenticating user root 2.57.121.69 port 10022 [preauth]
Connection reset by authenticating user root 45.148.10.147 port 56638 [preauth]
Connection reset by authenticating user root 45.148.10.152 port 21504 [preauth]

Different IPs, all trying to log in as root, all getting rejected before they even submit a password. The [preauth] tag means they were cut off at the door. This is a distributed brute-force attack — multiple machines hitting my server at the same time.
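To see who is hitting you hardest, count attempts per source IP. This is a sketch run against a fabricated four-line sample; on a real server you would point the same pipeline at /var/log/auth.log:

```shell
# Fabricated sample of the preauth lines shown above (plus one repeat).
cat > /tmp/auth_sample.log <<'EOF'
Connection reset by authenticating user root 2.57.121.69 port 10022 [preauth]
Connection reset by authenticating user root 45.148.10.147 port 56638 [preauth]
Connection reset by authenticating user root 45.148.10.152 port 21504 [preauth]
Connection reset by authenticating user root 45.148.10.147 port 56700 [preauth]
EOF

# Pull the source IP out of every rejected root attempt, rank by count.
grep 'authenticating user root' /tmp/auth_sample.log \
  | grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' \
  | sort | uniq -c | sort -rn
```

IPs near the top of the list are your blocking candidates.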

My own login looked completely different:

Accepted publickey for limon from 87.92.180.21 port 45342 ssh2: ED25519 SHA256:t0s9NPK...

Three things prove this is me and not an attacker: Accepted instead of Failed, my known Finnish IP address, and my specific ED25519 key fingerprint. In a real security audit you’d take that SHA256 hash and verify it matches your actual key.
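That check is one command: fingerprint your public key with ssh-keygen -lf and compare it to the hash in the log line. The key below is a throwaway generated only so the example is self-contained; a real audit uses the key you actually log in with:

```shell
# Generate a disposable ED25519 key purely for demonstration.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_key -N '' -q

# Print its SHA256 fingerprint; on a real server this should match
# the fingerprint shown in the Accepted publickey line in auth.log.
ssh-keygen -lf /tmp/demo_key.pub
```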

Every sudo command I ran also appeared in auth.log with the exact command, which directory I was in, and which terminal. Full audit trail, automatically.


Reading nginx Access Logs

One line from my nginx access log this week:

216.73.216.107 - - [22/Mar/2026:18:33:46 +0000] "GET /category/domain-08/ HTTP/1.1" 200 22879 "https://limonlab.online/category/domain-08" "ClaudeBot/1.0"

Seven fields, each one telling you something: the client IP, the timestamp, the HTTP method and URL, the status code, the response size in bytes, the referring page, and the user agent. (The two bare dashes are the identd and authenticated-user fields, almost always empty.)

That user agent — ClaudeBot — is Anthropic’s web crawler indexing my blog. Anthropic is reading my Linux notes. I found this genuinely funny.

Status codes you need to recognize cold: 200 is success, 301 is a permanent redirect, 403 is forbidden, 404 is not found, and 502 is bad gateway (upstream failure, usually meaning PHP-FPM or your app crashed).
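A status-code histogram makes anomalies jump out; a spike of 404s, for instance, usually means a vulnerability scanner probing for paths. Sketched here against a fabricated sample; point it at /var/log/nginx/access.log for real data:

```shell
# Fabricated mini access log in nginx's combined format.
cat > /tmp/access_sample.log <<'EOF'
1.2.3.4 - - [22/Mar/2026:18:33:46 +0000] "GET / HTTP/1.1" 200 1024 "-" "curl/8.0"
1.2.3.4 - - [22/Mar/2026:18:33:47 +0000] "GET /wp-login.php HTTP/1.1" 404 162 "-" "curl/8.0"
5.6.7.8 - - [22/Mar/2026:18:33:48 +0000] "GET /admin HTTP/1.1" 403 162 "-" "curl/8.0"
5.6.7.8 - - [22/Mar/2026:18:33:49 +0000] "GET / HTTP/1.1" 200 1024 "-" "curl/8.0"
EOF

# In the combined format the status code is field 9.
awk '{print $9}' /tmp/access_sample.log | sort | uniq -c | sort -rn
```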


dmesg — The Kernel’s Private Log

When I ran dmesg -T --level=err,crit,warn on my server, I got no hardware errors. Good news — the hardware is healthy. What I did see was this, repeating every 20 seconds:

[UFW BLOCK] IN=eth0 SRC=85.217.149.20 DST=204.168.177.239 PROTO=TCP DPT=28567 SYN
[UFW BLOCK] IN=eth0 SRC=85.217.149.19 DST=204.168.177.239 PROTO=TCP DPT=3093 SYN
[UFW BLOCK] IN=eth0 SRC=85.217.149.35 DST=204.168.177.239 PROTO=TCP DPT=42901 SYN

Port scan attempts from a botnet — five different IPs from the same 85.217.149.x subnet, hitting random ports looking for anything open. The SYN flag means these are TCP connection attempts. UFW is dropping them silently before they even get a response, so from the attacker’s perspective my server doesn’t exist.

The subnet pattern is intentional. Block one IP, the scan continues from the next. The correct response is blocking the entire /24 subnet in one UFW rule.
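Deriving the /24 is just string surgery on any of the scanning IPs. The ufw line is commented out because it needs root and a live firewall; the address math is plain text manipulation:

```shell
# Turn one scanner IP into its /24 network address.
subnet=$(echo 85.217.149.20 | sed -E 's/\.[0-9]+$/.0\/24/')
echo "$subnet"

# One rule drops the whole botnet range (run as root on the server):
# ufw insert 1 deny from "$subnet" to any
```

Inserting at position 1 matters: UFW evaluates rules in order, so the deny has to land before any broader allow.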


The Log Reading Framework

After this domain, I have one mental model I apply to every log line:

  1. Who generated it? — kernel, a service, PAM, systemd?
  2. What PID, and is it system or user level? — systemd[1] is system, systemd[1797] is a user session
  3. Success, failure, or warning? — words like Accepted, Failed, Killed, BLOCK
  4. Is this the root cause or a symptom? — always look for the first error, not the most recent one

That last point is the one I had to learn the hard way. When something breaks, you have ten lines of errors. Nine of them are consequences. One is the actual cause. It’s almost always the earliest timestamp.
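journalctl prints oldest entries first, so the habit is mechanical: read the burst top-down and chase the first error, not the last. A toy illustration with invented error text (on a real box you would run journalctl -p err -b | head instead):

```shell
# Invented error burst: lines two and three are symptoms of line one.
cat > /tmp/burst.log <<'EOF'
18:02:01 php-fpm: failed to bind socket /run/php/php-fpm.sock
18:02:02 nginx: connect() to unix:/run/php/php-fpm.sock failed
18:02:02 nginx: upstream prematurely closed connection
EOF

# The earliest error in the burst is the root cause.
head -n 1 /tmp/burst.log
```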
