Linux Server Hardening Checklist for Self-Hosters: 12 Practical Steps Beyond ufw

AI assistance: Drafted with AI assistance and edited by Auburn AI editorial.

Most self-hosting guides stop at “install ufw, open port 80, you’re done.” That’s not hardening – that’s barely a starting point. A firewall that blocks inbound traffic does nothing if an attacker gets a foothold through a misconfigured SSH daemon, a forgotten default credential, or a service running with more privileges than it needs. The threat model for a home server is different from a cloud VPS, but it isn’t trivial: your box is on a residential IP, probably without a dedicated SOC watching logs, and it likely holds personal data that falls under PIPEDA obligations if you’re processing anything beyond your own household. These 12 steps move you past the basics and into the kind of layered defence that actually reduces your attack surface.

1. Replace Password Authentication with SSH Keys

This is step one because everything else assumes you’ve closed the most common initial access vector. Password-based SSH is brute-forceable at scale. Key-based auth isn’t – at least not in any practical timeframe with a properly generated key pair.

Generate a key on your client machine, not the server:

ssh-keygen -t ed25519 -C "homenode-2024" -f ~/.ssh/homenode_ed25519

Copy the public key to the server:

ssh-copy-id -i ~/.ssh/homenode_ed25519.pub user@192.168.1.x

Then lock down the daemon. Edit /etc/ssh/sshd_config and confirm these lines are set (don’t just add them – search first to avoid duplicates):

PasswordAuthentication no
PermitRootLogin no
AuthenticationMethods publickey
X11Forwarding no
AllowTcpForwarding no
MaxAuthTries 3
LoginGraceTime 30

Validate the syntax with sudo sshd -t (a typo here can lock you out), then reload: sudo systemctl reload sshd. Test from a second terminal before closing your current session – locking yourself out is a rite of passage you only want to go through once.
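
One refinement worth knowing about: sshd supports Match blocks for conditional overrides. A sketch, assuming your LAN is 192.168.1.0/24 (adjust to your actual subnet), that keeps keys-only auth from everywhere else but tolerates password auth from inside the LAN as a fallback while you distribute keys to all your client devices:

```
# At the END of /etc/ssh/sshd_config -- a Match block applies until the next Match
Match Address 192.168.1.0/24
    PasswordAuthentication yes
    AuthenticationMethods publickey password
```

Remove the block once every client has a key; it is a convenience trade-off, not part of the hardened baseline.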

2. Move SSH Off Port 22

Security through obscurity isn’t a strategy on its own. But moving SSH to a non-standard port (anything above 1024, commonly in the 2200-65000 range) eliminates the vast majority of automated scanning noise. Our reading of typical auth logs suggests that within 24 hours of a residential IP being reachable on port 22, you’ll see hundreds to thousands of login attempts. On a non-standard port, that number drops close to zero.

# In /etc/ssh/sshd_config
Port 2222

Update your ufw rule accordingly:

sudo ufw allow 2222/tcp
sudo ufw delete allow 22/tcp

And add a Port entry to your client-side ~/.ssh/config so you don’t have to type it every time:

Host homenode
    HostName 192.168.1.x
    User youruser
    Port 2222
    IdentityFile ~/.ssh/homenode_ed25519

3. Install and Configure fail2ban

Even with keys-only auth, you want fail2ban running. It monitors log files and temporarily bans IPs that show patterns consistent with brute-force or scanning. Install it:

sudo apt install fail2ban -y

Never edit /etc/fail2ban/jail.conf directly – it gets overwritten on upgrades. fail2ban reads /etc/fail2ban/jail.local on top of it, so create that file containing only the sections you want to override rather than copying jail.conf wholesale:

sudo nano /etc/fail2ban/jail.local

Add an [sshd] section and configure it. If you changed your SSH port, that matters here:

[sshd]
enabled = true
port = 2222
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600
findtime = 600

That bans for one hour after three failures in ten minutes. You can be more aggressive – bantime = 86400 for 24 hours is reasonable on a home server. On distributions that log SSH to the systemd journal rather than /var/log/auth.log (Debian 12 without rsyslog, for example), add backend = systemd to the jail. Enable and start the service:

sudo systemctl enable fail2ban
sudo systemctl start fail2ban

Check active bans with sudo fail2ban-client status sshd.
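
fail2ban also ships a recidive jail that watches fail2ban's own log and hands much longer bans to repeat offenders across all jails. A sketch for jail.local – the filter comes with the package, and these values mirror the stock defaults:

```ini
[recidive]
enabled = true
logpath = /var/log/fail2ban.log
# ban for one week any IP banned 5 times in the last day
bantime = 604800
findtime = 86400
maxretry = 5
```

This is the cheapest way to turn an hour-long sshd ban into a week-long one for persistent scanners.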

4. Enable AppArmor and Audit Your Profiles

AppArmor is a Linux Security Module that confines programs to a defined set of resources. On Ubuntu and Debian derivatives, it’s installed by default but often running with most services in “complain” mode rather than “enforce” mode – meaning it logs violations without blocking them.

sudo apt install apparmor-utils -y
sudo aa-status

That output shows you which profiles are in enforce mode versus complain mode. For services like nginx, Apache, or MySQL, switch them to enforce:

sudo aa-enforce /etc/apparmor.d/usr.sbin.nginx

For services without existing profiles, use aa-genprof to generate one by observing normal operation. It’s a bit tedious, but the payoff is that a compromised web process can’t read files outside its defined scope.

What we found surprising when first working through this: a default nginx AppArmor profile in enforce mode will break some configurations (like Let’s Encrypt webroot challenges) until you add the right path permissions. Budget an hour to test each enforced service under real conditions before declaring it done.
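
The fix for that kind of breakage usually belongs in the profile's local include, which survives package updates – assuming your nginx profile follows the stock include structure (#include <local/usr.sbin.nginx>). A sketch, with /var/www/letsencrypt as a placeholder for wherever your webroot challenges actually land:

```
# /etc/apparmor.d/local/usr.sbin.nginx
# Grant read access to the ACME webroot, then reload the profile with:
#   sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx
/var/www/letsencrypt/** r,
```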

5. Configure Automatic Security Updates

Unpatched packages are how most server compromises actually happen – not exotic zero-days. unattended-upgrades handles this on Debian/Ubuntu systems:

sudo apt install unattended-upgrades apt-listchanges -y
sudo dpkg-reconfigure -plow unattended-upgrades

That wizard sets up automatic security-only updates. The full configuration lives in /etc/apt/apt.conf.d/50unattended-upgrades. Key lines to verify or set:

Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::AutoFixInterruptedDpkg "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "false";
Unattended-Upgrade::Mail "you@yourdomain.ca";

Setting Automatic-Reboot to false means kernel updates will need a manual reboot – acceptable for a home server where you want control over when downtime happens. The mail option requires a working MTA, which is its own rabbit hole; if you don’t have one configured, skip it and check /var/log/unattended-upgrades/ manually on a schedule.
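
The dpkg-reconfigure step also writes a second file, /etc/apt/apt.conf.d/20auto-upgrades, which controls how often the machinery actually runs. Worth verifying it contains:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Without those two lines, 50unattended-upgrades is configuration that never executes.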

6. Harden the Kernel with sysctl

The Linux kernel exposes tunable parameters through /proc/sys/, configurable persistently via /etc/sysctl.d/. Several defaults are permissive in ways that aren’t useful for a home server. Create /etc/sysctl.d/99-hardening.conf:

# Disable IP forwarding unless you're routing
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0

# Prevent SYN flood attacks
net.ipv4.tcp_syncookies = 1

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0

# Log martian packets
net.ipv4.conf.all.log_martians = 1

# Protect against time-wait assassination
net.ipv4.tcp_rfc1337 = 1

# Restrict dmesg access to root
kernel.dmesg_restrict = 1

# Prevent core dumps from SUID programs
fs.suid_dumpable = 0

Apply immediately without rebooting:

sudo sysctl -p /etc/sysctl.d/99-hardening.conf

If your server acts as a router, or runs containers with bridged networking, check before setting ip_forward = 0 – Docker and some VPN setups need forwarding enabled.
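
Individual values can be read back at any time, either with sysctl net.ipv4.tcp_syncookies or straight from /proc – a quick way to confirm the file actually applied:

```shell
# Dots in a sysctl name map to slashes under /proc/sys
cat /proc/sys/net/ipv4/tcp_syncookies
```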

7. Set Up Log Rotation Properly

Logs that aren’t rotated will eventually fill your disk. Logs that are rotated too aggressively leave you blind during an incident. logrotate is installed by default on most distributions; the question is whether your custom services and any third-party software have correct rotation configs.

Check what’s currently configured:

ls /etc/logrotate.d/

If you’re running something like Nginx installed from a non-system package, or a self-hosted app, it may not have dropped a config there. Create one for any uncovered service. A typical config for a custom app writing to /var/log/myapp/:

/var/log/myapp/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 myappuser adm
    sharedscripts
    postrotate
        systemctl reload myapp > /dev/null 2>&1 || true
    endscript
}

Fourteen days of compressed daily logs is a reasonable baseline. For security audit purposes, you may want to keep auth logs longer – 30 days is sensible for a personal server. Test your config without actually rotating:

sudo logrotate -d /etc/logrotate.d/myapp

8. Audit Running Services and Remove What You Don’t Need

Every running service is an attack surface. Default OS installations often include services you’ll never use:

sudo systemctl list-units --type=service --state=running

Common culprits on home servers: avahi-daemon (mDNS, often unneeded), cups (printing), bluetooth.service, and various hardware abstraction services that don’t apply to your hardware. Disable and stop anything you don’t need:

sudo systemctl disable --now avahi-daemon
sudo systemctl disable --now cups

Also check what’s listening on the network – not just what ufw allows, but what’s actually bound:

sudo ss -tlnp

Anything listening on 0.0.0.0 that you don’t recognize or need externally should be bound to 127.0.0.1 in its configuration, or firewalled off.
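
To cut that output down to just the wildcard binds, a small awk filter over ss helps. A sketch – the regex matches the 0.0.0.0, *, and [::] local-address forms ss prints; keep the sudo if you also want process names:

```shell
# Print local address:port for TCP listeners bound to all interfaces
ss -tln | awk '$4 ~ /^(0\.0\.0\.0|\*|\[::\]):/ { print $4 }'
```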

9. Create a Dedicated Non-Root User with Sudo

If you’re still logging in as root, or you set up a single user account that runs all your services – stop. Each service should run as its own low-privilege user. Your own administration account should use sudo for elevated tasks rather than operating as root directly.

sudo adduser adminuser
sudo usermod -aG sudo adminuser

For service accounts, create them without login shells:

sudo adduser --system --no-create-home --shell /usr/sbin/nologin serviceuser

Then in your systemd unit file, specify:

[Service]
User=serviceuser
Group=serviceuser

This means a compromised service process runs with the permissions of that user only – not yours, and not root’s.
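
systemd can tighten this further with its sandboxing directives. All of the options below are standard systemd.exec settings, but each needs testing against what your service actually does – ReadWritePaths and /var/lib/myapp are placeholders for your service's real state directory:

```ini
[Service]
User=serviceuser
Group=serviceuser
NoNewPrivileges=true
# Mount / read-only for this process; /etc and /usr become untouchable
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
# With ProtectSystem=strict, explicitly whitelist writable paths
ReadWritePaths=/var/lib/myapp
```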

10. Enable and Review auditd

auditd is the Linux Audit daemon. It can log specific system calls, file accesses, and privilege escalations to a tamper-resistant log. It’s heavier than basic syslog but invaluable for after-the-fact forensics.

sudo apt install auditd audispd-plugins -y
sudo systemctl enable --now auditd

A minimal useful ruleset in /etc/audit/rules.d/hardening.rules:

# Watch for changes to passwd and shadow
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity

# Monitor sudo usage
-w /usr/bin/sudo -p x -k sudo_usage

# Watch SSH config changes
-w /etc/ssh/sshd_config -p wa -k sshd_config

# Log failed file opens (EACCES) – catches permission-denied probes
-a always,exit -F arch=b64 -S open,openat -F exit=-EACCES -k access_denied

Load the rules: sudo augenrules --load. Query the log with sudo ausearch -k sudo_usage.
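
Once your ruleset is stable, consider appending -e 2 as the final rule. It makes the audit configuration immutable until reboot, so an attacker who gains root can't quietly disable auditing:

```
# Last line of /etc/audit/rules.d/hardening.rules
# (changing rules after this requires a reboot)
-e 2
```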

11. Set Up Intrusion Detection with AIDE or Tripwire

Auditd tells you what happened live. A file integrity monitor tells you whether your filesystem has been modified relative to a known-good baseline. AIDE (Advanced Intrusion Detection Environment) is the open-source standard:

sudo apt install aide -y
sudo aideinit

That initial run takes several minutes – it’s hashing everything in scope. The database lands in /var/lib/aide/aide.db.new. Copy it to the active location:

sudo cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db

Schedule a daily check via a system crontab – the root user field below means this belongs in /etc/crontab or a file under /etc/cron.d/, not a user crontab:

0 3 * * * root /usr/bin/aide --check 2>&1 | mail -s "AIDE report $(hostname)" you@yourdomain.ca

AIDE will flag legitimate changes too – package updates, log rotation, config edits. The discipline is reviewing the report and acknowledging known changes, then refreshing the baseline with aide --update, which (like aideinit) writes to aide.db.new; copy it over aide.db once you've reviewed the diff.

12. Rotate and Protect Secrets and API Keys

Self-hosted applications accumulate credentials: database passwords, API tokens for services like Backblaze, webhook secrets, SMTP passwords. These are often committed to config files with world-readable permissions, or worse, to git repositories.

A baseline approach:

  • Config files with secrets should be owned by the service user, mode 0600 or 0640, never 0644.
  • Use a .env file pattern and reference it from systemd units with EnvironmentFile=/etc/myapp/secrets.env.
  • Add a recurring calendar reminder to rotate long-lived API keys – quarterly is a reasonable cadence for a home server.
  • If you’re running anything that touches Canadian personal data, PIPEDA’s accountability principle means you should be able to demonstrate you’ve taken reasonable steps to protect that data – documented configuration counts.
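
Concretely, the EnvironmentFile pattern looks like this – myapp, serviceuser, and the variable names are placeholders for your own service:

```ini
# /etc/myapp/secrets.env  (chown serviceuser, chmod 0600)
DB_PASSWORD=change-me
BACKUP_API_KEY=change-me

# /etc/systemd/system/myapp.service (excerpt)
[Service]
EnvironmentFile=/etc/myapp/secrets.env
```

The unit file itself stays world-readable and commit-safe; only the env file needs restrictive permissions.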

For a more robust secrets solution at home, HashiCorp Vault is worth evaluating if you’re managing more than a handful of services, though it adds meaningful operational complexity.

Putting It Together

None of these steps is individually sufficient. The value is in stacking them: SSH keys stop credential stuffing, fail2ban reduces noise and slows scanning, AppArmor limits blast radius if a service is exploited, auditd and AIDE give you visibility after the fact, and secret hygiene prevents lateral movement. A checklist run-through on a fresh server takes two to three hours done properly. On an existing server with services already running, budget a weekend – testing each change in isolation before moving to the next is how you avoid outages.

From our experience, the steps most commonly skipped are AppArmor enforcement (because it requires per-service testing) and auditd (because the output is dense). Those are also the two that would have mattered most in most of the home server compromises we’ve seen discussed in Canadian self-hosting communities – the attacker got in through a known vulnerability, and the owner had no logs to show what happened next.

– Auburn AI editorial, Calgary AB



What Actually Stops You From Implementing This

I’ve walked through these twelve steps with my own servers—a mix of VPS instances and local hardware in the basement. Here’s what I’ve learned: the checklist itself isn’t the hard part. The friction comes after you’ve hardened things and suddenly need to remember which port SSH moved to, or you lock yourself out of a box because you were too aggressive with fail2ban tuning.

The real problem is maintenance burden. Once you’ve disabled password auth, configured AppArmor profiles, and set up audit logging, you own that complexity. A zero-day in sshd isn’t just a patch anymore—it’s a patch you need to deploy across multiple machines while remembering your specific configuration drift. I’ve had to rebuild a server because I couldn’t reverse an overly restrictive SELinux policy I’d set months earlier and forgotten about.

Start with the steps that give you the most friction reduction first:

  • Key-based auth and disabling password login—this solves most brute force attempts and is reversible
  • Firewall rules that match your actual traffic—overblocking causes more downtime than undersecurity in a homelab
  • Audit logging for your own services only, not the entire system—full audit logs grow fast and the noise obscures actual problems
  • Regular backups before anything else—hardening without tested restore paths is security theater

Document what you actually implement, not what the checklist says you should. Your future self will thank you.
