Backing Up a Home Lab in 2026: 3-2-1 Strategy with Open-Source Tools Only

AI assistance: Drafted with AI assistance and edited by Auburn AI editorial.

Most home labs run fine until they don’t. A failed drive, an accidental rm -rf, a ZFS pool that won’t import after a kernel update – these things happen, and they happen at the worst possible time. The 3-2-1 backup rule has been around since the film photography era (three copies, two different media types, one offsite), and it holds up just as well for a 2026 home lab as it did for analogue slides. What’s changed is the tooling. You no longer need a commercial backup service or a proprietary appliance to do this properly. Restic and rclone, both open-source and actively maintained, cover the full chain from local deduplication to cold cloud storage – and they work on any Linux host without phoning home to anyone.

Understanding the 3-2-1 Model for a Home Lab Context

The rule is simple to state and easy to implement badly. “Three copies” doesn’t mean three copies of your most recent snapshot – it means three independent copies, each of which could restore you if the other two were gone. “Two different media” means your spinning external USB drive and your NVMe pool are not two different media for this purpose; they’re both block storage sitting in the same physical chassis. “One offsite” means genuinely offsite: a cloud bucket in a different geographic region, a drive at a family member’s house, or a VPS you SSH into.

For a typical home lab running Proxmox VE 8.x or a bare Debian/Ubuntu host, a workable 3-2-1 layout looks like this:

  • Copy 1: Live data on the primary pool (ZFS, ext4, Btrfs – doesn’t matter).
  • Copy 2: A local restic repository on a USB-attached drive or a secondary internal disk in a different enclosure.
  • Copy 3: A remote restic repository pushed offsite via rclone to an S3-compatible bucket or a Backblaze B2 bucket (both of which have free-tier entry points).

What we found surprising when stress-testing this setup: the bottleneck is almost never the backup software. It’s the discipline of actually running restores. A backup you haven’t tested restoring is not a backup – it’s a hope.

Restic: What It Does and Why It Fits

Restic is a content-addressed, deduplicated, encrypted backup program written in Go. The project lives at restic.net and the source at github.com/restic/restic. As of version 0.17.x (current in early 2026), it supports repositories on local paths, SFTP, S3, Azure Blob, Google Cloud Storage, rclone backends, and several others natively.

Key properties worth knowing before you start configuring:

  • Encryption is mandatory. Every restic repository is AES-256-CTR encrypted with a password you supply. There’s no plaintext mode. This matters for Canadian data residency concerns – even if your cloud provider stores data in a US region, the content is encrypted client-side before it leaves your network.
  • Deduplication is chunk-based. Restic splits files into variable-length chunks and stores each chunk once, referenced by its SHA-256 hash. A 50 GB VM disk that changes 200 MB between snapshots will upload roughly 200 MB on the second run, not 50 GB again.
  • Snapshots are point-in-time. Each restic backup run creates a snapshot with its own ID. You can restore any snapshot independently, browse the file tree inside it, or mount it as a FUSE filesystem.
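
A quick illustration of that last point – the repo path and mount point below are placeholders; unmount with Ctrl-C or fusermount -u when you're done:

mkdir -p /mnt/restic-browse
restic -r /path/to/repo mount /mnt/restic-browse
# snapshots appear under /mnt/restic-browse/snapshots/<ID>/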

Install on Debian 12 / Ubuntu 24.04:

sudo apt update
sudo apt install restic
restic version
# restic 0.17.1 compiled with go1.22.x on linux/amd64

Or pull the latest binary directly from GitHub releases if your distro ships an older version:

wget https://github.com/restic/restic/releases/download/v0.17.1/restic_0.17.1_linux_amd64.bz2
bunzip2 restic_0.17.1_linux_amd64.bz2
chmod +x restic_0.17.1_linux_amd64
sudo mv restic_0.17.1_linux_amd64 /usr/local/bin/restic
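
If you go the GitHub route, it's worth verifying the .bz2 against the SHA256SUMS file published alongside each release before decompressing it:

wget https://github.com/restic/restic/releases/download/v0.17.1/SHA256SUMS
sha256sum --check --ignore-missing SHA256SUMS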

Setting Up Local and Remote Repositories

Start with your local repository. Mount your USB backup drive at /mnt/backupdisk and initialize:

export RESTIC_PASSWORD="a-long-random-passphrase-store-this-safely"
restic init --repo /mnt/backupdisk/homelab-restic
# created restic repository abc123def at /mnt/backupdisk/homelab-restic

Run your first backup. Here we’re backing up /etc, /home, and a Proxmox dump directory:

restic -r /mnt/backupdisk/homelab-restic backup \
  /etc \
  /home \
  /var/lib/vz/dump \
  --exclude='/home/*/.cache' \
  --exclude='/var/lib/vz/dump/*.tmp'

Check what snapshots exist:

restic -r /mnt/backupdisk/homelab-restic snapshots

Apply a retention policy so old snapshots don’t fill your drive. This keeps daily snapshots for 7 days, weekly for 4 weeks, monthly for 6 months:

restic -r /mnt/backupdisk/homelab-restic forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune

For the remote repository, the process is identical – just point --repo at an S3 or B2 URL. More on that in the rclone section below. Our reading suggests keeping the local and remote repositories separate (different repo paths, same password or different passwords stored in a secrets manager) so that a compromised cloud credential can’t delete your local copy and vice versa.
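
One way to implement that separation – paths here are illustrative – is a dedicated password file per repository, readable only by root, passed via --password-file (or RESTIC_PASSWORD_FILE):

sudo install -d -m 700 /etc/restic
sudo sh -c 'head -c 32 /dev/urandom | base64 > /etc/restic/local-password'
sudo sh -c 'head -c 32 /dev/urandom | base64 > /etc/restic/remote-password'
sudo chmod 600 /etc/restic/local-password /etc/restic/remote-password
restic -r /mnt/backupdisk/homelab-restic --password-file /etc/restic/local-password snapshots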

Rclone as the Offsite Transport Layer

Rclone handles the connection between restic and any remote storage backend. Project page: rclone.org. It supports over 70 storage providers and can also be used as a restic backend directly, which means restic talks to rclone, and rclone translates that to whatever API the remote storage speaks.

Install rclone on Debian/Ubuntu:

sudo apt install rclone
# or use the install script for the latest version:
curl https://rclone.org/install.sh | sudo bash
rclone version
# rclone v1.68.x

Configure a remote. Run rclone config and follow the interactive prompts to add a Backblaze B2 remote (named b2home here) or an S3-compatible remote. The config file lives at ~/.config/rclone/rclone.conf. A minimal B2 entry looks like:

[b2home]
type = b2
account = YOUR_B2_ACCOUNT_ID
key = YOUR_B2_APPLICATION_KEY
hard_delete = false
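
Sanity-check the remote before pointing restic at it:

rclone lsd b2home:
# should list your buckets; any credential problem surfaces here, not mid-backup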

Once rclone is configured, you have two options for the remote restic repo:

Option A – restic native S3/B2 backend: Restic can talk directly to B2 or S3 without rclone as an intermediary. Set environment variables and init:

export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
export RESTIC_REPOSITORY="s3:https://s3.ca-central-1.amazonaws.com/your-bucket/homelab"
export RESTIC_PASSWORD="same-or-different-passphrase"
restic init

Using the ca-central-1 region keeps data in Canada if that matters for your situation – relevant if you’re handling any data that touches PIPEDA obligations, or just prefer Canadian data residency on principle.
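
The equivalent for restic's native B2 backend, if you'd rather skip the S3-compatible layer, uses B2_ACCOUNT_ID and B2_ACCOUNT_KEY (bucket name is a placeholder):

export B2_ACCOUNT_ID="your-b2-key-id"
export B2_ACCOUNT_KEY="your-b2-application-key"
export RESTIC_REPOSITORY="b2:your-bucket-name:homelab"
restic init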

Option B – rclone backend: Useful when your storage target is one of the providers restic doesn’t speak natively, or when you want rclone’s bandwidth throttling and transfer features:

export RESTIC_REPOSITORY="rclone:b2home:your-bucket-name/homelab"
export RESTIC_PASSWORD="your-passphrase"
restic init

Both options produce a normal restic repository. The restore procedure is identical regardless of which backend you used.
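
For instance, pulling the latest snapshot's /etc back out of the remote repo (target directory is arbitrary):

restic -r "rclone:b2home:your-bucket-name/homelab" restore latest \
  --target /tmp/remote-restore \
  --include /etc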

Cold Storage: Keeping Costs Near Zero

The “cold” tier matters for long-term retention. You don’t want to pay retrieval costs every time you run restic check. A practical approach for a home lab:

  • Use B2 or Wasabi for your active remote repo (frequent writes, occasional reads during verification). Both have S3-compatible APIs and no egress fees to Cloudflare-connected infrastructure.
  • For annual archives – say, a full snapshot of your lab state each January – export a restic snapshot to a tar archive and push that to a genuinely cold tier (Amazon S3 Glacier Instant Retrieval, or a physical drive stored at a second location). This is cheaper per GB for data you almost never read.

To export a specific restic snapshot to a tar file for cold archiving:

# List snapshots to find the ID you want
restic -r /mnt/backupdisk/homelab-restic snapshots

# Dump snapshot to a compressed archive
restic -r /mnt/backupdisk/homelab-restic dump --archive tar SNAPSHOT_ID / | \
  gzip > /tmp/annual-archive-2026-01.tar.gz

# Push to cold bucket via rclone
rclone copy /tmp/annual-archive-2026-01.tar.gz coldremote:cold-archive-bucket/
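
Restoring from the cold tier is the same steps in reverse:

rclone copy coldremote:cold-archive-bucket/annual-archive-2026-01.tar.gz /tmp/
mkdir -p /tmp/annual-restore
tar -xzf /tmp/annual-archive-2026-01.tar.gz -C /tmp/annual-restore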

A 100 GB home lab backup, compressed and deduplicated, typically lands somewhere between 20 and 60 GB, depending on how much of your data is already compressed (media files, existing ZFS compressed datasets). At B2’s current pricing that’s well under $2 CAD per month for active storage – not a compelling reason to skip backups.

Automating the Whole Chain with Systemd Timers

Cron works fine, but systemd timers give you better logging and failure notifications. Here’s a minimal setup for a nightly backup at 02:00.

Create the backup script at /usr/local/bin/homelab-backup.sh:

#!/bin/bash
set -euo pipefail

export RESTIC_PASSWORD_FILE=/etc/restic/password
LOCAL_REPO=/mnt/backupdisk/homelab-restic
REMOTE_REPO="rclone:b2home:your-bucket/homelab"

# Local backup
restic -r "$LOCAL_REPO" backup /etc /home /var/lib/vz/dump \
  --exclude='/home/*/.cache'

restic -r "$LOCAL_REPO" forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# Remote backup
restic -r "$REMOTE_REPO" backup /etc /home /var/lib/vz/dump \
  --exclude='/home/*/.cache'

restic -r "$REMOTE_REPO" forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

echo "Backup completed: $(date)"

Store the password in /etc/restic/password (mode 600, owned by root). Then create the systemd service and timer:

# /etc/systemd/system/homelab-backup.service
[Unit]
Description=Homelab Restic Backup
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/homelab-backup.sh
User=root

# /etc/systemd/system/homelab-backup.timer
[Unit]
Description=Run homelab backup nightly

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Reload systemd, then enable and start the timer:

sudo systemctl daemon-reload
sudo systemctl enable --now homelab-backup.timer
systemctl list-timers homelab-backup.timer
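
For the failure-notification half, one common systemd pattern is an OnFailure= hook pointing at a small one-shot notifier unit. The webhook URL below is a placeholder for whatever alerting you already run (ntfy, Gotify, email):

# add under [Unit] in homelab-backup.service:
#   OnFailure=homelab-backup-notify.service

# /etc/systemd/system/homelab-backup-notify.service
[Unit]
Description=Alert on failed homelab backup

[Service]
Type=oneshot
ExecStart=/usr/bin/curl -fsS -m 10 -d "homelab backup failed" https://ntfy.example.com/homelab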

Add a monthly restic check run against both repos to verify data integrity. This reads a sample of pack files and verifies their checksums – it will catch silent corruption on a degraded drive before you need to rely on that copy.
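
Against the repos used above, that looks like this (the 5% subset is a reasonable starting point, not a magic number – raise it if the runs are fast):

restic -r /mnt/backupdisk/homelab-restic check --read-data-subset=5%
restic -r "rclone:b2home:your-bucket/homelab" check --read-data-subset=5%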

Testing Restores: The Part Everyone Skips

A restore test doesn’t have to be dramatic. Once a month, pick a random file from a snapshot and restore it to /tmp:

restic -r /mnt/backupdisk/homelab-restic restore SNAPSHOT_ID \
  --target /tmp/restore-test \
  --include /etc/fstab

diff /etc/fstab /tmp/restore-test/etc/fstab

Quarterly, test a full VM restore from your Proxmox dump directory backup. Spin up a temporary VM, point it at the restored dump, verify it boots. This takes 30 minutes and will tell you things about your backup configuration that no amount of restic snapshots output will reveal.
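
On Proxmox the sketch looks roughly like this – the dump filename and VMID are illustrative; pick an unused VMID so you don’t clobber the original:

restic -r /mnt/backupdisk/homelab-restic restore latest \
  --target /tmp/vm-restore --include /var/lib/vz/dump
qmrestore /tmp/vm-restore/var/lib/vz/dump/vzdump-qemu-100-2026_01_15-02_00_01.vma.zst 9100 \
  --storage local-lvm
qm start 9100
# confirm the guest boots, then clean up
qm stop 9100 && qm destroy 9100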

From our experience, the most common failure mode isn’t the backup software – it’s the exclusion list. People exclude too aggressively, miss a critical config directory, and only discover it when restoring under pressure. Keep your exclusion list minimal and document why each entry is there.

The tools covered here – restic 0.17.x, rclone 1.68.x, systemd timers – are mature, well-documented, and free in every sense of the word. The 3-2-1 strategy they enable is solid enough that there’s genuinely no good reason a 2026 home lab should be running without it.

– Auburn AI editorial, Calgary AB

