
Pi 5 vs Mini PC for Always-On Home Services in 2026: I Switched Back After 8 Months
Eight months ago I decommissioned my three-node Pi 5 cluster and consolidated everything onto a single mini PC. Last week someone in the Calgary Homelab Discord asked me why, and I realized I’d never actually written it up properly. The short version: the Pi setup looked great on paper and terrible on a spreadsheet. Here’s the full math, the real performance numbers I measured, and the parts I still regret about switching.
How I Got Here: The Pi 5 Cluster Setup
I was running three Raspberry Pi 5 8GB boards in a custom acrylic rack on my server shelf. Each one had a 256GB NVMe SSD on a Pimoroni NVMe Base, powered through PoE+ hats pulling from a Netgear switch. The pitch was obvious: low individual power draw, silent operation, easy to replace a failed node, ARM architecture that felt like the future.
Services were split across the cluster using Docker Swarm. Node one handled networking stuff — AdGuard Home, Nginx Proxy Manager, Uptime Kuma. Node two ran media services — Jellyfin, Sonarr, Radarr, Prowlarr. Node three was the data node — Nextcloud, Vaultwarden, Immich.
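In case it helps anyone replicating the layout, the split was done with Swarm placement constraints. A minimal sketch, with hypothetical node names and labels (the article doesn't show the exact commands, so adapt these to your own cluster):

```shell
# Label each Pi for its role (run from a Swarm manager node;
# node names are whatever `docker node ls` shows on your cluster):
docker node update --label-add role=network pi-node-1
docker node update --label-add role=media   pi-node-2
docker node update --label-add role=data    pi-node-3

# Then pin each service with a placement constraint, e.g. AdGuard Home:
docker service create \
  --name adguard \
  --constraint 'node.labels.role==network' \
  --publish published=53,target=53,protocol=udp \
  adguard/adguardhome:latest
```

The same `--constraint` pattern pins the media and data services to their nodes.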
On paper, that’s a clean architecture. In practice, it was a cluster of problems I didn’t anticipate.
The Power Draw Reality Check
The Pi 5 is rated around 5W idle, and that’s roughly accurate for the board alone. But once you add the NVMe SSD, the PoE hat, and actual load from running containers, each node was sitting at 8–11W under normal operation. I measured this with a TP-Link Kasa smart plug on each node for six weeks.
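If you want to replicate the measurement, python-kasa ships a CLI that can read the plug's energy meter. A sketch with a hypothetical plug address; the emeter output format has changed between python-kasa versions, so treat the parsing pattern as a starting point:

```shell
# Parse instantaneous watts out of `kasa emeter` output. The CLI comes
# from python-kasa (`pip install python-kasa`); its output format varies
# by version, so adjust the pattern to match what yours prints.
kasa_watts() {
  grep -oE 'power[=: ]+[0-9.]+' | grep -oE '[0-9.]+' | head -n1
}

# One reading per minute to a CSV (hypothetical plug address):
#   while true; do
#     echo "$(date -u +%FT%TZ),$(kasa --host 192.168.1.50 emeter | kasa_watts)" >> node1-power.csv
#     sleep 60
#   done
```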
Three nodes at an average of 9.5W each: 28.5W continuous. Over a year, that’s roughly 250 kWh. At Alberta’s current residential electricity rates — I was paying about $0.17/kWh all-in with distribution charges — that works out to about $42.50 CAD per year just in electricity. Not catastrophic, but not the “basically free” narrative you see in a lot of Pi content.
The mini PC I switched to, an N100-based unit with 16GB RAM and a 512GB SSD, idles at 6–8W doing the same work. Under moderate load it hits maybe 12–15W. Measured average over a similar six-week period: about 8W. Annual electricity cost: roughly $12 CAD. That's a $30/year difference, meaningful over three years.
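The arithmetic is simple enough to script if you want to plug in your own wattage and rate. Watts to annual dollars, matching the numbers above:

```shell
# Annual electricity cost: kWh/year = watts * 8760 hours / 1000.
annual_cost() {
  # $1 = average watts, $2 = price per kWh
  awk -v w="$1" -v rate="$2" 'BEGIN { printf "%.2f\n", w * 8760 / 1000 * rate }'
}

annual_cost 28.5 0.17   # Pi cluster:   42.44
annual_cost 8    0.17   # N100 mini PC: 11.91
```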
SSD Wear: The Problem Nobody Talks About Enough
This was the thing that actually pushed me to switch. I checked the SMART data on my Pi 5 NVMe drives at the six-month mark and the wear indicators were moving faster than I expected.
The issue isn’t that Pi 5 is uniquely hard on SSDs. It’s that certain services generate a lot of write I/O, and when you’re constrained to one drive per node without much RAM to buffer writes, that load hits storage harder. Immich was the worst offender — machine learning thumbnail generation and metadata indexing on a single 256GB drive with 8GB RAM meant constant swap and cache writes.
I was also running Nextcloud with Redis on the same node, and the combination of database writes, PHP sessions, and Redis persistence added up. At six months, the 256GB drive on the data node had 18 TBW consumed against a rated 150 TBW. Annualized, that’s 36 TBW per year, meaning I’d theoretically burn through the drive rating in just over four years — while paying NVMe prices for drives I was stuffing into a Pi hat.
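That projection is a one-liner if you want to run it against your own SMART numbers:

```shell
# Years until the endurance rating is consumed, at the observed rate.
drive_life_years() {
  # $1 = TBW consumed, $2 = months elapsed, $3 = rated TBW
  awk -v used="$1" -v months="$2" -v rated="$3" \
    'BEGIN { printf "%.1f\n", rated / (used / months * 12) }'
}

drive_life_years 18 6 150   # 4.2 years at my measured rate
```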
The mini PC consolidates everything onto one drive with proper DRAM cache and a more capable write buffer. Same services, four months in: about 9 TBW. The difference is partly the better SSD (a 512GB drive with higher endurance rating), partly that the x86 platform handles memory pressure better, and partly that I have enough RAM headroom to actually cache aggressively instead of writing through.
Replacing Drives on a Pi Cluster Is Its Own Problem
When the NVMe on node three started showing early warning signs, I had to take down Nextcloud, Vaultwarden, and Immich to swap it. Those services don’t play nicely with being half-available. The cluster architecture I’d imagined would give me redundancy didn’t actually give me service continuity for stateful apps without a lot more work — shared storage, proper volume replication, none of which I’d set up because it’s genuinely complex at the homelab scale.
With the mini PC, drive replacement is still an outage, but it’s one planned maintenance window instead of three separate nodes I have to think about.
Container Performance: Where x86 Still Wins in 2026
ARM support for container images has gotten genuinely better. Most things I run have proper ARM64 images now, and I’m not pulling emulated x86 layers the way I was two years ago. But “runs on ARM” and “runs well on ARM” are still different things for specific workloads.
Jellyfin transcoding on the Pi 5 is functional, but only partly hardware-assisted: the VideoCore VII can decode HEVC in hardware, but there is no hardware encoder, so the encode side falls back to the CPU. I got decode acceleration working with the proper V4L2 setup, but it's finicky, the container configuration is more complex than on x86 with Intel QuickSync, and when it broke after a Jellyfin update I spent two evenings debugging it. On the N100 mini PC, QuickSync just works, it's been stable across four Jellyfin updates, and I can run two simultaneous 1080p transcodes without the CPU breaking a sweat.
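For comparison, the whole QuickSync setup on the mini PC is one device passthrough. A sketch; the volume paths are illustrative, not my exact setup:

```shell
# Jellyfin with Intel QuickSync: pass the GPU render node through and
# add the container to the host's render group. Hardware acceleration
# (QSV/VA-API) still needs to be enabled in the Jellyfin dashboard.
docker run -d \
  --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  --group-add "$(getent group render | cut -d: -f3)" \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin:latest
```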
Immich face recognition and CLIP embedding is the other big one. On Pi 5, the ML worker was slow enough that processing a new batch of photos felt like watching paint dry. It would take 45–60 seconds per photo for initial processing during a bulk import. On the N100, same container, same model: 8–12 seconds per photo. That’s not a minor difference when you’re importing a family vacation album.
What ARM Is Still Good For
I want to be fair here. I still run two Pi 5 boards. One is my dedicated AdGuard Home + Unbound resolver — that workload is genuinely light and the Pi is completely appropriate. The other runs Home Assistant OS, because HA on Pi is a proven, stable combination and I have no reason to move it.
For network-level services and home automation, Pi 5 is a great fit. For a multi-service homelab running ML workloads, media transcoding, and photo management, the x86 mini PC is the honest choice right now.
The Three-Year Cost of Ownership Math
Let me put actual numbers on this so you can run your own version.
Pi 5 Cluster (3-node)
- Hardware: Three Pi 5 8GB at roughly $110 CAD each = $330. Three Pimoroni NVMe Bases at ~$35 each = $105. Three 256GB NVMe drives at ~$55 each = $165. PoE hats + miscellaneous = ~$90. Total hardware: approximately $690 CAD.
- Electricity (3 years at $42.50/year): $127.50 CAD
- Drive replacement assumption (one drive over 3 years): $55 CAD
- Three-year total: approximately $872 CAD
N100 Mini PC (single node)
- Hardware: N100 mini PC 16GB/512GB, bought on sale for $285 CAD. This is realistic — these units fluctuate but $250–$320 CAD is the typical range right now.
- Electricity (3 years at $12/year): $36 CAD
- Drive replacement assumption (none expected in 3 years at current wear rate): $0
- Three-year total: approximately $321 CAD
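If you want to rerun this with your own prices, the model is just hardware plus three years of electricity plus expected drive replacements:

```shell
# Three-year total cost of ownership, using the figures above.
tco() {
  # $1 = hardware cost, $2 = electricity per year, $3 = drive replacements
  awk -v hw="$1" -v elec="$2" -v drives="$3" \
    'BEGIN { printf "%.2f\n", hw + elec * 3 + drives }'
}

tco 690 42.50 55   # Pi cluster:   872.50
tco 285 12    0    # N100 mini PC: 321.00
```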
The gap is real: roughly $550 CAD over three years. That’s not including my time debugging ARM compatibility issues, cluster networking headaches, or the two evenings I spent on the Jellyfin transcoding problem. If your time has any value, the single mini PC wins harder than the hardware math alone suggests.
The Counterargument Worth Taking Seriously
If you already own Pi 5 boards, the calculus changes. Sunk costs are sunk. If you have two boards sitting around and you’re deciding whether to buy a mini PC or put those boards to work, the answer might be different depending on your workloads. The cluster approach also gives you something the mini PC doesn’t: actual hardware fault tolerance if you set it up right. I just didn’t, and most homelabbers don’t.
The Honest Tradeoffs
The mini PC is better for my specific situation. I want to be clear that I’m not claiming it’s universally correct.
What I gave up by switching:
- The architecture story. “I run a three-node cluster” is more interesting to talk about than “I run a mini PC.” That’s shallow, but it’s real, and I’m being honest.
- Physical redundancy. If the mini PC dies, everything is down until I fix it. With the cluster, I at least had the option of building real redundancy — I just didn’t follow through on it.
- ARM familiarity. Running ARM in production at home is good experience. The ecosystem is still a few years behind x86: growing fast, occasionally rough. There's value in knowing it.
- The quiet pride of a Pi cluster. The acrylic rack looked genuinely cool on the shelf. The mini PC looks like a small black box, because that’s what it is.
What I don’t miss:
- Debugging why a container image doesn’t have an ARM64 variant
- Managing Swarm networking for stateful services
- Worrying about three separate points of failure
- The Jellyfin transcoding configuration
- Watching Immich process photos at a speed that made me question my life choices
The other thing I’ll say honestly: had I been doing this in 2023, before the Pi 5 existed, the Pi 4 versus mini PC math would have been even more lopsided toward the mini PC. The Pi 5 is a genuinely capable board. It closed the gap significantly. But “closed the gap” isn’t the same as “better for always-on multi-service homelabs,” at least not yet, and not for the specific workloads I run.
What I Would Do Differently
I’d start with the mini PC and a single Pi 5 for DNS and Home Assistant from day one. The Pi cluster was a fun project that taught me real things about distributed systems, container orchestration, and ARM quirks. I don’t regret doing it. I do regret spending $690 CAD to learn lessons I could have learned more cheaply.
I’d also set up proper offsite backups earlier. The SSD wear situation made me nervous in a way that accelerated my 3-2-1 backup implementation — one of the few things I’m genuinely grateful the cluster anxiety pushed me toward.
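The post doesn't hinge on any particular backup tool, but for anyone wanting a concrete starting point, restic is one common choice and a nightly 3-2-1 job can be as small as this (repository, password file, and data paths are all illustrative, not my actual setup):

```shell
# Nightly offsite backup with restic to a Backblaze B2 bucket
# (hypothetical names throughout; swap in your own repository).
export RESTIC_REPOSITORY=b2:my-homelab-backups:/
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic backup /srv/nextcloud /srv/immich /srv/vaultwarden
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```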
If you’re starting fresh today and your goal is reliable always-on home services rather than learning distributed systems specifically, buy the N100 mini PC, put 32GB of RAM in it if the budget allows (Intel specs the N100 at 16GB max, but single 32GB modules are widely reported to work), and spend the money you save on a good UPS. That’s the boring answer, and it’s the right one.
If you’re already running Pi 5 boards and your services are working fine, don’t let this post talk you into unnecessary spending. Measure your actual power draw, check your SSD SMART data, and make the decision based on your own numbers — not mine.
Check your SMART data today if you haven’t recently. It takes five minutes and it’s the single most useful thing you can do for long-term storage health on any always-on system. The command you want is smartctl -a /dev/nvme0 — run it, look at the wear indicators, and stop guessing.
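If you want just the two numbers that matter, you can filter the output. NVMe reports writes in “data units” of 512,000 bytes, so TBW is data units written times 512,000 divided by 10^12:

```shell
# Summarize NVMe wear from smartctl output: vendor wear estimate
# (Percentage Used) and lifetime writes converted to TBW.
nvme_wear_summary() {
  awk -F: '
    /Percentage Used/    { gsub(/[ %]/, "", $2); print "wear: " $2 "%" }
    /Data Units Written/ { gsub(/[ ,]/, "", $2); sub(/\[.*/, "", $2)
                           printf "written: %.1f TBW\n", $2 * 512000 / 1e12 }'
}

# Usage: smartctl -a /dev/nvme0 | nvme_wear_summary
```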
