
Disclosure: As an Amazon Associate, Auburn AI earns a small commission on qualifying purchases at no extra cost to you.

There’s a recurring pattern in homelab communities: someone gets their first few containers running smoothly on Docker Compose, discovers Kubernetes exists, and spends the next three weekends rebuilding everything in k3s or kind, only to end up with a more complicated system that does exactly the same thing. It’s not that Kubernetes is bad. It’s genuinely excellent at solving specific problems at specific scale. The issue is that those problems almost never show up in a home environment running 10-30 services on one or two machines. This post lays out a practical decision framework for self-hosters trying to figure out which tool actually fits their situation, without the cloud-native enthusiasm that tends to colour most comparisons you’ll find online.
What You’re Actually Comparing
Docker Compose and Kubernetes are not really competitors doing the same job at different scales. They’re different categories of tool that happen to both run containers.
Docker Compose is a local orchestration tool. You write a docker-compose.yml file, run docker compose up -d, and your services start. Networking between containers is automatic. Volumes are straightforward. The mental model maps closely to what’s actually happening on the machine. A single file can describe a full application stack (Postgres, Redis, a web app, a reverse proxy), and anyone who reads that file can understand the deployment in five minutes.
Kubernetes is a distributed cluster management platform. Its job is to schedule workloads across multiple nodes, self-heal when pods crash, roll out updates with zero downtime, and scale horizontally based on load. To do those things reliably at scale, it introduces a substantial control plane: an API server, etcd (a distributed key-value store), a scheduler, a controller manager, and on each node, kubelet and a container runtime. Even the lightweight distributions (k3s, k0s, MicroK8s) carry meaningful overhead compared to Compose.
The honest comparison isn’t “simple vs. powerful.” It’s “right tool vs. wrong context.”
The Real Overhead of Running Kubernetes at Home
Let’s put some actual numbers on this. A single-node k3s install on a modest machine will consume roughly 500-700 MB of RAM just for the control plane components at idle. That’s before you add any workloads. On a box with 8 GB of RAM running Plex, Home Assistant, a VPN, a couple of databases, and a few other services, that’s nearly 10% of your memory committed to infrastructure management.
Docker Engine with the Compose plugin running equivalent workloads uses maybe 50-80 MB for the daemon itself. The rest goes to your actual services.
There’s also the operational surface area. A working Kubernetes cluster requires you to understand (or at minimum not break) concepts including:
- Pods, Deployments, StatefulSets, DaemonSets
- Services (ClusterIP, NodePort, LoadBalancer)
- Ingress controllers and IngressClass resources
- ConfigMaps and Secrets
- Persistent Volumes and Persistent Volume Claims
- Namespaces and RBAC
- Helm charts and their values files
None of that is hard to learn in isolation. But when something breaks at 11pm and your network-attached storage isn’t mounting because a PersistentVolumeClaim is stuck in Pending state due to a StorageClass misconfiguration, the debugging path is much longer than it would be with a Compose volume that simply points to /mnt/data/appname.
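When a claim does get stuck, the triage path looks roughly like this. The commands are standard kubectl; the namespace and claim name are hypothetical:

```shell
# Find the stuck claim: look for STATUS Pending.
kubectl get pvc -A

# The Events section at the bottom usually names the actual problem.
kubectl describe pvc appdata -n media

# Check that a default StorageClass exists and matches what the claim requested.
kubectl get storageclass

# Recent events in the namespace, newest last.
kubectl get events -n media --sort-by=.lastTimestamp
```

Compare that to `ls /mnt/data/appname` and `docker logs`, and the cognitive-load point above becomes concrete.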
From our experience, the hidden cost of Kubernetes in a home environment isn’t the initial setup; it’s the ongoing cognitive load every time you want to add a service, update a config, or troubleshoot something at an inconvenient hour.
A Practical Decision Framework
Rather than asserting one tool is universally better, here are the actual questions worth asking before committing to either path.
How many physical or virtual nodes do you have?
If you have one machine, or one main server with a NAS attached, Compose is almost certainly the right answer. Kubernetes’s scheduling intelligence is wasted when there’s nowhere to schedule workloads except one node. You get all the complexity and none of the distributed-systems benefits.
If you have three or more nodes-say, a cluster of Raspberry Pi 4s or a few mini PCs-Kubernetes starts making more sense. k3s was specifically designed for this kind of resource-constrained multi-node setup, and it runs reasonably well on ARM64. The k3s project documents a single-server minimum of 512 MB RAM and 1 CPU, though 1 GB is more realistic in practice.
Do you need automatic failover?
Kubernetes will notice a crashed pod and reschedule it, potentially on a different node, without you doing anything. This is genuinely useful if you’re running a cluster where individual node failures are expected. For a single server homelab, Docker’s own restart: unless-stopped policy in Compose handles the common case (process crash, daemon restart after reboot) just fine.
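For reference, that policy is a single line in Compose, optionally paired with a healthcheck so a wedged (but not crashed) process at least gets flagged. The service name and check command below are illustrative; note that Docker marks an unhealthy container in docker ps but won’t restart it on health alone without a helper such as willfarrell/autoheal:

```yaml
services:
  webapp:                          # illustrative service
    image: nginx:1.27-alpine
    restart: unless-stopped        # restarts on crash and after host reboot
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
```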
Are you learning Kubernetes for professional reasons?
This is the one scenario where running Kubernetes at home absolutely makes sense even when it’s operationally overkill. If you’re working toward a CKA certification, or your workplace is migrating workloads to EKS or GKE, having a local k3s cluster to experiment with is legitimate value. Just be clear that you’re running it as a learning environment, not because it’s the right tool for home service hosting.
How much time do you want to spend on infrastructure vs. using your services?
This is the most honest question. Some people genuinely enjoy tinkering with infrastructure. That’s fine; homelab is partly about that. But if your goal is a reliable Jellyfin server and a self-hosted password manager that your family can actually use, the maintenance overhead of a Kubernetes cluster will regularly eat into time you could spend doing something else.
What Docker Compose Does Well That Often Gets Overlooked
Compose tends to get framed as the “beginner” option, which undersells it. A well-structured Compose setup can handle a serious homelab workload with minimal friction.
Compose files are readable documentation
A docker-compose.yml file is self-documenting in a way that a Helm chart values file usually isn’t. Here’s a minimal but real example of what a typical homelab service might look like:
```yaml
services:
  vaultwarden:
    image: vaultwarden/server:1.30.5
    container_name: vaultwarden
    restart: unless-stopped
    volumes:
      - /mnt/data/vaultwarden:/data
    environment:
      DOMAIN: "https://vault.yourdomain.ca"
      SIGNUPS_ALLOWED: "false"
    networks:
      - proxy

  caddy:
    image: caddy:2.8.4-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    networks:
      - proxy

networks:
  proxy:
    external: false

volumes:
  caddy_data:
```
Anyone who’s spent an afternoon with Docker can read that and know exactly what’s running, what ports are exposed, and where data is stored. That clarity matters when you’re debugging at midnight or handing the system off to someone else.
Updating is two commands
Pulling a new image and restarting a service is:
```shell
docker compose pull vaultwarden
docker compose up -d vaultwarden
```
Kubernetes rolling updates are more sophisticated, yes. But for a home service where 30 seconds of downtime during a manual update is entirely acceptable, that sophistication costs more than it’s worth.
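The same idea generalizes to the whole stack in one pass. This is a sketch of a habit, not a prescribed workflow:

```shell
#!/bin/sh
# Whole-stack update pass for one Compose project directory.
set -eu
docker compose pull        # fetch newer images for every service
docker compose up -d       # recreates only containers whose image changed
docker image prune -f      # drop the superseded image layers
```

Run it from the directory containing the docker-compose.yml; services already on the latest image are left untouched.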
Secrets and environment variables are straightforward
Kubernetes Secrets are base64-encoded, not encrypted: unless you explicitly configure encryption at rest, they sit in etcd in trivially decodable form, which is a common gotcha. They also require careful RBAC configuration to protect, and they add ceremony around something that a well-managed .env file with appropriate filesystem permissions handles adequately for most home environments. Under PIPEDA, if you’re handling any personal information of Canadians, you need reasonable safeguards, but “reasonable” for a single-person homelab doesn’t require a secrets management platform.
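A minimal sketch of the .env approach; the variable name and value are purely illustrative:

```shell
# Keep secrets next to the stack but out of git and out of other users' reach.
cat > .env <<'EOF'
ADMIN_TOKEN=change-me-illustrative-value
EOF
chmod 600 .env             # owner read/write only
echo ".env" >> .gitignore  # never commit it
stat -c '%a' .env          # prints 600 on Linux
```

Compose reads .env automatically for variable substitution, so `${ADMIN_TOKEN}` inside docker-compose.yml resolves without the file ever being referenced explicitly.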
When Kubernetes in the Homelab Is the Right Call
This post has leaned toward Compose, and that reflects where most self-hosters actually sit. But Kubernetes has legitimate homelab use cases.
Multi-node clusters with real redundancy goals. If you’ve built a three-node cluster and genuinely want workloads to survive a node failure (maybe you’re running a small business service, or something a few other people depend on), Kubernetes earns its complexity here. k3s with an embedded etcd datastore handles this reasonably well starting at three control-plane nodes.
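Bootstrapping that three-node embedded-etcd topology is short. The token and hostname below are placeholders; the flags are from the k3s high-availability docs:

```shell
# On the first server: initialize the embedded etcd cluster.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> \
  sh -s - server --cluster-init

# On the second and third servers: join as additional control-plane nodes.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> \
  sh -s - server --server https://node1.example.lan:6443
```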
GitOps workflow practice. Tools like Flux and Argo CD operate on Kubernetes and let you declare your entire cluster state in a Git repository, with automatic reconciliation. If you want to learn this workflow (which is increasingly standard in professional infrastructure), a homelab Kubernetes cluster is a reasonable place to practice it.
You already know Kubernetes and Compose feels limiting. If you’re coming from a professional background where Kubernetes is your daily driver, Compose’s lack of built-in health-check-based rollout controls and pod disruption budgets may genuinely frustrate you. Use what you know.
Specific workloads that benefit from Kubernetes primitives. CronJobs in Kubernetes are first-class objects with proper scheduling semantics. StatefulSets give you ordered pod management that matters for some database clustering setups. If you’re specifically building around these patterns, Kubernetes isn’t overkill; it’s appropriate.
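As a concrete example of that first point, here is a nightly job as a batch/v1 CronJob; the name, schedule, image, and command are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "30 3 * * *"             # 03:30 daily, in the controller's time zone
  concurrencyPolicy: Forbid          # never let runs overlap
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: alpine:3.20
              command: ["sh", "-c", "echo running backup"]
```

Missed-run handling, overlap policy, and retry semantics all come for free, which is exactly what a bare cron entry in a container doesn’t give you.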
A Sensible Migration Path If You Want Both
You don’t have to choose once and never revisit. Our reading of how experienced homelabbers tend to land is roughly this progression:
- Start with Compose. Get services running, understand volume mounts, networking, and the update cycle.
- Standardize your Compose files: one directory per application, consistent naming, all configuration in version-controlled files (minus secrets), data volumes on a dedicated mount point.
- If you acquire more hardware and want to experiment with Kubernetes, stand up a separate k3s cluster rather than migrating production services immediately. Run learning workloads there.
- Only migrate services to Kubernetes when you have a specific reason tied to something Kubernetes actually does better for that workload.
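Concretely, the standardization step above tends to end up looking like this (names illustrative):

```
~/stacks/
├── vaultwarden/
│   ├── docker-compose.yml
│   └── .env              # mode 600, not in git
├── jellyfin/
│   └── docker-compose.yml
└── caddy/
    ├── docker-compose.yml
    └── Caddyfile
/mnt/data/                # dedicated data mount
├── vaultwarden/
└── jellyfin/
```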
What we found surprising talking to people who’ve gone through this is how many of them end up running both: Compose for the stable, boring production services (Jellyfin, Home Assistant, Vaultwarden, Nextcloud), and a Kubernetes cluster for experimentation and anything that benefits from multi-node scheduling. That’s not a compromise; it’s using each tool for what it’s good at.
The Honest Summary
If you’re running a single server (whether that’s an old workstation in a Calgary basement, a mini PC on a shelf, or a cloud VPS), Docker Compose will handle your homelab without complaint for years. It’s not a stepping stone you need to graduate from. It’s a production-grade tool used by professionals running real workloads, and its simplicity is a feature, not a limitation.
Kubernetes is worth learning if you have professional reasons to learn it, more than two nodes to work with, or specific workload requirements that Compose can’t meet. It’s not worth running just because it’s more technically impressive.
Pick the thing that keeps your services up and your weekend free.
– Auburn AI editorial, Calgary AB
Related Auburn AI Products
Building a homelab or self-hosting content site? Auburn AI has practical kits:
- 500 Homelab and Self-Hosting Blog Titles ($27)
- Auburn AI Monitoring Stack ($37) – 6 production PowerShell scripts
- Podcast Automation Kit ($37)
- Browse all Auburn AI products
Books and gear for going deeper
- Beelink mini PC for homelab — quiet, cheap, runs anything
- Kubernetes Up & Running — the standard intro book if you graduate to k8s
- Docker Deep Dive — goes beyond Compose into networking and storage
- 16GB DDR4 SODIMM — common upgrade for older mini PCs
The Operational Reality: What Kubernetes Costs You That Nobody Mentions
I’ve run both stacks in my Calgary lab for extended periods, and the honest friction isn’t just the learning curve; it’s operational burden disguised as infrastructure maturity. Kubernetes demands you become a platform operator, not just a self-hoster. That means learning YAML schemas deeply enough to debug why a pod won’t schedule, understanding resource requests well enough to prevent node thrashing, and maintaining cluster health across updates.
Docker Compose let me add a new service in 15 minutes. Kubernetes took me three hours the first time, partly because I had to wrestle with networking policies and persistent volume claims. The second time was faster, sure. But I also had to maintain a mental model of how secrets propagate, how service discovery resolves differently than with Compose, and why my ingress controller sometimes silently dropped traffic.
There’s a real cost to this complexity that compounds quietly:
- Cluster upgrades eat a weekend. Compose updates take minutes.
- Debugging a failed deployment requires understanding multiple abstraction layers instead of reading logs from one container.
- You’ll eventually need to manage persistent storage properly, which is where Kubernetes’s elegance becomes a second job.
- A three-node cluster for redundancy means power costs and thermal load that don’t always justify the reliability gain for non-critical services.
Kubernetes shines when you’re running dozens of interdependent services or need genuine high availability. For a homelab running 5-12 services, Compose keeps you focused on your actual projects instead of cluster maintenance. That’s not a limitation of Kubernetes; it’s just mismatched scope.
