
When I was setting up my own home lab running Nextcloud, Immich, and Paperless-ngx side by side, I kept copying compose file snippets from the official docs without questioning them. It wasn’t until I noticed my Redis containers were writing gigabytes of AOF logs to disk that I stopped and asked myself — wait, isn’t this supposed to be a cache? Digging into the community discussion around why everyone persists Redis even for purely ephemeral workloads opened up a rabbit hole I wasn’t expecting. What I found changed how I configure every single Redis and Valkey instance in my home server stack.
Key Takeaways
- Most official Docker Compose examples enable Redis persistence by default, but this is often unnecessary when Redis is used purely as a volatile cache.
- Disabling AOF and RDB persistence with --save "" --appendonly no reduces disk I/O, improves container security, and keeps your stack truly ephemeral.
- Some self-hosted apps like Nextcloud and Immich store session tokens and job queue state in Redis, making persistence genuinely valuable in production setups.
- Valkey, the open-source Redis fork, is now the community-preferred drop-in replacement and supports all the same persistence flags.
- The right choice depends on how the application actually uses Redis — cache-only workloads need no persistence, but queue or session workloads benefit from it.
What Is Happening: The Community Debate
A thread that surfaced recently in the r/selfhosted subreddit has reignited a long-standing question in the home lab community: why is everyone persisting Redis even when the application only uses it as a cache? The original poster noticed that virtually every popular self-hosted application — Immich, Nextcloud, Paperless-ngx, and dozens more — ships its official Docker Compose example with either appendonly yes enabled or a bind-mounted redis-data volume. The poster was already running a hardened, ephemeral Valkey configuration with persistence fully disabled and wanted to know what they were missing.
The thread drew hundreds of responses from experienced self-hosters, DevOps engineers, and home lab veterans. The consensus was nuanced: persistence in Redis is not always wrong, but it is frequently cargo-culted from production templates into home lab environments where it adds unnecessary complexity, disk wear, and attack surface without meaningful benefit.
This is a genuinely important operational question. Redis is one of the most widely deployed in-memory data stores in the self-hosting ecosystem, and how you configure it has real implications for container security, disk I/O, NVMe longevity, and application restart behavior. Understanding the difference between a true cache workload and a session or queue workload is the key to getting this right.
Why Is Everyone Persisting Redis Even for Cache-Only Workloads?
The short answer is: habit, copy-paste culture, and defensive defaults. Official documentation for large projects tends to be written by developers who want their software to survive edge cases in production. Enabling appendonly yes in a Redis config costs almost nothing in a cloud environment with fast SSDs and unlimited storage — so it becomes the default. When that same config is copied into a home lab Docker Compose file, the persistence settings come along for the ride, even if they serve no real purpose.
The AOF and RDB Persistence Modes Explained
Redis supports two primary persistence mechanisms. RDB (Redis Database) takes point-in-time snapshots of the dataset at configurable intervals, writing a compact binary file to disk. AOF (Append-Only File) logs every write operation to a file that can be replayed on restart. AOF provides much stronger durability guarantees — data loss is limited to at most 1 second of writes with the default fsync everysec setting — but it generates significantly more disk I/O and produces files that grow continuously until a rewrite is triggered.
For a pure cache workload, neither of these mechanisms provides meaningful value. If Redis restarts and the cache is empty, the application simply repopulates it from the primary database. The worst-case outcome is a brief performance degradation lasting a few seconds to a few minutes while the cache warms up. That is an entirely acceptable trade-off for most home lab deployments.
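For reference, a pure-cache service with persistence fully disabled can be sketched in Docker Compose like this. The image tag, memory cap, and eviction policy are illustrative choices on my part, not values prescribed by the community thread:

```yaml
# Cache-only Redis: no RDB snapshots, no AOF log, bounded memory.
services:
  redis-cache:
    image: redis:7-alpine
    command:
      - redis-server
      - --save
      - ""                # empty save list disables RDB snapshots
      - --appendonly
      - "no"              # disables the AOF log
      - --maxmemory
      - 256mb             # illustrative cap; tune to your workload
      - --maxmemory-policy
      - allkeys-lru       # evict least-recently-used keys when memory is full
    restart: unless-stopped
```

No volume is declared at all — if nothing is persisted, there is nothing to mount, and a container restart simply starts with an empty keyspace that the application repopulates.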
The Cargo-Cult Configuration Problem
In a real home lab setup, the consequences of blindly copying production Redis configs are more significant than they first appear. AOF logging on a busy Nextcloud instance can generate write amplification of 3x to 5x compared to the actual data being cached, accelerating wear on consumer NVMe drives. On a Raspberry Pi or low-power ARM board with an SD card or USB-attached SSD, this is a genuine concern. Based on community experience, many home labbers running Redis with AOF enabled on SD-backed storage have reported card failures within 12 to 18 months of continuous operation.
There is also a security dimension. A Redis container with a writable bind-mounted volume requires broader filesystem permissions, increases the blast radius of a container escape, and makes it harder to run the container in read_only: true mode with cap_drop: ALL — hardening options that are trivially easy to apply when persistence is disabled.
When Redis Persistence Actually Matters in a Home Lab
What actually works in practice is a case-by-case evaluation of what the application is storing in Redis. Not everything labeled a cache is actually ephemeral.
Session Tokens and Authentication State
Nextcloud, for example, stores user session tokens in Redis when configured to do so. If Redis restarts without persistence, every logged-in user is immediately logged out. For a single-user home lab, this is a minor annoyance. For a family server with 10 or 15 active users, it becomes a support ticket. In this scenario, enabling RDB snapshots — without full AOF — is a reasonable compromise that provides session recovery with minimal write amplification.
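As a sketch of that compromise, an RDB-only service might look like the following. The snapshot rule (60 seconds and 1,000 changed keys) and the volume name are illustrative assumptions, not settings from the thread:

```yaml
# RDB-only persistence: periodic snapshots, no AOF write amplification.
services:
  redis-sessions:
    image: redis:7-alpine
    command:
      - redis-server
      - --save
      - 60 1000           # snapshot if at least 1000 keys changed in 60 seconds
      - --appendonly
      - "no"              # AOF stays off; snapshots are enough for sessions
    volumes:
      - redis-sessions-data:/data   # snapshots are written to /data as dump.rdb
    restart: unless-stopped

volumes:
  redis-sessions-data:
```

A named volume is deliberately used here instead of a bind mount, which keeps filesystem permissions simpler and the blast radius smaller.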
Job Queues and Background Task State
Immich uses Redis (or Valkey) as a job queue backend via the BullMQ library. Active photo processing jobs, face recognition tasks, and thumbnail generation pipelines are tracked in Redis. If Redis loses its state mid-job, those tasks are either orphaned or must be resubmitted manually. In this case, persistence is not about caching at all — it is about queue durability. Disabling persistence here can result in stuck or silently dropped background jobs after an unexpected container restart.
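For a queue-backed deployment like this, a sketch with AOF enabled might look like the following. The image tag and the explicit fsync setting are my assumptions; everysec is simply Redis and Valkey's default trade-off between durability and throughput:

```yaml
# Queue durability: AOF on, fsync once per second (at most ~1s of lost writes).
services:
  valkey-queue:
    image: valkey/valkey:8-alpine
    command:
      - valkey-server
      - --appendonly
      - "yes"             # log every write so queue state survives restarts
      - --appendfsync
      - everysec          # default durability/throughput balance
    volumes:
      - valkey-queue-data:/data
    restart: unless-stopped

volumes:
  valkey-queue-data:
```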
Pure Cache Workloads Where Persistence Is Genuinely Unnecessary
Paperless-ngx uses Redis primarily as a Celery task broker and result backend. The tasks themselves are short-lived and re-queueable. A restart causes a brief delay but no data loss. Similarly, many reverse proxy and rate-limiting use cases store only transient counters in Redis with TTLs measured in seconds. For these workloads, disabling both AOF and RDB is the correct call. The official Redis persistence documentation itself acknowledges that for pure caching use cases, persistence should be disabled entirely.
Persisting Redis Even When You Shouldn’t: The Secure Config Fix
The hardened, ephemeral Valkey configuration that sparked the original community discussion is worth examining in detail. Running Valkey with --save "" --appendonly no disables both RDB snapshots and AOF logging. Setting user: 1000:1000 drops root privileges. Enabling read_only: true and cap_drop: ALL removes all Linux capabilities and prevents any filesystem writes. This is a genuinely excellent baseline for cache-only workloads and represents best practice for container hardening in 2026.
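Put together, a hardened ephemeral service along those lines might look like this sketch. The no-new-privileges option and the tmpfs mount are my additions — the tmpfs is purely defensive, giving the process a RAM-backed /data in case anything ever tries to write to its working directory:

```yaml
# Ephemeral, hardened Valkey: no persistence, no root, no capabilities.
services:
  valkey-cache:
    image: valkey/valkey:8-alpine
    command:
      - valkey-server
      - --save
      - ""                # no RDB snapshots
      - --appendonly
      - "no"              # no AOF log
    user: "1000:1000"     # run as an unprivileged user, not root
    read_only: true       # container filesystem is immutable
    cap_drop:
      - ALL               # drop every Linux capability
    security_opt:
      - no-new-privileges:true
    tmpfs:
      - /data             # RAM-backed working directory; vanishes on restart
    restart: unless-stopped
```

Because nothing touches disk, this container can be destroyed and recreated at will — exactly the property you want from a cache.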
The key insight from the community thread is that the decision should be intentional, not accidental. If you are running Redis as a pure cache, disable persistence explicitly. If your application uses Redis for session storage or job queues, enable the minimum persistence mode required — typically RDB-only with a snapshot interval of 60 seconds or longer — and accept the trade-off consciously.
Top 5 Home Lab Servers and Storage Devices for Running Redis Workloads
Whether you are running Redis with or without persistence, the underlying hardware matters. Here are five home lab platforms that handle Redis workloads particularly well, from low-power single-board computers to full mini-PC servers.
1. Raspberry Pi 5 (8GB)
Specs: Broadcom BCM2712 quad-core Cortex-A76 at 2.4GHz, 8GB LPDDR4X RAM, PCIe 2.0 x1 for NVMe via HAT, USB 3.0, Gigabit Ethernet.
Pros: Extremely low idle power draw of around 3 to 5 watts, massive community support and Docker ecosystem, PCIe NVMe support eliminates SD card write-wear concerns for persistent Redis workloads.
Cons: Single PCIe lane limits NVMe throughput to approximately 400MB/s sequential read.
Best for: Low-power always-on home lab nodes running ephemeral Redis caches for Nextcloud or Paperless-ngx.
2. Intel NUC 13 Pro (NUC13ANKi5)
Specs: Intel Core i5-1340P (12 cores, up to 4.6GHz), up to 64GB DDR4 RAM, dual M.2 NVMe slots, 2.5GbE + Wi-Fi 6E, Thunderbolt 4.
Pros: Dual NVMe slots allow separating Redis AOF logs onto a dedicated drive, strong single-threaded performance for Redis operations, compact form factor under 1 liter.
Cons: Higher idle power consumption of 15 to 25 watts compared to ARM alternatives.
Best for: Multi-container home labs running Immich, Nextcloud, and other apps with Redis job queues requiring genuine persistence.
3. Beelink EQ12 Mini PC
Specs: Intel N100 quad-core at up to 3.4GHz, 16GB DDR5 RAM, 500GB NVMe SSD, dual 2.5GbE, dual HDMI 2.0.
Pros: Exceptional value at under $200, DDR5 memory reduces latency for in-memory Redis operations, dual Ethernet enables dedicated management and data networks.
Cons: Single M.2 slot limits storage expansion without USB enclosures.
Best for: Budget-conscious home labbers who want a capable always-on Redis host without spending NUC money.
4. Samsung 990 Pro NVMe SSD (2TB)
Specs: PCIe 4.0 x4, sequential read up to 7,450MB/s, sequential write up to 6,900MB/s, 1,200 TBW endurance rating, 5-year warranty.
Pros: Industry-leading write endurance makes it suitable for persistent Redis AOF workloads, hardware encryption support, consistent sustained write performance.
Cons: Premium pricing compared to PCIe 3.0 alternatives.
Best for: Home lab nodes where Redis persistence is intentionally enabled and write endurance is a priority.
5. Synology DS923+ NAS
Specs: AMD Ryzen R1600 dual-core at 2.6GHz, 4GB DDR4 ECC RAM (expandable to 32GB), 4-bay 3.5-inch SATA, dual M.2 NVMe cache slots, dual 1GbE (10GbE via expansion).
Pros: Dedicated NVMe cache slots allow Redis data volumes to land on flash while bulk storage uses spinning disks, ECC RAM protects in-memory Redis dataset integrity, excellent Docker support via Container Manager.
Cons: Higher upfront cost and AMD Ryzen R1600 is not the fastest for CPU-bound container workloads.
Best for: Home lab users who want NAS storage and a Redis-capable Docker host in a single appliance.
Comparison Table
| Device | CPU Cores | Max RAM | NVMe Slots | Idle Power | Best Redis Use Case | Approx. Price |
|---|---|---|---|---|---|---|
| Raspberry Pi 5 (8GB) | 4 | 8GB | 1 (via HAT) | 3–5W | Ephemeral cache | ~$80 |
| Intel NUC 13 Pro | 12 | 64GB | 2 | 15–25W | Job queues + sessions | ~$450 |
| Beelink EQ12 | 4 | 16GB | 1 | 8–12W | Budget all-rounder | ~$180 |
| Samsung 990 Pro 2TB | N/A | N/A | N/A | N/A | High-endurance AOF storage | ~$160 |
| Synology DS923+ | 2 | 32GB | 2 (cache) | 20–30W | NAS + Docker combo | ~$600 |
Best Overall Pick for Redis Home Lab Hosting
The Intel NUC 13 Pro is the best overall home lab platform for running Redis workloads that span both ephemeral and persistent use cases. Here is exactly why it wins. The dual M.2 NVMe slots allow you to place Redis AOF or RDB data volumes on a dedicated secondary drive, completely isolating cache write amplification from your primary OS and application storage. The 12-core Intel Core i5-1340P handles multiple simultaneous Docker containers — Immich, Nextcloud, Paperless, Vaultwarden, and more — without the CPU contention that causes Redis latency spikes on lower-powered hosts. With up to 64GB of DDR4 RAM, you can allocate generous maxmemory limits to each Redis or Valkey instance, ensuring the LRU eviction policy never has to fight for headroom. In a real home lab setup running six to eight self-hosted applications simultaneously, the NUC 13 Pro handles the load with room to spare while staying under 35 watts at typical utilization. It is the platform that lets you configure Redis correctly — with or without persistence — rather than working around hardware constraints.
Community Reaction and Forum Takeaways
The r/selfhosted thread generated a remarkably high signal-to-noise ratio. Several experienced contributors pointed out that the cargo-cult persistence problem is not unique to Redis — it appears across the entire self-hosting ecosystem wherever production configs are copy-pasted without adaptation. Based on community experience, the most upvoted responses clustered around three key points.
First, official documentation is written for the lowest common denominator. Developers enabling persistence by default are protecting against the user who does not read the docs, not prescribing best practice for informed home labbers. Second, the distinction between a Redis cache and a Redis queue or session store is critical and often poorly documented by the application itself. Third, the hardened Valkey configuration in the original post — with read_only: true, cap_drop: ALL, and persistence disabled — was widely praised as a model for ephemeral cache containers and generated significant discussion about container security hardening as a first-class concern in home lab Docker stacks.
What actually works in practice, according to the thread’s most experienced contributors, is auditing each application’s Redis usage individually. Check whether the app stores session data, job queue state, or purely derived cache data in Redis. Then configure persistence accordingly — and document your decision in a comment in the compose file so you remember why you made it six months later. This is also a great opportunity to explore Docker Compose security hardening for your entire self-hosted stack.
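That documentation habit can be as simple as a comment block above the service in your compose file. The wording and date below are just an example of the pattern:

```yaml
services:
  # Persistence decision (2026-01): Paperless-ngx only uses this instance as a
  # Celery broker; all tasks are re-queueable, so RDB and AOF are intentionally
  # disabled. Revisit if the app ever starts storing sessions or queues here.
  broker:
    image: valkey/valkey:8-alpine
    command: ["valkey-server", "--save", "", "--appendonly", "no"]
    restart: unless-stopped
```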
For those running Valkey specifically, the community consensus is that Valkey is now the preferred drop-in replacement for Redis in home lab environments following the Redis license change in 2024. It supports all the same persistence flags, runs identically in Docker Compose, and is actively maintained under a fully open-source license. You can also check out our guide on Valkey versus Redis for self-hosted applications for a deeper comparison. If you are building a new stack from scratch in 2026, start with Valkey. And if you are curious about optimizing your overall stack, our breakdown of Nextcloud Docker performance optimization covers Redis configuration in the context of a full production-style home lab deployment.
Frequently Asked Questions
What is the best Redis configuration for a home lab cache?
For a pure cache workload, disable both AOF and RDB persistence with --save "" --appendonly no. This keeps the container fully ephemeral, eliminates unnecessary disk I/O, and allows you to run the container with read_only: true and cap_drop: ALL for a hardened security posture.
How do I know if my application actually needs Redis persistence?
Audit what the application stores in Redis. Session tokens and job queue state require persistence. Derived cache data that can be regenerated from a primary database does not. When in doubt, check the application’s documentation for its Redis usage pattern, or inspect the Redis keyspace with redis-cli --scan and look at TTL values — keys with no TTL are likely not pure cache entries.
Do I need Redis or can I just use Valkey instead?
Valkey is a fully compatible drop-in replacement. Swap redis:latest for valkey/valkey:latest in your Compose file and all flags, configuration options, and client libraries work identically. Valkey is actively maintained under an open-source license and is the community-preferred choice for new home lab deployments in 2026.
What is the difference between AOF and RDB persistence in Redis?
RDB takes periodic snapshots of the full dataset as a compact binary file. AOF logs every write operation and can limit data loss to approximately 1 second with the default fsync everysec setting. AOF is more durable but generates far more disk writes, making it inappropriate for cache workloads on write-sensitive storage like SD cards or budget NVMe drives with low TBW ratings.
Conclusion: Configure Redis With Intent, Not Habit
The question of why everyone persists Redis even for cache-only workloads has a clear answer: it is habit, defensive defaults, and copy-paste culture. But the right answer for your home lab depends entirely on what your application actually stores in Redis. Pure cache workloads — anything that can be regenerated from a primary database without user impact — should run with persistence fully disabled and containers fully hardened. Session stores and job queues deserve at minimum RDB snapshot persistence configured with a conscious trade-off in mind.
The community thread that sparked this analysis is a reminder that questioning the defaults in your Docker Compose files is one of the highest-value habits you can develop as a home labber. Every bind mount, every capability, every persistence flag is a choice — and the best home lab stacks are the ones where those choices are intentional.
How are you configuring Redis or Valkey in your home lab? Are you running ephemeral cache containers, or have you found a use case where persistence genuinely saved you? Drop your compose snippets and configuration choices in the comments below — the HomeNode community learns best when we share what actually works in practice.