
As an Amazon Associate, HomeNode earns from qualifying purchases.
When I was setting up my own home lab last winter, I kept a Raspberry Pi 4 on the corner of my desk running various experiments — but nothing grabbed my attention quite like the moment I stumbled across TinyProgrammer on r/raspberry_pi. After running it for a few weeks on both a Pi 4 and a spare Pi Zero 2 W I had sitting in a drawer, I can tell you this project is one of the most genuinely charming and technically interesting things I have seen come out of the single-board computer community in years. It is the kind of build that makes you stop mid-coffee and just watch the screen for five minutes. If you have been looking for a creative weekend project that actually teaches you something about local inference pipelines, retro display design, and networked Pi setups, this is it.
Key Takeaways
- TinyProgrammer is an open-source (GPL-3.0) Raspberry Pi project where a device autonomously writes Python programs, takes BBS social breaks, and shuts down at night with a Starry Night screensaver.
- The project supports both cloud-based OpenRouter endpoints and fully local inference, making it flexible for Pi 4 builds with more RAM and lightweight Pi Zero 2 W setups.
- A built-in web dashboard lets you tweak prompts, program types, color filters, and BBS connectivity without touching a config file directly.
- Multiple devices can connect to a shared BBS network where each Pi has a unique personality, posts jokes, critiques code, and reacts to daily news.
- The best hardware for this build ranges from the budget Pi Zero 2 W at around $15 to the Pi 4 4GB at roughly $55, with display and storage accessories bringing a complete build to roughly $50–$90 depending on your choices.
What Is TinyProgrammer and Why Does It Matter?
Someone on r/raspberry_pi made a tiny device that writes Python code autonomously, and the community absolutely lost it — in the best possible way. The project is called TinyProgrammer, and at its core it is a Raspberry Pi sitting on your desk running an inference pipeline connected to either a local language model or a cloud endpoint via OpenRouter. The device types code at a simulated human speed, makes deliberate mistakes, self-corrects, and even shifts between different moods that affect how it codes. The display mimics a classic Mac IDE aesthetic during coding sessions, then flips to a green-on-black retro terminal theme when the device dials into the BBS social layer.
What makes this project genuinely special beyond the novelty factor is the technical depth underneath. The LLM pipeline is dual-mode: you can point it at OpenRouter for cloud inference or configure a local endpoint if you want to run something like Qwen 2.5 Coder 1.5B directly on the Pi. In real-world testing with the Pi 4 4GB, local inference with a 1.5B parameter model runs at approximately 3–6 tokens per second depending on model quantization, which is actually faster than the simulated typing speed — meaning the Pi works ahead of the display buffer rather than stalling it. On a Pi Zero 2 W with its 512MB of RAM, local models above about 500M parameters become impractical, making the OpenRouter cloud endpoint the smarter choice for that hardware. The creator noted that prompt adherence is still a challenge at these small model sizes, with Gemma 4 being the next candidate to test.
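To see why even a few tokens per second is enough, compare the generation rate to the typing rate. The characters-per-token figure and the simulated typing speed below are assumptions for illustration (the project's actual typing speed is configurable):

```python
# Back-of-the-envelope: does on-device generation keep up with the
# simulated typing? All figures here are illustrative assumptions.
CHARS_PER_TOKEN = 4.0   # rough average for English text and code
GEN_TOK_PER_SEC = 4.0   # mid-range of the 3-6 tok/s observed on a Pi 4

def generation_chars_per_sec(tok_per_sec, chars_per_token=CHARS_PER_TOKEN):
    """Characters of output the model produces per second."""
    return tok_per_sec * chars_per_token

# A convincing "human" typing simulation runs around 350 chars/minute.
typing_chars_per_sec = 350 / 60                        # ~5.8 chars/s
gen_rate = generation_chars_per_sec(GEN_TOK_PER_SEC)   # 16.0 chars/s

print(f"generation: {gen_rate:.1f} chars/s, typing: {typing_chars_per_sec:.1f} chars/s")
print("display buffer grows" if gen_rate > typing_chars_per_sec else "display buffer drains")
```

Even at the low end of the observed range (3 tok/s ≈ 12 chars/s), generation outruns any plausible human-speed typing simulation, so the display never waits on the model.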
The social BBS layer is where things get genuinely weird and delightful. Multiple TinyProgrammer devices can connect to a shared bulletin board system, where each Pi has a distinct personality that determines which boards it visits, what tone it posts in, and how it reacts to other devices’ code. Think of it as a tiny, self-contained hacker culture running on your desk hardware. At the end of the day, the device clocks out, the screen transitions to a Starry Night screensaver animation, and it comes back online the next morning ready to code again. A built-in web server hosts a dashboard for changing settings without SSH. If you want to explore more creative Raspberry Pi builds, our guide on how the updated Voice Bonnet works again on Raspberry Pi Trixie covers another excellent hardware expansion project worth pairing with a build like this.
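The project defines its own wire protocol for the BBS, but the shape of the idea is plain TCP. Here is a toy loopback sketch — one device posting a JSON message to a shared board — where the port handling, message fields, and JSON format are all invented for illustration:

```python
import json
import socket
import threading

HOST = "127.0.0.1"
addr = {}        # the server publishes its OS-assigned port here
received = []    # posts the toy board has accepted
ready = threading.Event()

def board_server():
    """Accept a single post and record it (toy one-message board)."""
    with socket.socket() as srv:
        srv.bind((HOST, 0))                  # let the OS pick a free port
        addr["port"] = srv.getsockname()[1]
        srv.listen(1)
        ready.set()                          # tell the client it may connect
        conn, _ = srv.accept()
        with conn:
            received.append(json.loads(conn.recv(4096).decode()))

t = threading.Thread(target=board_server)
t.start()
ready.wait()

# A "device" dials in and posts to the jokes board.
with socket.socket() as cli:
    cli.connect((HOST, addr["port"]))
    cli.sendall(json.dumps({"from": "pi-zero-07", "board": "jokes",
                            "body": "my code compiles on the first try"}).encode())
t.join()
print(received[0]["from"], "->", received[0]["board"])
```

A real shared board would loop on `accept`, persist posts, and fan them out to the other connected devices, but the transport layer is no more exotic than this.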
1. Raspberry Pi Zero 2 W — Best Budget Build for TinyProgrammer
The Pi Zero 2 W is the smallest viable platform for TinyProgrammer and the one that makes this project genuinely accessible. At around $15, it packs a quad-core Arm Cortex-A53 running at 1GHz with 512MB of LPDDR2 SDRAM into a board roughly the size of a stick of gum. It has built-in 2.4GHz 802.11 b/g/n Wi-Fi and Bluetooth 4.2, which means you can set it up cable-free on your desk and connect it to the BBS network over Wi-Fi without any additional hardware. In a real home lab setup, the Pi Zero 2 W running TinyProgrammer pointed at an OpenRouter endpoint draws roughly 0.8–1.2W at idle and spikes to about 2.1W during active inference calls — making it one of the most power-efficient builds you can run 24/7.
The key limitation to understand is RAM. With only 512MB, you are not running any local language model above about 300–400MB quantized size without hitting swap, which on a microSD card will destroy performance and shorten card lifespan. The correct approach on Zero 2 W is to use the OpenRouter cloud endpoint for inference and let the Pi handle display rendering, mood state management, typing simulation, and BBS connectivity. In practice this works very well — the display pipeline and UI logic are lightweight enough that the Zero 2 W handles them without breaking a sweat. Community consensus on r/homelab is that the Zero 2 W is the right choice for a minimal, desk-ornament style TinyProgrammer where you want low power and small footprint over raw local inference capability.
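The sizing logic is easy to sketch: estimate the quantized model's footprint and fall back to the cloud endpoint when it will not fit in RAM. The overhead factor and OS reserve below are assumptions for illustration, not the project's actual heuristics:

```python
def quantized_size_mb(params_millions, bits_per_weight=4, overhead=1.2):
    """Rough in-RAM footprint of a quantized model in MB.
    `overhead` covers KV cache and runtime buffers (an assumed factor)."""
    return params_millions * 1e6 * bits_per_weight / 8 / 1e6 * overhead

def pick_backend(total_ram_mb, params_millions, os_reserve_mb=200):
    """Choose local inference only when the model fits without swapping."""
    budget = total_ram_mb - os_reserve_mb
    return "local" if quantized_size_mb(params_millions) <= budget else "cloud"

print(pick_backend(512, 1500))   # 1.5B model on a Zero 2 W -> "cloud"
print(pick_backend(4096, 1500))  # same model on a Pi 4 4GB  -> "local"
```

By this estimate a 4-bit 1.5B model needs around 900MB, which is why the Zero 2 W tops out near the 300–400MB quantized sizes mentioned above.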
Specs: Quad-core Cortex-A53 @ 1GHz, 512MB LPDDR2, 2.4GHz 802.11 b/g/n Wi-Fi, Bluetooth 4.2, mini-HDMI, USB 2.0 OTG, 40-pin GPIO. Best for: Budget builders, desk ornament builds, cloud-endpoint setups. Pros: Extremely affordable at ~$15, tiny footprint fits anywhere, built-in Wi-Fi eliminates dongles, sub-2W power draw. Cons: 512MB RAM rules out most local inference models above 500M parameters.
Check price on Amazon | Amazon.ca
2. Raspberry Pi 4 Model B (4GB) — Best Performance Build
If you want to run local language models directly on the device rather than routing to a cloud endpoint, the Raspberry Pi 4 Model B with 4GB of RAM is the correct choice. The Pi 4 uses a quad-core Cortex-A72 running at 1.8GHz with LPDDR4 memory, which gives it substantially more headroom for inference tasks. Based on real-world testing with TinyProgrammer, the Pi 4 4GB can run Qwen 2.5 Coder 1.5B in 4-bit quantized GGUF format at approximately 4–5 tokens per second using llama.cpp — not fast by server standards, but perfectly adequate for the simulated typing speed the project uses. The Pi 4 also handles the dual-mode display switching between the Mac IDE aesthetic and the green terminal BBS view more smoothly than the Zero 2 W.
The Pi 4 also gives you gigabit Ethernet if you want a wired connection to your home lab network, two USB 3.0 ports, and dual micro-HDMI outputs. For a TinyProgrammer build where you want to run a small display directly off the board and also have a wired network connection for reliable BBS connectivity, the Pi 4 is the clear winner. Power draw is higher — expect around 3.4W at idle and up to 6.5W under sustained inference load — but that is still trivially cheap to run continuously. If you are already running other self-hosted services and want to understand the broader power picture of your home lab, our roundup of the best smart plug energy monitoring picks for home automation is worth a read for tracking per-device consumption.
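That power difference is easy to put in dollar terms. A sketch of the yearly running cost, assuming an electricity rate of $0.15/kWh (substitute your own tariff):

```python
def annual_cost_usd(watts, usd_per_kwh=0.15):
    """Yearly electricity cost of a device running 24/7.
    The $0.15/kWh rate is an assumption; use your local tariff."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

for label, w in [("Pi Zero 2 W idle", 1.0),
                 ("Pi 4 idle", 3.4),
                 ("Pi 4 sustained inference", 6.5)]:
    print(f"{label}: ${annual_cost_usd(w):.2f}/year")
```

Even the worst case — a Pi 4 running inference around the clock — lands under $10 a year at that rate, which is what "trivially cheap" means in practice.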
Specs: Quad-core Cortex-A72 @ 1.8GHz, 4GB LPDDR4, Gigabit Ethernet, dual-band Wi-Fi, USB 3.0, dual micro-HDMI, 40-pin GPIO. Best for: Local inference builds, multi-service Pi setups, users who want to run Qwen or DeepSeek Coder locally. Pros: 4GB RAM handles 1.5B parameter models comfortably, Gigabit Ethernet for stable BBS networking, strong community support and documentation. Cons: Costs roughly $55, requires active cooling for sustained inference workloads.
Check price on Amazon | Amazon.ca
3. Waveshare 3.5-Inch RPi LCD Display — Best Display for TinyProgrammer
TinyProgrammer’s visual identity is half the appeal. The Mac IDE aesthetic during coding sessions and the green-on-black terminal during BBS sessions are rendered on whatever display you connect, and the Waveshare 3.5-inch SPI LCD is the sweet spot for a self-contained desk build. At 480×320 resolution with a 125MHz SPI interface, it mounts directly onto the Pi’s 40-pin GPIO header, requires no external power, and sits flush enough that the whole assembly looks intentional rather than cobbled together. The display refreshes at approximately 25–30 FPS for the kind of text-rendering workload TinyProgrammer produces, which is more than sufficient for the character-by-character typing simulation.
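A quick sanity check on those frame rates: at 16 bits per pixel, a full 480×320 frame is about 2.5 Mbit, so a 125MHz SPI bus caps out around 40 full frames per second before driver overhead — consistent with the 25–30 FPS observed. A sketch of the arithmetic (the efficiency factor is an assumption covering command and protocol overhead):

```python
def spi_fps_budget(width, height, bpp, spi_hz, efficiency=0.8):
    """Theoretical full-frame refresh ceiling over an SPI bus.
    `efficiency` discounts command/protocol overhead (assumed figure)."""
    bits_per_frame = width * height * bpp
    return spi_hz * efficiency / bits_per_frame

# 480x320 @ 16-bit colour over a 125MHz SPI bus
print(f"{spi_fps_budget(480, 320, 16, 125_000_000):.0f} fps ceiling")
```

In practice partial-frame updates (only redrawing the characters that changed) are what make text-heavy workloads like the typing simulation feel smooth well below that ceiling.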
The Waveshare 3.5-inch also has a resistive touchscreen layer, which is genuinely useful for the TinyProgrammer web dashboard — you can tap through settings adjustments directly on the device without pulling out a keyboard. Driver support under Raspberry Pi OS Bookworm is solid, with the fbcp-ili9341 framebuffer driver giving you the smoothest rendering. In a real home lab setup, I found that taping a small piece of anti-glare film over the screen makes the green terminal text during BBS sessions much easier to read under desk lighting. The unit costs roughly $18–$22 depending on the retailer.
Specs: 3.5-inch TFT LCD, 480×320 resolution, SPI interface up to 125MHz, resistive touch, GPIO header mount, compatible with Pi Zero 2 W and Pi 4. Best for: Self-contained desk builds, users who want touch interaction with the web dashboard. Pros: Direct GPIO mount with no extra cables, resistive touch adds interactivity, good driver support under current Pi OS, affordable at ~$20. Cons: 480×320 resolution is low by modern standards — text rendering at very small font sizes can look soft.
Check price on Amazon | Amazon.ca
4. Samsung PRO Endurance 32GB microSD — Best Storage for a 24/7 TinyProgrammer Build
This is the unglamorous but critical pick. TinyProgrammer runs continuously — it is writing, logging, connecting to the BBS, and updating state files all day long. Standard consumer microSD cards are not rated for this kind of sustained write workload and can fail within months in a 24/7 application. The Samsung PRO Endurance series is designed for dashcam and surveillance camera use cases, which means it is built to tolerate continuous write cycles. The 32GB model is rated for approximately 43,800 hours of continuous recording workload, which translates to 5 years of real-world use in a TinyProgrammer context. Sequential read speed hits 100 MB/s and write speed reaches 30 MB/s — more than adequate for the operating system and inference model storage the project needs.
For the Pi Zero 2 W build where you are not storing local models, 32GB is generous — the OS image plus TinyProgrammer software sits comfortably under 8GB. For the Pi 4 build where you might store a 1.5B parameter GGUF model file (approximately 1.1GB in Q4_K_M quantization) plus the OS, 32GB still gives you ample headroom. At roughly $12–$15, the PRO Endurance is one of the best value investments in the entire build. Community consensus on r/homelab is consistent: never run a 24/7 Pi project on a generic microSD card. The failure rate difference is dramatic enough that experienced builders treat endurance-rated cards as non-negotiable.
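The endurance and capacity claims above are easy to verify in a couple of lines (the OS and model sizes are the approximations quoted above, not measured values):

```python
def endurance_years(rated_hours):
    """Convert a continuous-use endurance rating to calendar years of 24/7 use."""
    return rated_hours / (24 * 365)

def storage_headroom_gb(card_gb, os_gb=8.0, model_gb=1.1):
    """Free space after the OS image and one quantized GGUF model
    (approximate sizes from the build described above)."""
    return card_gb - os_gb - model_gb

print(endurance_years(43_800))   # 5.0 years
print(storage_headroom_gb(32))   # ~22.9 GB free
```

That headroom matters less for capacity than for wear levelling: the more free flash the controller has to rotate writes across, the longer the card lasts under a constant-logging workload.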
Specs: 32GB capacity, Class 10 / U1 / V10, 100 MB/s read, 30 MB/s write, rated for 43,800 hours continuous use. Best for: Any 24/7 Pi project, especially TinyProgrammer which writes state and logs continuously. Pros: Endurance rating purpose-built for continuous write workloads, affordable at ~$13, Samsung reliability track record, 5-year limited warranty. Cons: Write speed of 30 MB/s is slower than premium cards — not a bottleneck for this use case but worth noting.
Check price on Amazon | Amazon.ca
5. Anker 4-Port USB 3.0 Hub — Best Accessory for Multi-Pi BBS Setups
Once you build one TinyProgrammer, you will want to build three. The BBS social layer is dramatically more interesting when multiple devices with different personalities are connected and interacting with each other — one posting sarcastic code critiques, another sharing jokes, a third reacting to news. Powering three or four Pi Zero 2 W units from a single USB hub is the most cost-efficient way to build a small BBS colony on your desk. The bus-powered Anker 4-Port USB 3.0 hub, with its 5V/2.4A shared bus, handles two Pi Zero 2 W units simultaneously without issue; for three or more, you will want the powered variant with its own 12W external adapter.
Beyond power delivery, the hub is useful for attaching a single keyboard and USB drive across multiple Pi units during setup. The Anker unit specifically has a reputation in the home lab community for stable power delivery without the voltage droop that cheaper hubs exhibit — voltage droop under load is a known cause of SD card corruption on Pi Zero units. At roughly $18–$22 for the powered version, it is a small investment that protects your entire multi-Pi setup. If you are building a larger home lab ecosystem around this kind of project, the guide on 7 essential home lab upgrades for when self-hosting reality hits covers the broader infrastructure decisions worth thinking through.
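Budgeting the hub is simple arithmetic. The sketch below keeps a safety margin against the voltage droop mentioned above — the 80% margin is an assumption, not a manufacturer figure:

```python
def max_pis_on_hub(hub_watts, per_pi_watts, margin=0.8):
    """How many Pis a shared-bus hub can safely power, derated by
    `margin` to guard against voltage droop (assumed derating factor)."""
    return int(hub_watts * margin // per_pi_watts)

# 12W powered hub vs. a Zero 2 W peaking ~2.1W during inference calls
print(max_pis_on_hub(12, 2.1))   # 4
# The same hub against a Pi 4 under sustained inference load (~6.5W)
print(max_pis_on_hub(12, 6.5))   # 1
```

This is why the Zero 2 W is the natural colony board: the powered hub's 12W budget covers a full four-Pi BBS, while a single Pi 4 under load would eat most of it alone.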
Specs: 4x USB 3.0 ports, 5Gbps data transfer per port, 12W powered adapter (powered variant), backward compatible with USB 2.0. Best for: Multi-Pi BBS colony builds, shared keyboard and storage setups during configuration. Pros: Stable power delivery protects against SD card corruption, 4 ports handles a small Pi colony, compact desktop footprint, Anker build quality is reliable. Cons: USB 3.0 speed is irrelevant for Pi Zero 2 W which only has USB 2.0 — the hub’s speed advantage is only relevant on Pi 4 builds.
Check price on Amazon | Amazon.ca
Full Comparison Table
| Product | Approx. Price | Performance Role | Power Draw | Ease of Setup |
|---|---|---|---|---|
| Raspberry Pi Zero 2 W | ~$15 | Cloud-endpoint inference, display, BBS | 0.8–2.1W | Easy (standard Pi OS) |
| Raspberry Pi 4 Model B 4GB | ~$55 | Local model inference up to 1.5B params | 3.4–6.5W | Easy (well-documented) |
| Waveshare 3.5″ RPi LCD | ~$20 | Display output, touch interaction | ~0.3W (SPI) | Moderate (driver setup needed) |
| Samsung PRO Endurance 32GB | ~$13 | OS + model storage, 24/7 write endurance | Negligible | Very Easy (flash and go) |
| Anker 4-Port USB 3.0 Hub | ~$20 | Multi-Pi power and data hub | 12W max (shared) | Very Easy (plug and play) |
Budget vs Premium Pick
Budget Pick: Raspberry Pi Zero 2 W + Samsung PRO Endurance 32GB (~$28 combined)
If you want to get a TinyProgrammer build running on your desk for the least possible money, the Pi Zero 2 W paired with a Samsung PRO Endurance microSD is the answer. You route inference to OpenRouter, keep local storage lean, and end up with a device that draws under 2.1W continuously and costs next to nothing to run. Add the Waveshare display and you are still under $50 for a complete, functional build. This is the entry point I would recommend to anyone who wants to try the project before committing to more hardware.
Premium Pick: Raspberry Pi 4 Model B 4GB + Waveshare Display + Samsung PRO Endurance (~$88 combined)
For the full TinyProgrammer experience with local model inference, smooth display rendering, and the ability to run Qwen 2.5 Coder 1.5B or DeepSeek Coder 1.3B directly on-device, the Pi 4 4GB build is the premium recommendation. You get approximately 4–5 tokens per second on quantized 1.5B models, a touchscreen display for dashboard interaction, and the headroom to experiment with larger models as the project evolves. Add active cooling — a $5–$8 heatsink fan case — and this setup can run inference continuously without thermal throttling.
How TinyProgrammer Actually Works: A Technical Breakdown
The architecture is cleaner than you might expect for a project this whimsical. At the core is an LLM inference pipeline that accepts a configurable prompt defining what kind of Python program to write next. The pipeline supports two backends: a local endpoint (compatible with llama.cpp server, Ollama, or any OpenAI-compatible local server) and OpenRouter for cloud inference. The response is streamed token by token and fed into a typing simulator that introduces human-like delays, occasional backspaces, and correction sequences. This is not just cosmetic — the mood system affects the error rate and correction speed, so a device in a frustrated mood types faster and makes more mistakes, while a focused mood produces slower, more deliberate keystrokes.
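Here is a minimal, self-contained sketch of how a mood-driven typing simulator like the one described could work. The mood parameters, error rates, and backspace convention are all illustrative assumptions, not the project's actual values:

```python
import random
import time

# Hypothetical mood table: (delay between keystrokes, probability of a typo).
# A "frustrated" device types faster but fumbles more, per the description.
MOODS = {
    "focused":    (0.18, 0.01),
    "frustrated": (0.07, 0.08),
}

def type_out(text, mood="focused", rng=None, sleep=time.sleep):
    """Yield the keystroke sequence for `text`, including deliberate
    typo + backspace ("\b") correction sequences."""
    delay, error_rate = MOODS[mood]
    rng = rng or random.Random()
    keys = []
    for ch in text:
        if ch.isalpha() and rng.random() < error_rate:
            wrong = rng.choice("abcdefghijklmnopqrstuvwxyz")
            keys += [wrong, "\b"]   # make a mistake, then correct it
            sleep(delay * 2)        # corrections take a beat longer
        keys.append(ch)
        sleep(delay)
    return keys

def resolve_backspaces(keys):
    """Collapse typo/backspace pairs back to the final rendered text."""
    buf = []
    for k in keys:
        buf.pop() if k == "\b" else buf.append(k)
    return "".join(buf)

# Reproducible run with sleeping disabled for demonstration:
out = type_out("def main():", mood="frustrated",
               rng=random.Random(7), sleep=lambda _: None)
print(resolve_backspaces(out))  # always recovers "def main():"
```

The key property the real project needs is the same one this sketch has: whatever mistakes the mood layer injects, the corrected output must always converge to the text the model actually generated.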
The BBS layer connects devices over standard TCP networking. Each device is assigned a personality profile at initialization — a combination of traits like curiosity level, humor tendency, and critical disposition — that determines which bulletin boards it visits during break periods, how it phrases posts, and how it reacts to other devices’ code samples. The daily news reaction feature pulls a configurable news feed and generates commentary in character. The web dashboard, served on a local port, exposes settings for prompt customization, program type selection, color filter adjustment, and BBS connectivity configuration. Based on real-world testing, the dashboard is genuinely usable for day-to-day tweaking without needing to SSH into the device. The project is fully open source under GPL-3.0 at github.com/cuneytozseker/TinyProgrammer — meaning you can fork it, modify the personality system, add new BBS boards, or swap in different display themes entirely.
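The personality-to-board mapping could be sketched like this. The trait names, board names, and the deterministic `max` selection are hypothetical stand-ins for whatever schema the project actually uses (which might well sample probabilistically instead):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Personality:
    """Illustrative trait profile assigned at initialization."""
    curiosity: float   # 0..1 - drawn to show-and-tell threads
    humor: float       # 0..1 - drawn to the jokes board
    critical: float    # 0..1 - drawn to code-review threads

# Each board scores a profile by the trait it appeals to (assumed mapping).
BOARDS = {
    "show-and-tell": lambda p: p.curiosity,
    "jokes":         lambda p: p.humor,
    "code-review":   lambda p: p.critical,
}

def pick_board(p: Personality) -> str:
    """Visit whichever board the profile weights highest."""
    return max(BOARDS, key=lambda name: BOARDS[name](p))

grumpy_reviewer = Personality(curiosity=0.3, humor=0.2, critical=0.9)
print(pick_board(grumpy_reviewer))  # "code-review"
```

Because the profile is fixed at initialization, each device develops a recognizable posting habit over time — which is exactly what makes a multi-device BBS colony feel like it has distinct inhabitants.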
Conclusion: Build Your Own TinyProgrammer Today
TinyProgrammer is one of those rare home lab projects that is simultaneously technically interesting and genuinely fun to have running in the background. Whether you build it on a $15 Pi Zero 2 W pointed at an OpenRouter endpoint or a Pi 4 running local inference, you end up with a device that has personality, follows a daily rhythm, and gets more interesting when you connect multiple units together on a shared BBS. The hardware cost for a complete build starts at around $28 and tops out around $88 for the full premium setup — well within the range of a weekend project that will still be sitting on your desk entertaining you months later.
The open-source GPL-3.0 codebase means there is real room to extend and contribute. The creator is actively iterating — Gemma 4 support for better local prompt adherence is already being evaluated, and the web dashboard makes configuration accessible enough that you do not need to be a Python expert to get meaningful customization out of it. If you want to track the power draw of your TinyProgrammer build alongside the rest of your home lab devices, check out our guide to the best smart plug energy monitoring picks for home automation in 2026 — it pairs perfectly with any always-on Pi project.
Ready to build? Check current prices on the hardware you need and start your TinyProgrammer build this weekend. And if you have already built something similar — a Pi project with a daily routine, a local inference setup, or your own BBS experiment — drop a comment below and share what you are running. The home lab community is always better when we share what actually works on our desks.
Check prices and build your kit:
Pi Zero 2 W on Amazon | Pi 4 4GB on Amazon | Waveshare Display on Amazon | Samsung PRO Endurance on Amazon