# NAM vs GuitarML vs Aida-X on Raspberry Pi 5: Real-Time Latency Tested in 2026

If you landed here from a search, you probably already know the basic premise. A Raspberry Pi running neural amp modeling software can replace a physical amp — at least in headphones or direct into a DAW. We covered the initial build process back in the original HomeNode post, so this review skips the soldering and OS setup and gets straight to what matters: which of the three main software options actually sounds good, stays stable, and doesn’t blow up your CPU mid-set.

The three contenders are Neural Amp Modeler (NAM), GuitarML (specifically the Proteus and Smart Amp plugins), and Aida-X. All three run on Raspberry Pi 5 in 2026. None of them are magic. They each have real tradeoffs, and the Pi 5’s BCM2712 quad-core Cortex-A76 at 2.4 GHz changes the math considerably compared to the BCM2711 in the Pi 4 — but it doesn’t erase every problem.

## Quick Background

Neural amp modeling works by training a small recurrent or convolutional neural network on audio recorded through a real amp. You feed it a test signal, capture the output, train a model, and you get a file that theoretically reproduces that amp’s behaviour in real time. The quality depends on the model architecture, the training data, and how much compute the inference engine demands at runtime.
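To make the runtime cost concrete: LSTM-based modelers (the approach Proteus uses) step a small recurrent cell once per audio sample. Here is a toy single-step LSTM in plain Python — illustrative weights and sizes only, not any real capture format:

```python
import math

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step for a single audio sample x.
    W: four input-weight vectors, U: four recurrent matrices,
    b: four bias vectors. Gate order: input, forget, cell, output."""
    H = len(h)
    gates = []
    for g in range(4):
        # Recurrent term U[g] @ h is O(H^2): this dominates per-sample cost.
        z = [W[g][j] * x + sum(U[g][j][k] * h[k] for k in range(H)) + b[g][j]
             for j in range(H)]
        gates.append(z)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    i = [sig(v) for v in gates[0]]
    f = [sig(v) for v in gates[1]]
    g_ = [math.tanh(v) for v in gates[2]]
    o = [sig(v) for v in gates[3]]
    c_new = [f[j] * c[j] + i[j] * g_[j] for j in range(H)]
    h_new = [o[j] * math.tanh(c_new[j]) for j in range(H)]
    return h_new, c_new
```

At 48 kHz this cell runs 48,000 times a second per layer, and the recurrent term scales with the square of the hidden size, so doubling it roughly quadruples the load. WaveNet-style stacks like NAM's trade the recurrence for dilated convolutions, which buys accuracy at a higher per-sample cost, which is the tradeoff this whole comparison keeps running into.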

The Pi 5 gives you roughly 2–2.5× the single-threaded performance of the Pi 4, which is meaningful here because amp model inference is largely single-threaded. With a Focusrite Scarlett Solo or 2i2 over USB, you can get JACK running at 48 kHz/64-sample buffers without immediately crying. With a cheaper interface like a Behringer UCA222, you’re looking at 128 or 256 samples minimum before ALSA stops complaining, and the UCA222’s audio quality is honestly not the bottleneck you want in a neural amp rig anyway.

All testing here was done on a Pi 5 (8 GB), Patchbox OS (based on Raspberry Pi OS Bookworm), JACK2 via QjackCtl, and a Focusrite Scarlett Solo (3rd gen) at 48 kHz. I used both 64 and 128-sample buffer sizes. Guitar was a standard-output humbucker guitar. All three apps were tested at software versions current as of May 2026: NAM 0.7.x, GuitarML Proteus 1.4.x, and Aida-X 1.3.x (as the LV2 plugin via MOD-host).
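The per-period latency figures quoted throughout are just buffer size divided by sample rate. A quick sanity check (one-way, ignoring the interface's own buffering and converter delay):

```python
def period_latency_ms(frames: int, rate: int = 48000) -> float:
    """One-way latency of a single JACK period, in milliseconds."""
    return frames / rate * 1000.0

for frames in (64, 128, 256):
    print(f"{frames} samples -> {period_latency_ms(frames):.2f} ms")
    # 64 -> 1.33 ms, 128 -> 2.67 ms, 256 -> 5.33 ms
```

Real round-trip latency is higher: JACK runs multiple periods per direction (typically two), and USB transfer plus converter delay adds a little more on top.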

## Neural Amp Modeler (NAM)

### What It Is

NAM is the project from Steven Atkinson, and it’s become the de facto standard for neural amp captures in the DIY world. The model format (.nam files) is open, and ToneHunt.org hosts thousands of free captures — everything from a blackface Fender Deluxe to a Diezel VH4. The NAM plugin itself is available as a standalone app, VST3, and LV2. On Pi 5, you’re using the LV2 version inside MOD-host or a headless JACK session.

### Sound Quality

Honest answer: NAM captures sound the closest to the source amp of any of these three options, when the model was trained well. A good NAM capture of a Marshall JCM800 through a proper impulse response loader is genuinely convincing. A bad capture from ToneHunt — one trained on 10 minutes of data with mic bleed — sounds like a Marshall JCM800 filtered through a flannel blanket.

The architecture NAM uses (WaveNet-style, with recurrent layers depending on model type) handles pick attack and dynamic response better than simpler LSTM-only approaches. ToneHunt hosts both standard and lite models; on the Pi 5, standard models will run at 64-sample buffers, but you're running hot.

### CPU Usage on Pi 5

- **64-sample buffer (1.3 ms latency), standard NAM model:** CPU sits around 68–75% on one core. Occasional xruns under heavy palm-muted chugging. Not stable enough for a live rig unless you're playing clean or at low gain.
- **64-sample buffer, NAM lite model:** 45–52% CPU. Stable. Sounds noticeably thinner on high-gain captures.
- **128-sample buffer (2.6 ms), standard model:** 55–62% CPU. Stable. This is where NAM actually becomes usable for live playing on the Pi 5.
- **256-sample buffer (5.3 ms):** Fully stable, barely breaks a sweat. But 5.3 ms starts to feel mushy on attack, especially for faster picking.

The Pi 4 situation was worse — standard NAM models at 64 samples on the BCM2711 would xrun constantly, and even 128-sample buffers were borderline. The Pi 5 is a meaningful improvement, but standard NAM at 64 samples is still not reliable under metal-rhythm sustained loads. I’ll be direct: if your genre involves fast downpicking or tremolo picking for more than 8 bars straight, use 128 samples or switch to a lite model.
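Because inference pins a single core, a whole-system CPU average hides how close you are to the edge. A small per-core sampler over `/proc/stat` (the standard Linux kernel interface; run it alongside JACK and watch the busy core) is all you need:

```python
import time

def read_core_times(stat_text):
    """Parse per-core (busy, total) jiffies from /proc/stat contents."""
    cores = {}
    for line in stat_text.splitlines():
        # Per-core lines look like "cpu0 user nice system idle iowait irq ...";
        # skip the aggregate "cpu " line.
        if line.startswith("cpu") and line[3:4].isdigit():
            name, *vals = line.split()
            vals = [int(v) for v in vals]
            idle = vals[3] + vals[4]          # idle + iowait
            cores[name] = (sum(vals) - idle, sum(vals))
    return cores

def busy_percent(interval=1.0):
    """Sample each core's busy percentage over `interval` seconds (Linux only)."""
    with open("/proc/stat") as f:
        before = read_core_times(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = read_core_times(f.read())
    return {c: 100.0 * (after[c][0] - before[c][0])
               / max(1, after[c][1] - before[c][1]) for c in before}
```

If the core running the JACK process callback sits above roughly 80% while you chug, you're in xrun territory regardless of what the four-core average says.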

### Model Library

ToneHunt is free and community-driven. The quality range is enormous. ML-Sound-Lab sells professionally trained captures for NAM (and other formats) — their models are consistently better trained, but you’re paying $10–$25 per pack. For a headless Pi rig, model management means dropping .nam files into a folder and reloading MOD-host, which is clunky but workable.
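The folder-drop workflow is easy to script. A minimal sketch that builds a MIDI program number to `.nam` file bank — the directory path is whatever your MOD-host setup reads captures from, and the numbering scheme here is my own convention, not anything NAM defines:

```python
from pathlib import Path

def build_nam_bank(model_dir):
    """Map MIDI program-change numbers to .nam capture files,
    sorted by filename so the ordering is predictable."""
    captures = sorted(Path(model_dir).glob("*.nam"))
    return {pc: path for pc, path in enumerate(captures)}
```

Prefix your filenames with numbers (`01-jcm800.nam`, `02-plexi.nam`) and the program-change order stays stable as you add captures.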

### UI

There’s no pretty touchscreen UI for NAM on Pi without building one yourself. Most people run it headless with a Python or Pure Data wrapper for preset switching via MIDI. Patchbox OS with MOD-host gives you a browser-based UI at your Pi’s IP address, which works fine from a phone or laptop on the same network. It’s not ideal for a gigging rig where you want instant pedal-board-style control, but for home use it’s perfectly reasonable.
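The wrapper pattern is straightforward: mod-host accepts text commands over a local TCP socket (port 5555 by default), so a preset switcher just formats commands and fires them at that socket. A sketch — the plugin URI and port symbol below are placeholders, and exact reply framing can vary between mod-host versions:

```python
import socket

MODHOST = ("127.0.0.1", 5555)  # mod-host's default command socket

def mh_add(uri: str, instance: int) -> str:
    """Format a mod-host 'add' command (load an LV2 plugin as instance N)."""
    return f"add {uri} {instance}"

def mh_param_set(instance: int, symbol: str, value: float) -> str:
    """Format a mod-host 'param_set' command for one plugin port."""
    return f"param_set {instance} {symbol} {value}"

def send(cmd: str) -> bytes:
    """Fire one command at mod-host and return the raw response."""
    with socket.create_connection(MODHOST, timeout=2.0) as sock:
        sock.sendall(cmd.encode())
        return sock.recv(1024)

# On a MIDI program change you might rebuild the chain, e.g.:
# send(mh_add("http://example.org/amp-plugin", 0))  # placeholder URI
# send(mh_param_set(0, "gain", 0.5))                # placeholder symbol
```

Pair this with a MIDI library listening for program-change messages and you have the headless preset switching described above in a few dozen lines.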

## GuitarML (Proteus / Smart Amp)

### What It Is

GuitarML is a collection of projects from Keith Bloemer. Proteus and Smart Amp are the two most relevant. Proteus is aimed at amp captures using a similar LSTM approach to NAM, while Smart Amp also includes some cabinet simulation. Both are available as LV2 plugins. The model format (.json files containing LSTM weights) is different from NAM, so you can’t share captures between them directly.

### Sound Quality

Proteus sounds good, particularly on clean and mid-gain captures. The LSTM architecture it uses is efficient and the inference is fast. Where it falls short compared to NAM is in dynamic nuance — the pick attack and bloom on a good tube amp don’t come through as clearly. It’s closer to “sounds like an amp” than “sounds like *that* amp.” For jazz, blues, or classic rock tones, this distinction matters less. For someone who’s spent years playing through a specific amp and wants their modeler to nail it, NAM’s architecture generally edges it out.

Smart Amp adds a trained cabinet model into the mix, which simplifies the signal chain — you don’t need a separate impulse response loader. Convenient, but the combined IR+amp model means less flexibility.

### CPU Usage on Pi 5

This is where GuitarML Proteus genuinely shines.

- **64-sample buffer, Proteus standard model:** 28–35% CPU. Rock solid. No xruns across 30 minutes of continuous testing, including sustained metal-rhythm passages.
- **128-sample buffer:** 18–22% CPU. Trivial load.

The LSTM architecture used in Proteus is significantly lighter than NAM’s WaveNet-style layers. On the Pi 4 (BCM2711), Proteus at 64 samples was already usable. On the Pi 5, it barely registers. If CPU headroom is your primary concern — say, you want to chain amp model + reverb + delay in a single JACK session — Proteus is the practical choice.
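A practical way to reason about that headroom: a serial chain executes inside a single JACK process callback, so per-plugin single-core loads are roughly additive. The Proteus figure below comes from the measurements above; the effects loads are placeholders, not measurements from this test:

```python
def chain_load(plugin_loads: dict, ceiling: float = 80.0):
    """Sum per-plugin single-core DSP loads (%) for a serial chain and
    check them against a safety ceiling. Staying under ~80% of one core
    leaves margin for load spikes (pick transients, garbage collection,
    USB interrupt bursts)."""
    total = sum(plugin_loads.values())
    return total, total <= ceiling

# Proteus worst case from the 64-sample test above; effects are assumed.
chain = {"proteus": 35.0, "cab_ir": 8.0, "reverb": 15.0, "delay": 6.0}
total, fits = chain_load(chain)   # total = 64.0, fits = True
```

Run the same budget with NAM standard's 68–75% and there's nothing left for effects at 64 samples, which is the whole argument for Proteus in effects-heavy rigs.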

### Model Library

The GuitarML model community is smaller than NAM’s. There are official model packs on the GuitarML GitHub and associated community pages, but you’re not browsing 3,000 captures. This matters if you want a very specific amp sound. It matters less if you’re happy choosing from a curated set of ~80–100 quality captures.

### UI

Same story as NAM — LV2 in MOD-host, browser UI, or build your own. One advantage is that Proteus JSON models load faster than NAM’s, which makes preset switching via MIDI slightly snappier in practice.

## Aida-X

### What It Is

Aida-X is a project from MOD Audio (the same people behind the MOD Dwarf and MOD Duo hardware pedals). It uses a different neural architecture (AIDADSP, based on WaveNet with some optimizations) and is tightly integrated with the MOD ecosystem. The LV2 plugin runs in MOD-host on Patchbox OS and feels the most “finished” of the three options in terms of UI and workflow — because it was designed for exactly this kind of embedded hardware.

Aida-X models (.json, different structure from GuitarML) are hosted on the MOD Cloud platform, and a reasonable number are available free. MOD Audio’s commercial model packs are well-trained.

### Sound Quality

Aida-X sits between NAM and Proteus in sonic accuracy. On clean and mid-gain tones, it's indistinguishable from NAM to most ears. On high-gain, heavily compressed tones — think 5150-era metal or modern djent-adjacent stuff — NAM's architecture captures more of the "feel." Aida-X can sound slightly compressed in a way that's pleasing for some genres and slightly artificial for others. Blues and classic rock players will likely prefer Aida-X over Proteus, and possibly over NAM as well, simply because it integrates cleanly with MOD-host's effects chain (reverb, delay, compressor) in a single browser interface with no extra wrapper work.

### CPU Usage on Pi 5

- **64-sample buffer:** 40–50% CPU depending on model complexity. Stable for most playing styles. Under very aggressive high-gain rhythmic playing, occasional xruns appear — better than NAM standard, worse than Proteus.
- **128-sample buffer:** 30–38% CPU. Fully stable.

### UI

The MOD-host browser interface is the best out-of-the-box experience of the three. You get a pedalboard-style view, drag-and-drop plugin routing, and direct model downloads from MOD Cloud. For a Pi-based rig that’s going to live in a box under your desk or in a gig bag, Aida-X with Patchbox OS is the least painful to set up and operate. If you’ve got a 7″ touchscreen attached to your Pi, the browser UI at 1024×600 is usable without squinting.

## Side-by-Side Summary

| | NAM 0.7.x | GuitarML Proteus 1.4.x | Aida-X 1.3.x |
|---|---|---|---|
| Architecture | WaveNet-style / recurrent | LSTM | AIDADSP (WaveNet variant) |
| 64-sample CPU (Pi 5) | 68–75% (standard) | 28–35% | 40–50% |
| 128-sample CPU (Pi 5) | 55–62% | 18–22% | 30–38% |
| High-gain accuracy | Best | Adequate | Good |
| Clean/mid-gain accuracy | Best | Good | Very good |
| Model library size | Very large (ToneHunt) | Small–medium | Medium (MOD Cloud) |
| Free models | Yes (ToneHunt) | Yes (GitHub) | Yes (MOD Cloud) |
| 64-sample stability | Marginal (standard) | Excellent | Good |
| UI experience | DIY / headless | DIY / headless | MOD-host browser (best) |
| JACK / ALSA | JACK recommended | JACK recommended | JACK (MOD-host) |

## Recommendations by Player Profile

### Jazz and Blues (Clean to Mild Overdrive)

Use Aida-X. The MOD-host browser UI makes it straightforward to chain amp model → cabinet IR → reverb → a light compressor. The sonic accuracy at clean and edge-of-breakup tones is excellent, and you’re not pushing CPU hard enough to care about Proteus’s efficiency advantage. Grab some Fender-flavoured Aida-X captures from MOD Cloud and spend your time playing instead of configuring.

### High-Gain Metal (Palm Muting, Fast Rhythm, Drop Tuning)

Use NAM with lite models at 128-sample buffer, or accept the tradeoffs. NAM standard models sound better for high-gain than the alternatives, but they’re not reliably stable at 64 samples on Pi 5 under sustained rhythmic loads. Lite models at 64 samples are stable but thinner-sounding. The honest recommendation: run 128 samples (2.6 ms), use a well-trained NAM capture from ML-Sound-Lab for your specific amp target, and don’t expect a 64-sample live-rig experience without occasional xruns.

If you absolutely need 64-sample stability for a live situation with metal gain, GuitarML Proteus is your fallback — lower fidelity on the very top-end pick attack, but it won’t stutter during a breakdown.

### Ambient / Effects-Heavy Playing

Use Aida-X. The integration with MOD-host’s plugin ecosystem means you can build a full signal chain — tuner, amp model, reverb, delay, modulation — in one browser window. CPU overhead at 128 samples leaves plenty of headroom for reverb tails and delay repeats without xruns. GuitarML Proteus is also worth considering here if you want maximum CPU headroom for heavy plugin chains.

## Final Notes

All three of these applications have improved meaningfully in the past year and change, and the Pi 5’s BCM2712 makes a real difference compared to running this on a Pi 4. That said, none of them have solved the fundamental tension: the model architectures that sound most accurate are the most expensive to run, and 64-sample buffers on a Pi 5 are still a stretch for the best-sounding option (NAM standard).

For a home recording or practice rig where 128-sample latency (2.6 ms) is acceptable, any of the three work well. For a gigging rig where you need bulletproof stability and hands-free preset recall, you’ll want to spend time building out proper MIDI control and — realistically — test your specific amp models under load before the gig, not during it.

The Focusrite Scarlett Solo remains the audio interface I’d recommend at this budget level for USB reliability and driver behaviour on Raspberry Pi OS. The Behringer UCA222 will work, but you’re adding noise-floor issues and limiting yourself to higher buffer sizes, which compounds every latency problem described above.

If you want to go deeper on the initial build — OS image, JACK configuration, and getting your first NAM model loaded — the original HomeNode setup post covers that ground.

