
Most home lab setups grow faster than the documentation keeping track of them. You add a container here, spin up a VM there, and before long you have no idea whether your NAS is actually under load or your Raspberry Pi DNS resolver is quietly running out of RAM. Professional monitoring stacks used to require serious infrastructure overhead to operate – the kind of thing only sysadmins with dedicated ops time could justify. That’s changed. Prometheus and Grafana together give you production-grade metrics collection and visualization, and the minimum viable version of this stack runs comfortably on a single small VM or even a spare Pi. This guide walks through exactly that: a working install with node_exporter, Prometheus scraping it, and a Grafana dashboard you can actually read – no paid tiers, no cloud accounts required.
What You’re Actually Building
Before touching a terminal, it helps to understand the data flow. These are three separate pieces of software that talk to each other in one direction:
- node_exporter runs on each machine you want to monitor. It exposes a plain HTTP endpoint (default port 9100) that lists hundreds of system metrics in Prometheus text format – CPU, memory, disk I/O, network throughput, filesystem usage, and more.
- Prometheus is a time-series database with a built-in scraper. Every 15 seconds (by default), it pulls the metrics endpoint from each configured target and stores the values with a timestamp. It also has its own query language, PromQL – there's a taste of it just after this list.
- Grafana is the visualization layer. It connects to Prometheus as a data source and lets you build dashboards from PromQL queries, or import pre-built community dashboards with a single ID number.
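To give a flavour of PromQL before anything is installed: once the stack is up, the query below returns per-host CPU usage as a percentage, derived from the raw per-core counters node_exporter exposes. You rarely write these by hand at first – imported dashboards ship with their queries built in – but it helps to know what Grafana is sending under the hood.
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))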
Nothing here requires an internet connection once installed. All three components are open-source and actively maintained. Our reading suggests most home lab operators underestimate how little RAM this stack actually needs: a Prometheus instance scraping five or six hosts uses roughly 200-400 MB at steady state, depending on retention settings.
For this guide, assume you have one monitoring host (Ubuntu 22.04 LTS or Debian 12) where Prometheus and Grafana will live, and at least one target host running node_exporter. The monitoring host can also run node_exporter on itself – that’s a sensible starting point.
Installing node_exporter on Each Target Host
Do this on every machine you want metrics from – the monitoring host included.
First, grab the current release. At time of writing, that’s node_exporter 1.8.1. Check the official releases page for whatever’s current.
cd /tmp
curl -LO https://github.com/prometheus/node_exporter/releases/download/v1.8.1/node_exporter-1.8.1.linux-amd64.tar.gz
tar xvf node_exporter-1.8.1.linux-amd64.tar.gz
sudo mv node_exporter-1.8.1.linux-amd64/node_exporter /usr/local/bin/
sudo useradd --no-create-home --shell /bin/false node_exporter
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
Now create a systemd unit so it starts on boot:
sudo nano /etc/systemd/system/node_exporter.service
Paste this in:
[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
Enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable node_exporter
sudo systemctl start node_exporter
sudo systemctl status node_exporter
Verify the metrics endpoint is responding:
curl http://localhost:9100/metrics | head -20
You should see a wall of text starting with lines like # HELP go_gc_duration_seconds.... That’s the raw Prometheus exposition format. If you see it, node_exporter is working. Repeat this section on every host you want monitored, then come back to the monitoring host for the next steps.
Installing Prometheus on the Monitoring Host
Same basic pattern. Current stable is 2.52.0 as of mid-2024 – again, verify at prometheus.io/download.
cd /tmp
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.52.0/prometheus-2.52.0.linux-amd64.tar.gz
tar xvf prometheus-2.52.0.linux-amd64.tar.gz
sudo mv prometheus-2.52.0.linux-amd64/prometheus /usr/local/bin/
sudo mv prometheus-2.52.0.linux-amd64/promtool /usr/local/bin/
sudo useradd --no-create-home --shell /bin/false prometheus
sudo mkdir /etc/prometheus /var/lib/prometheus
sudo mv prometheus-2.52.0.linux-amd64/consoles /etc/prometheus/
sudo mv prometheus-2.52.0.linux-amd64/console_libraries /etc/prometheus/
sudo chown -R prometheus:prometheus /etc/prometheus /var/lib/prometheus
Now write the configuration file. This is where you tell Prometheus what to scrape:
sudo nano /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets:
          - 'localhost:9100'
          - '192.168.1.50:9100'  # NAS or second host - use your actual IPs
          - '192.168.1.75:9100'  # Pi, router, whatever else
Replace the IPs with your actual host addresses and add as many targets as you have.
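Because the file is whitespace-sensitive YAML, it's worth validating it with promtool (installed alongside Prometheus above) before going further:
promtool check config /etc/prometheus/prometheus.yml
If that reports SUCCESS, set ownership and create the systemd unit: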
sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
sudo nano /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus/ \
  --storage.tsdb.retention.time=30d \
  --web.listen-address=0.0.0.0:9090
[Install]
WantedBy=multi-user.target
The --storage.tsdb.retention.time=30d flag keeps 30 days of data. For a home lab with five or six hosts, that’s typically 2-5 GB of disk. Adjust up or down as needed. Enable and start:
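If disk is a harder constraint than history, Prometheus also accepts a size-based cap via a sibling flag; the two can be combined, and whichever limit is reached first triggers deletion of the oldest data. For example, to cap the database at roughly 5 GB, add one more line to the ExecStart block:
--storage.tsdb.retention.size=5GB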
sudo systemctl daemon-reload
sudo systemctl enable prometheus
sudo systemctl start prometheus
sudo systemctl status prometheus
Open http://<monitoring-host-ip>:9090 in a browser. You should land on the Prometheus web UI. Go to Status → Targets and confirm all your node targets show state UP. If a target shows DOWN, the most common reasons are: firewall blocking port 9100, node_exporter not running on that host, or a wrong IP in the config.
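A quick way to tell these apart is to probe the target from the monitoring host itself – if curl can't reach the endpoint, it's a network or firewall problem; if it can, recheck prometheus.yml. Using one of the placeholder IPs from the earlier config:
curl http://192.168.1.50:9100/metrics | head -5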
Installing Grafana on the Monitoring Host
Grafana maintains an apt repository, which is the cleanest install method on Debian/Ubuntu:
sudo apt-get install -y apt-transport-https software-properties-common wget
sudo mkdir -p /etc/apt/keyrings/
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install grafana
This installs Grafana OSS – the fully open-source edition, no account required. Enable and start it:
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
Grafana runs on port 3000 by default. Open http://<monitoring-host-ip>:3000. Default credentials are admin / admin. You’ll be prompted to change the password on first login – do that before you do anything else, even on a home network.
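If you ever lose that password down the road, it can be reset from the shell with Grafana's bundled CLI (stop the service first if it complains about a locked database):
sudo grafana-cli admin reset-admin-password 'new-password-here'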
Connecting Prometheus and Importing a Dashboard
With Grafana open, add Prometheus as a data source:
- Go to Connections → Data sources in the left sidebar.
- Click Add data source and select Prometheus.
- In the URL field, enter http://localhost:9090 (since Prometheus is on the same host).
- Leave everything else at defaults and click Save & test. You should see a green “Successfully queried the Prometheus API” confirmation.
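If you prefer configuration as code, Grafana can also load the data source from a provisioning file instead of the UI. A minimal sketch, saved as /etc/grafana/provisioning/datasources/prometheus.yml (the filename is arbitrary) and picked up on the next grafana-server restart:
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true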
Now import a pre-built dashboard. The community dashboard Node Exporter Full (ID 1860) is the one most home lab operators end up using long-term. It’s comprehensive without being cluttered, and it covers the metrics node_exporter actually exposes.
- Go to Dashboards → Import.
- Enter 1860 in the “Import via grafana.com” field and click Load.
- Select your Prometheus data source from the dropdown.
- Click Import.
You’ll land on a dashboard with panels for CPU usage, load average, RAM, swap, disk I/O, network throughput, and filesystem saturation. At the top, a dropdown lets you switch between hosts. What we found surprising when first setting this up is how immediately useful it is – within five minutes of import you’ll likely spot something you didn’t know about your own machines. A node sitting at 85% memory, a disk with unusually high write latency, a network interface you forgot was still active.
Hardening and Practical Considerations
A minimum viable setup is fine for getting started, but a few small additions make it production-worthy for a home context.
Firewall Rules
Port 9100 on each target should only be reachable from your monitoring host, not your whole network. On UFW:
sudo ufw allow from 192.168.1.10 to any port 9100 # replace with your monitoring host IP
sudo ufw deny 9100
Port 3000 (Grafana) and 9090 (Prometheus UI) should similarly be restricted to trusted LAN IPs or accessed through a VPN if you want remote access.
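With UFW that might look like the following, assuming your trusted LAN is 192.168.1.0/24 – substitute your own subnet. If your default incoming policy is deny (UFW's default), the allow rules alone are enough; otherwise add deny rules as with 9100:
sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 9090 proto tcp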
Data Residency
Everything in this stack runs locally. No metrics leave your network. For Canadians with concerns about where operational data lives – whether that’s under PIPEDA obligations or just personal preference – this is one of the concrete advantages of a self-hosted monitoring stack versus sending telemetry to a SaaS platform. Your data stays in your building.
Alerting
Prometheus includes an alertmanager component for routing alerts to email, Slack, or other channels. That’s a separate install, but once this base stack is stable it’s the natural next step. A simple alert rule for a node going down looks like this in /etc/prometheus/alert_rules.yml:
groups:
  - name: node_alerts
    rules:
      - alert: NodeDown
        expr: up{job="node"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Node {{ $labels.instance }} is down"
Reference it in prometheus.yml under a rule_files: block and reload Prometheus.
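Concretely, that means a top-level block in /etc/prometheus/prometheus.yml:
rule_files:
  - /etc/prometheus/alert_rules.yml
promtool can validate rule files too, and since the unit above defines no reload action, a restart is the simple way to apply the change:
promtool check rules /etc/prometheus/alert_rules.yml
sudo systemctl restart prometheus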
Resource Footprint
On a modest VM – 2 vCPUs, 2 GB RAM – this full stack (Prometheus + Grafana + local node_exporter) idles at roughly 350-500 MB RAM combined. Grafana alone uses around 100-150 MB. Prometheus memory scales with the number of active time series and the scrape interval. If you have a large number of targets or short scrape intervals, watch the prometheus_tsdb_head_series metric inside Prometheus itself to get a feel for growth.
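You can graph that metric in either UI, or spot-check it from the shell through Prometheus's HTTP query API:
curl -s 'http://localhost:9090/api/v1/query?query=prometheus_tsdb_head_series'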
Where to Go From Here
Once this stack is running and you’ve watched it for a week or two, the next natural additions are straightforward. cAdvisor (port 8080) adds Docker container metrics using the same Prometheus scrape model – just add it as another target. blackbox_exporter lets you monitor HTTP endpoints and check whether services are actually responding, not just running. For Proxmox users, the community prometheus-pve-exporter exposes VM and host metrics through the same scrape model without installing anything on each guest.
The dashboard import workflow works the same for all of these. Grafana’s dashboard library has IDs for cAdvisor (14282), blackbox exporter (7587), and Proxmox (10347), among hundreds of others. The pattern is always the same: install an exporter, add a scrape target, import a dashboard.
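As an illustration of the “add a scrape target” step, a cAdvisor job appended to the scrape_configs list in prometheus.yml would look like this – the IP is a placeholder for your Docker host:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['192.168.1.50:8080']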
From our experience, the real value of this setup isn’t the dashboards themselves – it’s having a 30-day history when something goes wrong. When a service falls over at 2 AM, the first question is always “was this gradual or sudden?” Prometheus answers that question cleanly, and Grafana makes the answer visible in about thirty seconds.
– Auburn AI editorial, Calgary AB
Related Auburn AI Products
Building a homelab or self-hosting content site? Auburn AI has practical kits:
- 500 Homelab and Self-Hosting Blog Titles ($27)
- Auburn AI Monitoring Stack ($37) – 6 production PowerShell scripts
- Podcast Automation Kit ($37)
- Browse all Auburn AI products
