How to Set Up Proxmox VE 9 on Old Enterprise Hardware: A Canadian Home Lab Walkthrough


AI assistance: Drafted with AI assistance and edited by Auburn AI editorial.

Used enterprise gear keeps showing up on Canadian Kijiji and eBay.ca listings for surprisingly little money – a Dell PowerEdge R720 or HP ProLiant DL380p Gen8 that once sat in a corporate data centre can land in your basement for $150-$300 CAD. The hardware is overbuilt, dual-socket capable, and carries ECC RAM that consumer boards don’t offer. The catch is that none of it was designed with a quiet living room in mind, and the software stack it ran in production is almost certainly not what you want at home. Proxmox VE is the obvious choice for turning that iron into a personal virtualisation host – it’s open source, Debian-based, and genuinely well-documented. This walkthrough covers a real install on that class of machine, from BIOS prep through first VM boot, with the rough edges included.

What You’re Actually Getting Into

Proxmox VE is a Type 1 hypervisor built on top of Debian GNU/Linux. It ships KVM for full virtualisation and LXC for containers, managed through a browser-based interface on port 8006. As of Proxmox VE 9 (released mid-2025, built on Debian 13 “Trixie”), the distribution ships a 6.14-series kernel. That matters for older enterprise hardware because some older RAID controllers and out-of-band management drivers had spotty support in the 5.x kernel series – support has generally improved, but it’s worth verifying your specific HBA or RAID card against the Proxmox community wiki before you start.

The R720 specifically uses a dual-socket LGA2011 board supporting Intel Xeon E5-2600 v1 and v2 processors. With v2 chips (Ivy Bridge-EP), you get reasonable power efficiency relative to the core count – a pair of E5-2650 v2 CPUs gives you 16 cores / 32 threads total at a 95W TDP each. ECC RDIMM support goes up to 768 GB. For home lab use, 64-128 GB is plenty and costs almost nothing used.

One thing worth flagging for Canadian readers: if you’re running any workloads that touch personal information – even a self-hosted Nextcloud for family files – PIPEDA still applies. Keeping that data on-premises in Calgary or wherever you are is one approach to data residency, but you’re still responsible for reasonable security practices. That’s outside the scope of this post, but it’s worth not forgetting.

Pre-Install: BIOS and iDRAC Prep

Enterprise servers have out-of-band management controllers. On Dell it’s iDRAC, on HP it’s iLO. Before you touch the OS install, spend twenty minutes here. These controllers are genuinely useful – they let you mount a virtual ISO, view a remote console, and power-cycle the machine without being physically present. On older gear the default credentials are often still root / calvin for iDRAC 7. Change that immediately.

Update the iDRAC firmware if you can. Dell’s support site provides firmware bundles. iDRAC 7 on the R720 supports virtual media through a Java-based viewer or a more modern HTML5 path if you update to a recent enough firmware version. Getting the HTML5 console working saves a lot of pain with modern browsers blocking Java applets.
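
Once you’ve changed the credentials, it’s worth confirming out-of-band access actually works from another machine before moving on. A quick sketch using ipmitool – the IP address and password below are placeholders for your own values:

```shell
# Placeholders: substitute your iDRAC IP and the new credentials you set
ipmitool -I lanplus -H 192.168.1.120 -U root -P 'your-new-password' chassis status
ipmitool -I lanplus -H 192.168.1.120 -U root -P 'your-new-password' sdr type Temperature
```

If `chassis status` returns power state and fault flags, IPMI over LAN is working and you can script against it later (the fan-control section below relies on exactly this channel).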

In the BIOS (System Setup → Processor Settings), make sure the following are enabled:

  • Virtualization Technology (VT-x) – required for KVM
  • VT for Direct I/O (VT-d) – required for PCIe passthrough later
  • SR-IOV – worth enabling now if you have a supported NIC and plan to use it
  • Hyperthreading – personal preference, leave it on for general workloads
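
Once any Linux environment boots on the machine (the Proxmox installer’s shell counts), you can sanity-check that the VT-x toggle actually took effect:

```shell
# Output "vmx" means the VT-x flag is exposed to the OS; no output means
# the BIOS setting didn't stick (or you're not on an Intel part)
grep -m1 -o vmx /proc/cpuinfo
```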

For storage, decide now whether you want Proxmox to see individual disks or a RAID volume. The PERC H710/H710P controller that ships with many R720s does hardware RAID. Our reading suggests that for home lab use, passing individual disks through to Proxmox via HBA mode (also called IT mode or JBOD mode, depending on the controller) gives you more flexibility – ZFS can then manage the disks directly, which is usually preferable. Flashing a PERC to HBA mode is possible, but it’s a separate process involving the LSI firmware, and it carries some risk. If you’d rather leave it in RAID mode, create a single RAID 0 volume per disk so Proxmox sees them individually through the RAID controller. Not ideal, but it works.
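
Whichever route you take, verify from a shell that the disks are exposed the way you intended. A rough check – device paths and the megaraid disk index are examples:

```shell
# Each physical disk should show up as its own block device
lsblk -d -o NAME,SIZE,MODEL,SERIAL

# Behind a PERC still in RAID mode, SMART data needs the megaraid
# passthrough syntax; in HBA/IT mode plain `smartctl -i /dev/sdX` works
smartctl -i -d megaraid,0 /dev/sda
```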

Creating the Install Media and Booting

Download the Proxmox VE 9 ISO from the official Proxmox downloads page. Verify the SHA256 checksum before writing it to USB. On Linux or macOS:

sha256sum proxmox-ve_9.0-1.iso

Compare against the checksum posted on the download page. Write to USB with dd on Linux/macOS or Rufus on Windows. For dd:

sudo dd if=proxmox-ve_9.0-1.iso of=/dev/sdX bs=4M status=progress conv=fsync

Replace /dev/sdX with your actual USB device – double-check with lsblk first. Writing to the wrong device here is a bad day.

On the R720, press F11 during POST to get the boot menu, or configure boot order through iDRAC’s virtual media if you’re mounting the ISO remotely. The Proxmox installer is graphical and straightforward. A few decisions matter:

  1. Target disk: Install Proxmox itself on a small, dedicated drive – a cheap 120-240 GB SATA SSD in a bay works fine. The R720’s internal dual SD module can technically boot an OS, but SD cards wear quickly (more on that below); prefer an SSD. Don’t install it on the same pool you’ll use for VM storage.
  2. Filesystem for the OS disk: The installer offers ext4, XFS, and ZFS in several RAID layouts. For a single OS drive, ext4 is fine. ZFS on the OS disk adds overhead and complexity that doesn’t pay off on a single drive.
  3. Network config: Set a static IP. Pick something outside your router’s DHCP range. The installer sets up a Linux bridge (vmbr0) automatically – this is what your VMs will use for network access.
  4. Hostname: Use a proper FQDN format even at home, e.g. pve01.home.lab. Proxmox clustering later will thank you.
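
For reference, the bridge the installer creates ends up in /etc/network/interfaces and looks roughly like this – the NIC name and addresses below are examples, not values the installer will necessarily pick:

```
# /etc/network/interfaces (excerpt) -- installer-generated bridge
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Knowing this file exists makes later changes (VLANs, a second bridge) much less mysterious than clicking through the UI.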

First Boot: Subscription Nag and Repository Fix

Proxmox VE is free to use but it has an enterprise repository enabled by default that requires a paid subscription. Without a subscription key, apt update will throw a 401 error. This is the first thing to fix.

SSH into your new host (or use the web console at https://YOUR_IP:8006 and open a shell). Run:

# Proxmox VE 9 ships its repository definitions in deb822 format
# (.sources files) rather than the old one-line .list files.
# Disable the enterprise repo by renaming it so apt ignores it:
mv /etc/apt/sources.list.d/pve-enterprise.sources \
   /etc/apt/sources.list.d/pve-enterprise.sources.disabled

# Same for the Ceph enterprise repo, if present
mv /etc/apt/sources.list.d/ceph.sources \
   /etc/apt/sources.list.d/ceph.sources.disabled 2>/dev/null

# Add the no-subscription repo (fine for home lab use)
cat > /etc/apt/sources.list.d/pve-no-subscription.sources <<'EOF'
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF

apt update && apt dist-upgrade -y

The no-subscription repository receives the same package versions as the enterprise repo, typically a week or two later. For a home lab, that lag is irrelevant.

While you’re in the shell, check that IOMMU is active – this is required for PCIe passthrough:

dmesg | grep -e DMAR -e IOMMU

You should see lines referencing DMAR and IOMMU enabled. If nothing comes back, revisit the VT-d setting in BIOS. You may also need to add intel_iommu=on to the kernel command line in /etc/default/grub, then run update-grub and reboot. On Proxmox 9 with a recent kernel this is usually detected automatically, but older hardware sometimes needs the explicit flag.
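
With IOMMU active, it’s also worth a look at how devices are grouped, since everything sharing an IOMMU group must be passed through to the same VM:

```shell
# One symlink per device, organised by IOMMU group number
find /sys/kernel/iommu_groups/ -type l | sort -V
```

If your GPU or NIC shares a group with half the motherboard, passthrough for that device will be painful; separate groups mean you’re in good shape.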

Storage Configuration: ZFS Pool for VM Disks

This is where things get interesting. If you have multiple drives available for VM storage – say, four 600 GB SAS drives that came with the server – ZFS is worth setting up. ZFS gives you copy-on-write snapshots, data integrity checking, and flexible pool management. Proxmox has native ZFS support baked in.

In the Proxmox web UI: Datacenter → your node → Disks → ZFS → Create: ZFS. Or via shell:

# List available disks
lsblk -d -o NAME,SIZE,MODEL

# Create a RAID-Z1 pool named vmdata across three disks
# (prefer stable /dev/disk/by-id/ paths over /dev/sdX, which can renumber)
zpool create -f vmdata raidz1 /dev/sdb /dev/sdc /dev/sdd

# Enable compression (lz4 is fast and effective)
zfs set compression=lz4 vmdata

# Check status
zpool status vmdata

RAID-Z1 tolerates one drive failure, similar to RAID 5. For four drives, RAID-Z2 (two drive failure tolerance) makes more sense. What we found surprising the first time we set this up on server-grade SAS drives is how quickly the initial scrub finishes – consumer HDDs on the same RAID-Z config take several times longer.

Add the pool as a Proxmox storage target: Datacenter → Storage → Add → ZFS, select your pool name. Now VM disks and container volumes can live there.

For the R720’s internal SD card slot (if populated), treat it as OS-only. SD cards don’t handle the write load of VM storage gracefully and will fail faster than you expect.
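
One more ZFS knob worth knowing on a VM host: by default the ARC (ZFS’s read cache) can grow to roughly half of RAM, which competes directly with your VMs. Capping it is a one-liner – the 8 GiB figure here is an assumption, size it to your own workload:

```shell
# 8 GiB = 8 * 1024^3 = 8589934592 bytes; adjust to taste
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # rebuild initramfs so the limit applies at boot
```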

Creating Your First VM

With storage configured, create a VM through the web UI: Create VM button in the top right. Walk through the wizard:

  • OS tab: Upload an ISO to local storage first (local → ISO Images → Upload), then select it here. A current Ubuntu Server 24.04 LTS ISO works well for a test VM.
  • System tab: For Linux VMs, leave the default BIOS (SeaBIOS) or switch to OVMF (UEFI) if your OS requires it. Enable the QEMU Guest Agent checkbox – you’ll install the agent inside the VM later.
  • Disks tab: Select your ZFS pool as storage. Use VirtIO SCSI for the disk interface – it’s faster than emulated IDE or SATA. 32 GB is enough for a test Ubuntu install.
  • CPU tab: For general VMs, the x86-64-v2-AES CPU type is a safe baseline on Xeon E5 v2 hardware. Using host CPU type passes through all host CPU flags, which is faster but means the VM can’t migrate to different hardware.
  • Memory tab: Allocate what you need. Enable ballooning for flexible memory reclamation across multiple VMs.
  • Network tab: Bridge to vmbr0, VirtIO network device.

Start the VM, open the console, and run through the OS install. After Ubuntu is up, install the QEMU guest agent:

sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

This lets Proxmox see the VM’s IP address in the web UI and enables consistent snapshots by quiescing the filesystem before snapshotting.
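
You can confirm the agent channel works from the Proxmox shell (assuming the VMID 100 from a test VM):

```shell
qm agent 100 ping                    # silent success means the agent responds
qm agent 100 network-get-interfaces  # returns the guest's interfaces and IPs as JSON
```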

A Few Things Worth Knowing Before You Run Production Workloads

Enterprise servers are loud. An R720 at full fan speed will hit 65-75 dB(A). Dell’s iDRAC has fan control, and there are community scripts that set manual fan speed targets to bring noise down significantly at home lab thermal loads. Search the Proxmox forums for “R720 fan control” – the scripts use IPMI raw commands through iDRAC to override the automatic fan curve. Use them carefully; the automatic curve exists to protect the hardware.
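
For reference, those community scripts boil down to IPMI raw commands like the following. They are vendor-specific, undocumented by Dell, and widely shared in R720 forum threads – treat them as an assumption to verify against your own firmware, and only use them with temperature monitoring in place:

```shell
# Take manual control of the fans (iDRAC 7 raw commands from community docs)
ipmitool raw 0x30 0x30 0x01 0x00
# Set roughly 20% duty cycle (0x14 hex = 20 decimal)
ipmitool raw 0x30 0x30 0x02 0xff 0x14
# Hand control back to the automatic curve
ipmitool raw 0x30 0x30 0x01 0x01
```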

Power consumption matters too. An idle R720 with two populated CPU sockets draws roughly 100-150 W depending on RAM and disk count. At Alberta’s residential electricity rates (roughly $0.18-0.22 CAD/kWh depending on your provider and rate class), that works out to roughly $160-290 CAD per year at idle. Factor that into the total cost of ownership math before filling every RAM slot.
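
The back-of-envelope math is worth doing for your own numbers; a minimal sketch at the low end of the range (assumed: 100 W continuous draw at $0.18/kWh):

```shell
# Annual electricity cost for a machine that idles at a constant draw
WATTS=100
RATE_CENTS=18                        # cents per kWh
KWH=$(( WATTS * 8760 / 1000 ))       # 8760 hours in a year, W -> kW
COST=$(( KWH * RATE_CENTS / 100 ))   # integer dollars, rounds down
echo "${KWH} kWh/yr, about \$${COST} CAD/yr"
# prints: 876 kWh/yr, about $157 CAD/yr
```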

Backups: Proxmox has a built-in backup tool under Datacenter → Backup. Schedule nightly backups of your VMs to a separate storage location – an external USB drive or a second machine running Proxmox Backup Server. The 3-2-1 rule applies at home just as much as anywhere else.
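
Scheduled jobs are easiest through the UI, but a one-off backup from the shell is a single vzdump call – the storage name here is an example:

```shell
# Snapshot-mode backup of VM 100, zstd-compressed, to a storage named "backup"
vzdump 100 --storage backup --mode snapshot --compress zstd
```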

From our experience, the biggest time sink on first setup isn’t the OS install – it’s working out the storage layout and figuring out why IOMMU isn’t initialising. Budget an afternoon for the install and a weekend for getting comfortable with how Proxmox manages storage, networking, and backups. The official Proxmox documentation is thorough and worth reading in full, not just skimming for answers to immediate problems.

Old enterprise iron running Proxmox at home is genuinely practical – the hardware is durable, the software is capable, and the total outlay is modest – but the gap between “it boots” and “it runs reliably” is mostly filled by understanding what you’ve actually set up.

– Auburn AI editorial, Calgary AB


