Post No. 2
Proxmox × NVIDIA Passthrough = Perfect?
By Wes — 10/07/25
Can you tell that I suck at titles? I would hope so because it’s obvious. But anyway—is Proxmox a good idea for your main workstation, homelab, or even your daily PC if you’re weird like me? In my opinion, YES.
I’ve done this on a junk 2012 Dell and now on my new beef-stack (is this even a word?) computer. If you bounce between multiple machines, dual-boot, or just want one box to do everything, Proxmox with GPU passthrough lets you use your hardware to the max. You can run little VMs for NAS and Plex 24/7 (like I am), or bigger Windows gaming VMs with a passed-through GPU. With proper passthrough, performance loss is typically low single digits compared to bare metal (GPU is the big variable), and for the games that don’t love running in a VM, QEMU gives you knobs for workarounds. If this doesn’t sound up your alley, you may not want to read the spiel that is to come—click off now.
Oh, you’re still here. Okay… let’s get started! Also, this is the start of videos based on posts being made. I will not have it done the second it posts, but I’ll edit the top with the link to the video when I do.
The build & the plan (a.k.a. what we’re actually doing)
You’ve got a system set up: new-ish Intel or AMD board and CPU, fast RAM (32 GB+ if you want multiple VMs at once), and a large boot disk—preferably 2 TB+ (I’m on a 4 TB NVMe 4.0). And yes: an NVIDIA card. Still mad from yesterday, sorry. But trust me, this will be smoooooth sailing.
High-level plan:
Install Proxmox 9 with UEFI, systemd-boot, and ZFS.
Flip the right firmware (BIOS/UEFI) switches: SVM/VT-x, IOMMU/VT-d, Above 4G Decoding, Resizable BAR.
Set kernel parameters for IOMMU.
Bind the GPU’s video and audio functions to vfio-pci early in boot.
Create a VM with modern chipset (q35) and UEFI firmware (OVMF), attach the GPU, install the guest OS and NVIDIA drivers.
Promote the GPU to Primary, set Display to none, and enjoy native output.
I’ll keep my commentary; I’ll also tell you why each setting matters so you can fix stuff without guesswork.
Firmware (BIOS/UEFI): the switches people forget
Enable these before you do anything else:
AMD: SVM (AMD-V) and IOMMU (AMD-Vi).
Intel: Intel VT-x and Intel VT-d.
PCIe features: Above 4G Decoding and Resizable BAR if available.
Why: VT-x/SVM turn on virtualization. VT-d/IOMMU is the “map PCIe devices to guests” part. Above 4G Decoding and Resizable BAR expand address space so big GPUs/HBA mappings aren’t cramped. This reduces crazy IOMMU group issues and BAR-related boot weirdness.
Proxmox 9 install (UEFI + systemd-boot + ZFS)
Write the installer to USB/DVD (Rufus on Windows or "dd if=proxmox-ve.iso of=/dev/sdX bs=4M oflag=sync status=progress" on Linux). Disable Secure Boot and TPM for the install; keep UEFI enabled. Boot → Graphical install → accept EULA.
Target Harddisk → Options → Filesystem: pick “zfs (RAID0)” for a single disk.
Careful: if you see multiple drives, make sure only the intended boot disk is assigned as Harddisk 0. If you leave another disk selected under a different slot, you will wipe it. Double-check on the summary page so you don’t delete your cat pictures.
Set locale, keyboard, password, and email. On the networking page, pick the NIC with the green link dot. I like reserving a DHCP lease in my router (nice and boring). Proceed with install.
On reboot, the console shows the Web UI URL (IP:port) and the shell. Log in at the shell as root and confirm networking with "ip a". From your other machine, SSH in with "ssh root@SERVER.IP.ADDRESS" (accept the fingerprint with yes).
Confirm bootloader & prep IOMMU (systemd-boot path)
Proxmox 9 on UEFI with ZFS defaults to systemd-boot. Confirm:
"proxmox-boot-tool status"
If you see UEFI/systemd-boot targets, good. (If you’re on GRUB somehow, I’ve got a GRUB section later.)
Now enable IOMMU at the kernel level:
"nano /etc/kernel/cmdline"
You’ll see something like:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
Append one of these at the end of the same line (don’t make a new line):
AMD: amd_iommu=on iommu=pt
Intel: intel_iommu=on iommu=pt
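For example, on an AMD system the finished line ends up looking something like this (all one line; your pool name may differ):
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet amd_iommu=on iommu=pt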
Apply and reboot:
"proxmox-boot-tool refresh"
"reboot"
Verify it took:
"dmesg | grep -e IOMMU -e DMAR"
AMD prints AMD-Vi; Intel prints DMAR. If you see nothing, your firmware toggles aren’t right, or the cmdline didn’t apply.
Why “iommu=pt”? It sets pass-through translation for devices not assigned to the guest and reduces overhead for the host in mixed workloads.
Find your GPU endpoints (video + audio)
List GPU and audio functions:
"lspci -nn | grep -i -E \"nvidia|audio|vga\""
Example (my dusty but reliable Quadro K620):
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K620] [10de:13bb] (rev a2)
05:00.1 Audio device [0403]: NVIDIA Corporation GM107 High Definition Audio Controller [10de:0fbc] (rev a1)
We need both functions. Most modern boards put them in the same IOMMU group. If yours doesn’t, don’t panic—there’s ACS override (later), but try simple fixes first: different slot, toggling Above 4G/Resizable BAR, or disabling unused onboard controllers to unclutter groups.
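If you want to see exactly who is sharing a group with whom before you commit, a quick sysfs loop spells it out (a rough sketch — it just walks /sys/kernel/iommu_groups and has lspci describe each member):
"for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d%/devices/*}
  echo \"group ${g##*/}: $(lspci -nns ${d##*/})\"
done"
If the GPU's video and audio functions land in the same group with nothing else important tagging along, you're set.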
Load VFIO early & bind the GPU to it
Tell the host to load the VFIO modules on boot (on newer kernels, vfio_virqfd has been folded into vfio, so a "module not found" warning about it is harmless):
"tee /etc/modules-load.d/vfio.conf >/dev/null <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF"
Now get the vendor:device IDs for your GPU endpoints:
"lspci -nn"
From my example: 10de:13bb (video) and 10de:0fbc (audio).
Create a binding rule:
"nano /etc/modprobe.d/vfio-pci-ids.conf"
Insert one line:
options vfio-pci ids=10de:13bb,10de:0fbc disable_vga=1
No spaces, comma-separated IDs. disable_vga=1 helps pull the VGA device early so the host doesn’t claim it.
Rebuild and refresh boot entries, then reboot:
"update-initramfs -u -k all || true"
"proxmox-boot-tool refresh"
"reboot"
After reboot, confirm the kernel bound vfio-pci to both functions:
"lspci -nnk -s 05:00.0"
"lspci -nnk -s 05:00.1"
Look for:
Kernel driver in use: vfio-pci
If you see nouveau or nvidia instead, skip down to Host GPU driver blacklisting and come back.
Why bind by ID instead of by slot? Vendor:device bindings survive slot shuffles and BIOS re-enumeration. It’s less fragile than hardcoding PCI addresses.
Create the VM & attach the GPU (the part you came for)
In the Proxmox Web UI:
Create VM
Machine: “q35” (modern PCIe topology matters for real GPUs)
Firmware: “OVMF (UEFI)”
SCSI Controller: “VirtIO SCSI single” (good default)
Disk: size to taste; check SSD emulation
CPU Type: “host” (exposes host features; helps perf and compatibility)
Cores/Threads: allocate with headroom for the host and background services
Memory: fixed allocation is safer for games/low-latency tasks
Don’t start after creation (we have edits to do).
Add the GPU
VM → Hardware → Add → PCI Device
Pick the video function (e.g., 05:00.0)
Advanced: check ROM-Bar and PCI-Express
Repeat Add → PCI Device for the audio function (e.g., 05:00.1) with the same options.
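If you like double-checking from the shell, all of this lands in /etc/pve/qemu-server/<vmid>.conf. After the steps above, the passthrough-relevant lines should look roughly like this (a sketch only — the VMID, core/memory counts, and the 05:00 addresses are my placeholders):
bios: ovmf
machine: q35
cpu: host
cores: 8
memory: 16384
scsihw: virtio-scsi-single
hostpci0: 0000:05:00.0,pcie=1
hostpci1: 0000:05:00.1,pcie=1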
Install the guest OS
Mount your Windows or Linux ISO.
Windows: also mount the VirtIO ISO to install storage/NIC drivers either during setup (Load driver) or right after first boot.
Complete installation normally.
Install NVIDIA drivers inside the guest
Use the correct branch for your card generation. Newer RTX cards: current mainstream. Older cards (like my K620): legacy branch.
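How you install depends on the guest. Windows is just the normal installer from NVIDIA; on a Debian-style Linux guest it can be as simple as the distro package (a sketch, assuming a Debian 12 guest with the non-free repos enabled — adjust for your distro and card's branch):
"apt update && apt install nvidia-driver"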
Make the real GPU primary (after drivers are happy)
Shut down the VM.
VM → Hardware → PCI Device (video) → Edit → check “Primary GPU.”
VM → Hardware → Display → Edit → set “Graphic card” to “none.”
Optional: Add → USB Device to pass through a keyboard/mouse for direct control on the physical monitor.
Power on. If everything’s right, you should get beautiful native resolution and refresh rate, straight from the GPU to your display. It’s perfect, isn’t it? You can repeat the same flow for other VMs if your hardware stack allows it.
Why those checkboxes matter (quick theory so you can fix stuff fast)
ROM-Bar: Exposes the device’s option ROM mapping to the guest; some GPUs/firmwares need it to initialize cleanly.
PCI-Express: Forces PCIe semantics; modern GPUs expect PCIe, not legacy PCI bridges.
Primary GPU + Display: none: Tells QEMU to hand off display ownership to the passed-through GPU and stop emulating a virtual display adapter. You won’t see the guest in the Proxmox console after this—expected.
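If you’re curious what those checkboxes actually write out, they all end up in /etc/pve/qemu-server/<vmid>.conf: PCI-Express becomes pcie=1, ROM-Bar maps to rombar (on by default, so it usually isn’t shown), Primary GPU becomes x-vga=1, and Display "none" becomes vga: none. With my example card the finished entries look roughly like this (treat it as a sketch — flag spellings can shift between Proxmox versions):
hostpci0: 0000:05:00.0,pcie=1,x-vga=1
hostpci1: 0000:05:00.1,pcie=1
vga: none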
Host GPU driver blacklisting (if the host won’t let go)
If lspci -nnk shows nvidia or nouveau instead of vfio-pci, add a blacklist:
"tee /etc/modprobe.d/blacklist-gpu.conf >/dev/null <<'EOF'
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
blacklist nouveau
options nouveau modeset=0
EOF"
Then:
"update-initramfs -u -k all || true"
"proxmox-boot-tool refresh"
"reboot"
Re-check binding with "lspci -nnk -s BUS:DEV.FUNC".
ACS override (only if your IOMMU groups are a mess)
If video and audio aren’t in the same group and slot/firmware tweaks didn’t fix it, you can widen groups:
Add pcie_acs_override=downstream,multifunction to the existing line in /etc/kernel/cmdline.
"proxmox-boot-tool refresh"
"reboot"
Note: ACS override reduces isolation; fine on a homelab box where you accept the tradeoff, but know what you’re doing.
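With both tweaks in place, the AMD cmdline from earlier would end up reading something like (still all one line; pool name may differ):
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction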
Windows specifics, stability polish, and “it’s weird, but it works”
VirtIO everywhere you can: storage and network on VirtIO drivers are faster and lower CPU.
Don’t set Display = none too early: install NVIDIA drivers first, then promote to Primary + none.
Code 43 folklore: Modern Proxmox/QEMU plus Quadros rarely hit it. If you do on a GeForce, hiding the KVM/hypervisor signature usually fixes it (e.g., adding CPU flags). One simple approach is adding a CPU arg (there’s a quick way to apply it right after this list):
"args: -cpu host,kvm=off"
Many users won’t need this at all—keep it in your back pocket.
MSI interrupts (noise reduction): After drivers are in, enabling MSI for the GPU/audio in Windows can reduce stutter. There are small tools to flip this in the registry; optional, not mandatory.
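About that Code 43 arg: if you do end up needing it, you don’t have to hand-edit anything — it can be set from the host shell (a sketch, assuming your VM ID is 100):
"qm set 100 --args '-cpu host,kvm=off'"
Proxmox also has a more native spelling of the same idea (cpu: host,hidden=1 in the VM config); either way, only bother if you actually see Code 43 in Device Manager.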
Sanity commands you’ll use ten times
Confirm IOMMU:
"dmesg | grep -e IOMMU -e DMAR"List NVIDIA + audio endpoints:
"lspci -nn | grep -i -E \"nvidia|audio|vga\""Verify driver binding:
"lspci -nnk -s 05:00.0" and "lspci -nnk -s 05:00.1"Show IOMMU groups (overview):
"find /sys/kernel/iommu_groups/ -type l | sort"Network reality check:
"ip a"Boot status (systemd-boot):
"proxmox-boot-tool status"Kernel cmdline edit + apply:
"nano /etc/kernel/cmdline" → "proxmox-boot-tool refresh" → "reboot"
If you used GRUB instead of systemd-boot (it happens)
Edit /etc/default/grub:
"nano /etc/default/grub"
Append amd_iommu=on iommu=pt or intel_iommu=on iommu=pt to GRUB_CMDLINE_LINUX_DEFAULT="..."
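So the finished line on an AMD system would read something like (Intel swaps in intel_iommu=on):
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"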
"update-grub"
"reboot"
The VFIO module list, ID binding, and VM steps are identical. Use "dmesg | grep -e IOMMU -e DMAR" to confirm it took.
Troubleshooting quick hits (so you don’t panic)
Black screen on first GPU boot:
Install NVIDIA drivers first, then set Primary + Display none. Check ROM-Bar and PCI-Express are ticked. Try cold booting the VM (shutdown → start, not reboot).
Host keeps owning the GPU:
Confirm the vfio-pci binding, then apply the blacklist and rebuild initramfs.
IDs wrong:
Re-copy from "lspci -nn". Use the square-bracketed [vendor:device] numbers, not the bus address.
Performance meh:
CPU Type = host; give the VM enough cores and RAM; avoid the host swapping; keep your VM disk on fast storage; consider CPU pinning only after the basics are correct.
Why the Quadro K620 (and why Windows 7)?
I want direct display to a dedicated monitor on an older Windows 7 VM for a specific use case (and nostalgia). I promise I’ll explain the whole “2014 GPU installed, also goes to Windows 7 on the host as well” decision in the next post. The K620 behaves well with VFIO, which makes it perfect for demonstrating the process. Your modern RTX/Quadro follows the same steps.
Wrap
Assuming we’ve done everything right, you should see a beautiful desktop at the max refresh rate and resolution your monitor/TV can handle. It’s smooth, it’s repeatable, and you can clone the same pattern for more VMs. The sky is the limit.
Signing off for the 2nd time in this blog’s history. It’s Wes, and I hope that maybe I was able to help. See you next time!