Tag Archives: PCI passthrough

Windows VM with GTX 1070 GPU passthrough in ProxMox 5

I started this blog four years ago to document my highly technical adventures – mainly so I could reproduce them later. One of my first articles dealt with GPU passthrough / virtualization. It was a complicated ordeal with Xen. Now that I’ve switched to KVM (ProxMox) I thought I’d give it another go. It’s still complicated but not nearly as much this time.

To get my Nvidia GTX 1070 GPU properly passed through to a Windows VM hosted by ProxMox 5 I simply followed this excellent guide written by sshaikh. I will summarize what I took from his guide to get my setup to work.

  1. Ensure VT-d is supported and enabled in the BIOS
  2. Enable IOMMU on the host
    1. Append intel_iommu=on to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub
    2. Save your changes by running
      update-grub
  3. Blacklist NVIDIA & Nouveau kernel modules so they don’t get loaded at boot
    1. echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
      echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
    2. Save your changes by running
      update-initramfs -u
  4. Add the following lines to /etc/modules
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
  5. Determine the PCI address of your GPU
    1. Run
      lspci -v

      and look for your card. Mine was 01:00.0 & 01:00.1. You can omit the function number (the part after the period) to match them both in one go – in that case it would be 01:00

    2. Run lspci -n -s <PCI address> to obtain the vendor and device IDs. Example:
      lspci -n -s 01:00
      01:00.0 0300: 10de:1b81 (rev a1)
      01:00.1 0403: 10de:10f0 (rev a1)
  6. Assign your GPU to vfio driver using the IDs obtained above. Example:
    echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf
  7. Reboot the host
  8. Create your Windows VM using the UEFI BIOS hardware option (not the default SeaBIOS), but do not start it yet. Modify /etc/pve/qemu-server/<vmid>.conf and ensure the following lines are in the file, creating or modifying existing entries as necessary.
    bios: ovmf
    machine: q35
    cpu: host,hidden=1
    numa: 1
  9. Install Windows, including the VirtIO drivers. Be sure to enable Remote Desktop.
  10. Pass through the GPU.
    1. Modify /etc/pve/qemu-server/<vmid>.conf and add
      hostpci0: <device address>,x-vga=on,pcie=1. Example:

      hostpci0: 01:00,x-vga=on,pcie=1
  11. Profit.
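
For reference, after steps 8 and 10 the relevant portion of my /etc/pve/qemu-server/<vmid>.conf looked like this (the PCI address will differ on your system):

bios: ovmf
machine: q35
cpu: host,hidden=1
numa: 1
hostpci0: 01:00,x-vga=on,pcie=1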


Code 43

I received the dreaded code 43 error after installing CUDA drivers. The workaround was to add hidden=1 to the CPU option of the VM:

cpu: host,hidden=1

Blue screening when launching certain games

Heroes of the Storm and StarCraft II would consistently blue screen on me.

The fix, as outlined here, was to create /etc/modprobe.d/kvm.conf and add the parameter "options kvm ignore_msrs=1":

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

GPU optimization

Give the VM as many CPUs as the host has (in my case 8) and enable NUMA for the CPU. This appeared to make my GTX 1070 perform better in the VM – near native performance.
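
In VM config terms that amounts to something like the following in /etc/pve/qemu-server/<vmid>.conf (a sketch – adjust cores to match your host):

cores: 8
numa: 1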

Migrate from Xenserver to Proxmox

I was dismayed to see Citrix’s recent announcement about Xenserver 7.3 removing several key features from the free version. Xenserver’s free features are the reason I switched over to them in the first place back in 2014. Xenserver has been rock solid; I haven’t had any complaints until now. Their removal of XenMotion and migration in the free version forced me to look elsewhere for my virtualization needs.

I’ve settled on ProxMox, which is KVM-based. Their documentation is excellent and it has all the features I need – for free. I’m also in love with their web-based management – no more Windows fat client!

Below are my notes on how I successfully migrated all my Xenserver VMs over to the ProxMox Virtual Environment (PVE).

  • Any changes to network interfaces, such as bringing them up, require a reboot of the host
  • If you have an existing ISO share, you can create a directory called “template” in your ISO repository folder, then inside it symlink “iso” back to your ISO folder. Proxmox looks inside template/iso for ISO images for whatever storage you configure.
  • Do not build your ProxMox host on ZFS unless you have tons of RAM. If you don’t have enough RAM you will run into huge CPU load, making the system unresponsive during high disk load such as VM copies / backups. More reading here.

Cluster of two

ProxMox’s clustering is a bit different – better, in my opinion. No more master/slave dynamic – every node is a master. Important reading: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster

A two-node cluster, like mine, creates some problems, though: if one node goes down, the other can’t do anything to the pool (create VMs, run backups) until it comes back up. In my situation I have one primary host that is up all the time, and I bring the secondary host up only when I want to do maintenance on the first.

In that specific situation you can still designate a “master” of sorts by increasing the number of quorum votes it gets from 1 to 2. That way, when the secondary node is down, the primary node can still do cluster operations, because the default number of votes needed to stay quorate is 2. See here for more reading on the subject.

On either host (they must both be up and in the cluster for this to work):

vi /etc/pve/corosync.conf

Find your primary server in the nodelist section and change its vote count:

quorum_votes: 2

Also find the quorum section and add expected_votes: 2

Make sure to increment the config_version number (bottom of the file). Now if your secondary is down you can still operate the primary.
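
A sketch of what the relevant sections end up looking like (node names here are hypothetical – only quorum_votes and expected_votes matter):

nodelist {
  node {
    name: pve-primary
    nodeid: 1
    quorum_votes: 2
  }
  node {
    name: pve-secondary
    nodeid: 2
    quorum_votes: 1
  }
}

quorum {
  provider: corosync_votequorum
  expected_votes: 2
}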

Migrating VMs

I migrated my Xen VMs to KVM by creating VMs with identical specs in PVE, copying the VHD files from the Xen host to the new PVE host, running qemu-img to convert them to RAW format, and then using dd to copy the raw data over to the corresponding empty VM disks. Depending on the OS of the VM there was also some post-copy tweaking to do.

From shared storage

Grab the VHD file of each Xen VM (coalesce any snapshots into the base disk first) and convert it to raw format:

qemu-img convert <VHD_FILE_NAME>.vhd -O raw <RAW_FILE_NAME>.raw
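
For example, assuming the copied-over disk file is named vm1.vhd:

qemu-img convert vm1.vhd -O raw vm1.raw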

Create a new VM with identical configuration, especially disk size. Go to the hardware tab and take note of the name of the disk. One of mine, for example, referenced vm-100-disk-1 in its disk entry (the part between local-zfs and discard=on). This is the name of the disk we want to overwrite with data from our Xenserver VM’s disk.

Next, figure out the full path of this disk on your Proxmox host:

find / -name "vm-100-disk-1*"

The result in my case was /dev/zvol/rpool/data/vm-100-disk-1

Take the name and put it in the following command to complete the process:

dd if=<RAW_FILE_NAME>.raw of=/dev/zvol/rpool/data/vm-100-disk-1 bs=16M

Once that’s done you can delete your .vhd and .raw files.

From local / LVM storage

If your Xen VMs are stored as LVM devices instead of VHD files, get the UUID of the virtual disk by running xe vdi-list and finding the name of the hard disk of the VM you want. It’s helpful to rename the hard disks to something easy to spot; I chose the word migrate.

xe vdi-list|grep -B3 migrate
uuid ( RO) : a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f
 name-label ( RW):  migrate

Once you have the UUID of the drive, you can use lvscan to find the full LVM device path of that disk:

lvscan|grep a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f
 inactive '/dev/VG_XenStorage-1ada0a08-7e6d-a5b6-d0b4-515e251c0c75/VHD-a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f' [10.03 GiB] inherit

Shut down the corresponding VM and reactivate its logical volume (Xen deactivates logical volumes when the VM is shut off):

lvchange -ay <full /dev/VG_XenStorage path discovered above>

Now that we have the full LVM path and the volume is active, we can use dd over SSH to transfer the image to our proxmox server:

sudo dd if=<full /dev/VG_XenStorage path discovered above> | ssh <IP_OF_PROXMOX_SERVER> dd of=<LOCATION_ON_PROXMOX_THAT_HAS_ENOUGH_SPACE>/<NAME_OF_VDI_FILE>.vhd

Then follow the vhd -> raw -> dd to Proxmox drive process described in the From shared storage section.

Post-Migration tweaks

For the most part Debian-based systems moved over perfectly without any needed tweaks; some VMs changed interface names due to network device changes – eth0 turned into ens8. I had to modify /etc/network/interfaces to change eth0 to ens8 to get virtio networking working.


All my CentOS VMs failed to boot after migration due to a lack of virtio disk drivers in the initial RAM disk. The fix is to change the disk hardware to IDE mode (they boot fine this way) and then modify the initrd of each affected host:

# rebuild the current initramfs with the virtio drivers included
sudo dracut --add-drivers "virtio_pci virtio_blk virtio_scsi virtio_net virtio_ring virtio" -f -v /boot/initramfs-`uname -r`.img `uname -r`
# make sure future kernel updates include them as well
sudo sh -c "echo 'add_drivers+=\" virtio_pci virtio_blk virtio_scsi virtio_net virtio_ring virtio \"' >> /etc/dracut.conf"
sudo shutdown -h now

Once that’s done you can detach the hard disk and re-attach it as SCSI (virtio). Don’t forget to modify the VM options and change the boot order from ide0 to scsi0, as in the sketch below.
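
The relevant VM config lines end up looking something like this after the swap (a sketch – the storage name, disk number, and size will differ):

bootdisk: scsi0
scsihw: virtio-scsi-pci
scsi0: local-zfs:vm-101-disk-1,size=20G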

Arch Linux

One of my Arch VMs had its root device configured by UUID, which complicated things: the root device UUID changes between KVM virtio and IDE mode. The easiest way to fix it is to boot the VM from an Arch install CD, mount the root partition, and run arch-chroot /mnt. Once in the chroot, reinstall the kernel (the linux package) to regenerate the appropriate kernel modules.

mount /dev/sda1 /mnt
arch-chroot /mnt
pacman -S linux

Also make sure to modify /etc/fstab to reflect the appropriate device ID or UUID (Xen used /dev/xvda1, KVM /dev/sda1), as in the example below.
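
For example, the root entry changes along these lines (the filesystem and mount options here are hypothetical):

# before, under Xen
/dev/xvda1  /  ext4  rw,relatime  0 1
# after, under KVM
/dev/sda1   /  ext4  rw,relatime  0 1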


Windows

Create your Windows VM using non-virtio drivers (the default settings in PVE). Obtain the latest Windows VirtIO drivers here and extract them somewhere memorable. Switch everything but the disk over to VirtIO in the VM’s hardware config and reboot the VM. Go into Device Manager and point each unknown device to the extracted driver location.

To get the VirtIO disk to work, add a new disk to the VM of any size and of SCSI (virtio) type. Boot the Windows VM and install drivers for that drive. Then shut down, remove that second drive, detach the primary drive, and re-attach it as virtio SCSI. It should then come up with full virtio drivers.

All hosts

KVM has a guest agent like Xenserver’s, called qemu-guest-agent. Turn it on in the VM options and install qemu-guest-agent in your guest. This gives KVM a bit more insight into your guests.

Determine which VMs need guest agent installed:

qm agent $id ping

If nothing is returned, it means qemu-guest-agent is working. You can test all your VMs at once with this one-liner (change the starting and ending VM IDs as appropriate):

for id in {100..114}; do echo $id; qm agent $id ping; done

This little one-liner outputs each VM ID it tries to ping and returns any errors it finds. No errors means everything is working.

Disable support nag

PVE has a support model and will nag you at each login. If you don’t like this you can change it like so (the line number might be different depending on which version you’re running):

vi +850 /usr/share/pve-manager/js/pvemanagerlib.js

Modify the line if (data.status !== 'Active'); change it to

if (false)
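
If you’d rather not hunt for the line number, a one-liner along these lines should make the same change (a sketch – back up the file first, and note that any pve-manager update will revert it):

sed -i.bak "s/data.status !== 'Active'/false/" /usr/share/pve-manager/js/pvemanagerlib.js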


Remove a failed node

See here: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node

systemctl stop pvestatd.service
systemctl stop pvedaemon.service
systemctl stop pve-cluster.service
rm -r /etc/corosync/*
rm -r /var/lib/pve-cluster/*

Quorum never establishes / takes forever

I had a really strange issue where I was able to establish quorum with a second node, but after a reboot quorum never happened again. I re-installed that second node and re-joined it several times but I never got past the “waiting for quorum….” stage.

After much research I came across this article which explained what was happening. Corosync uses multicast to establish cluster quorum. Many switches (including mine) have a feature called IGMP snooping, which, without an IGMP querier, essentially means multicast never happens. Sure enough, after logging into my switches and disabling IGMP snooping, quorum was instantly established. The article above says this is not recommended, but in my small home lab it hasn’t produced any ill effects. Your mileage may vary. You can also configure your cluster to use unicast instead.
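
For reference, switching corosync to unicast amounts to adding the udpu transport to the totem section of /etc/pve/corosync.conf (a sketch – remember to increment config_version and restart corosync afterward):

totem {
  # existing totem settings stay as they are
  transport: udpu
}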

USB Passthrough not working properly

With Xenserver I was able to pass through the USB controller of my host to the guest (a JMicron USB to ATA/ATAPI bridge holding a 4-disk bay). I ran into issues with PVE, though. Using the GUI to pass the USB device did not work, and manually adding PCI passthrough directives (hostpci0: 00:14.1) didn’t work either. I finally found a little nugget on the PCI Passthrough page: you can simply pass the entire device rather than a single function like I had in Xenserver. So instead of hostpci0: 00:14.1, I simply did hostpci0: 00:14. That helped a little, but I was still unable to fully use these drives simultaneously.

My solution was eventually to abandon PCI passthrough altogether in favor of just passing individual disks to the guest as outlined here.

Find the ID of the desired disks by issuing ls -l /dev/disk/by-id. You only need the IDs of the disks themselves, not their partitions. Then modify the KVM config of your desired host (mine was located at /etc/pve/qemu-server/101.conf) and add a new line for each disk, adjusting SCSI device numbers and disk IDs to match:

scsi5: /dev/disk/by-id/scsi-SATA_ST5000VN000-1H4_Z111111

With that direct disk access everything is working splendidly in my FreeNAS VM.

PCI Passthrough in Xenserver 7 “Dundee”

I’ve recently upgraded to the latest version of Citrix Xenserver 7 (codenamed “Dundee”). Version 7 is based on CentOS 7 and has a massive amount of changes under the hood. One such change is how PCI passthrough is handled.

It took some time to figure PCI passthrough out. Xenserver 7 uses grub instead of extlinux as the bootloader. It appears to be grub2, but they don’t use the standard update-grub tool; rather, you simply edit the config file and do nothing else.

After much searching I found this post, which led me in the right direction. In Xenserver 7, PCI passthrough support requires the following:

  • Prepare the VM for PCI passthrough (this part hasn’t changed)
    xe vm-param-set other-config:pci=0/0000:B:D.f uuid=<vm uuid>
  • Modify /boot/grub/grub.cfg and append pciback.hide=(B:D.f) for each device to the end of the module2 line, as in the sketch after this list (if you boot from EFI the file to modify is /boot/efi/EFI/xenserver/grub.cfg instead)
  • Reboot
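
A module2 line might end up looking roughly like this after the edit (a sketch – the kernel version, root label, and other parameters will differ on your system):

module2 /boot/vmlinuz-4.4-xen root=LABEL=root-abcdefg ro quiet splash pciback.hide=(06:00.0)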

You will now be able to pass through hardware to your virtual machines in Xenserver 7. Hooray.

Fix Xen VGA Passthrough in Linux Mint 17.1

I wrote in my last post about how I upgraded from Linux Mint 16 to 17.1. I thought everything went smoothly, but it turns out one feature did break: VGA passthrough via Xen. For the past year or so I’ve had a Windows 8.1 gaming VM with direct access to my video card. It worked nicely in Linux Mint 16 but broke completely in 17.1.

I followed the advice of powerhouse on the Linux Mint forums on how to get things up and running, but it wasn’t quite enough. After much banging of my head against the wall I read on the Xen mailing list that there was a regression in VGA passthrough functionality in Xen 4.4.1, which is the version Mint 17.1 uses.

I finally came to a solution to my problem today – upgrade to Xen 4.5. I couldn’t find any pre-built packages for Ubuntu 14.04 (the base of Mint 17.1) so I ended up compiling Xen 4.5 from source. Below is what I did to make it all work.

Fix broken symlink for /usr/lib/xen-default

sudo rm /usr/lib/xen-default
sudo ln -s /usr/lib/xen-4.4/ /usr/lib/xen-default

Update the DomU CFG file

A couple things needed tweaking. Here is my working cfg:

memory = '8192'
name = 'win8.1'
vif = [ 'mac=3a:82:47:2a:51:20,bridge=xenbr0,model=virtio' ]
disk = [ 'phy:/dev/mapper/desktop--xen-Win8.1,xvda,w' ]
# the traditional device model is what VGA passthrough needed in my case
device_model_version = 'qemu-xen-traditional'
# the GPU, its audio function, and a USB controller (your addresses will differ)
pci = [ '01:00.0', '01:00.1', '00:1d.0' ]
on_xend_stop = "shutdown"

For some, that’s all they had to do. For me, I had to do a few more things.

Compile Xen 4.5

This step was thanks to two different sites, this one and this one.

Install necessary packages

sudo apt-get install build-essential bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial libjpeg-dev make gcc libc6-dev-i386 zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libpixman-1-dev iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev gettext markdown libaio-dev pandoc

Checkout Xen source

git clone git://xenbits.xen.org/xen.git xen-4.5.0
cd xen-4.5.0
git checkout RELEASE-4.5.0

Build from source

./configure --libdir=/usr/lib
make world -j8

When I tried this the make failed with this error:

/usr/include/linux/errno.h:1:23: fatal error: asm/errno.h: No such file or directory
 #include <asm/errno.h>

The fix (thanks to askubuntu) was to install linux-libc-dev and make a symlink for it:

sudo apt-get install linux-libc-dev
sudo ln -s /usr/include/asm-generic /usr/include/asm

It then compiled successfully.

Install freshly compiled Xen 4.5

sudo make install
sudo update-rc.d xencommons defaults
sudo update-rc.d xendomains defaults
sudo ldconfig

Set grub to boot from new Xen kernel

sudo update-grub
sudo vim /etc/default/grub

Edit GRUB_DEFAULT to match wherever update-grub put your new Xen kernel (in my case it was the second entry, so GRUB_DEFAULT=1), then run update-grub again:

sudo update-grub


Success at last. Enjoy your VM gaming once more with Xen 4.5.

PCI passthrough with Xenserver 6.2

PCI passthrough is a great way to mix virtualization with bare metal hardware. It allows you to pass physical hardware to virtual machines. In order to do PCI passthrough you will need compatible hardware (a CPU and chipset that support it). Intel’s nomenclature for this is VT-d; AMD’s is IOMMU. It’s difficult (although not impossible) to find consumer-level hardware that supports this; it’s much easier to obtain with server-grade hardware.

Why would you want to pass physical hardware to virtual machines? In my case, it’s to turn a single system into a super server. Instead of having separate physical systems for NAS, gaming, and TV recording (my three uses) you can have one physical system do all three. While this is possible with a single OS, it’s much easier to manage these functions if they each live in their own OS (especially if you’re using appliance VMs such as FreeNAS). PCI passthrough gives you the best of both worlds – better security by isolating functions, easier backup/restore, and physical hardware access.

Citrix Xenserver 6.2 supports PCI passthrough beautifully. A great comprehensive guide on how to configure PCI passthrough can be found here.

Xenserver 6.2 no longer requires any configuration beforehand to get PCI passthrough to work. To pass a device to a VM, all you need to do is obtain its bus:device.function (B:D.F) address via lspci, then pass that through to the VM.

<several lines deleted>
06:00.0 Ethernet controller: Atheros Communications AR8131 Gigabit Ethernet (rev c0)

The B:D.F of the above device (a network adapter) is 06:00.0. To then pass this device to a virtual machine we use the xe vm-param-set command with the other-config:pci= parameter, adding 0/0000: to the beginning of the B:D.F, then specifying the UUID of the VM in question.

xe vm-param-set other-config:pci=0/0000:06:00.0 uuid=db4c64e1-44ce-f9f3-3236-0d86df260249

If the VM is running when you issue that command, make sure to shut down (not reboot) the VM, then start it up again.

To add multiple devices to the same VM, simply separate each B:D.F with a comma, like so:

xe vm-param-set other-config:pci=0/0000:06:00.0,0/0000:07:00.0 uuid=db4c64e1-44ce-f9f3-3236-0d86df260249

Sometimes if you pass multiple PCI devices to a single VM only one of those devices is recognized by the VM. If that is the case, try passing the B:D.F of each piece of hardware in a different order.

If you ever want to remove a hardware mapping to a VM, issue the following:

xe vm-param-clear param-name=other-config uuid=<UUID of VM>

There is still one case where you want to modify Xenserver’s configuration with regard to PCI passthrough: on occasion you will have hardware that you never want the hypervisor to know about (in the above example, the hypervisor can use the hardware until you power on a VM that has passthrough enabled for it).

In my case, I don’t want the hypervisor to ever see the storage controller I’m passing to my NAS VM. I found this out the hard way. If you don’t modify your xenserver configuration to ignore storage controllers that you then pass through to a VM, the entire hypervisor will completely lock up if you happen to reboot that VM. Why? Because when that VM reboots it releases the storage controller back to the hypervisor, which promptly enumerates and re-names all of its attached drives. It often leads to a case of re-naming /dev/sda, promptly “losing” the root device, and kernel panicking.

So, if you are passing through devices you never want the hypervisor to see, you need to modify its boot configuration to “hide” those devices from it. Edit /boot/extlinux.conf and append pciback.hide=(B:D.F) to the Linux command line, right after the splash parameter:

vi /boot/extlinux.conf
<navigate to right after the word splash>
<type pciback.hide=(B:D.F)(B:D.F), substituting your devices' addresses>
<esc> :wq
extlinux -i /boot

The above example excludes two devices. Multiple devices simply go next to each other, each in its own parentheses; the format is the same if you’re only passing a single device.

Reboot the hypervisor, and you are good to go. You can now pass hardware through to VMs to your heart’s content.

FreeNAS PCI Passthrough dev_taste error message

After getting my xenified FreeNAS up and running I noticed an oddity with disk reporting: when I pulled up the reports tab, ada0 never showed any activity, despite my knowing that disk was doing plenty.

The mystery became greater when I noticed these error messages in my logs:

g_dev_taste: make_dev_p() failed (gp->name=ada0, error=17)

After some research I discovered here that disks passed through to a VM via Xen’s PCI passthrough function present themselves to FreeBSD in a peculiar manner. In particular, the first disk in the passthrough array presents itself as ada0, despite the boot disk also having the name ada0. With two disks named ada0 it’s a tossup which one shows up in reporting, not to mention the strange errors above.

The fix is to add a BSD parameter so disk numbering doesn’t start at ada0. For FreeNAS, you do this via the tunables section (System / Tunables / Add Tunable). Add the following tunable:

Variable: hint.ada.0.at
Value: scbus100
Comment: ada0 PCI passthrough fix
Enabled: true
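
On a plain FreeBSD guest without the FreeNAS GUI, the equivalent would presumably be a line in /boot/loader.conf:

hint.ada.0.at="scbus100"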

Once that is configured, reboot FreeNAS. You will now have proper reporting of all your passthrough disks and the strange dev_taste errors will be gone.