
Migrate from Xenserver to Proxmox

I was dismayed to see Citrix’s recent announcement about Xenserver 7.3 removing several key features from the free version. Xenserver’s free features are the reason I switched over to them in the first place back in 2014. Xenserver has been rock solid; I haven’t had any complaints until now. Their removal of XenMotion and migration in the free version forced me to look elsewhere for my virtualization needs.

I’ve settled on Proxmox, which is KVM-based. Their documentation is excellent and it has all the features I need – for free. I’m also in love with their web-based management – no more Windows fat client!

Below are my notes on how I successfully migrated all my Xenserver VMs over to the Proxmox Virtual Environment (PVE).

  • Any changes to network interfaces, such as bringing them up, require a reboot of the host
  • If you have an existing ISO share, you can create a directory called “template” in your ISO repository folder, then inside it create a symlink named “iso” pointing back to your ISO folder. Proxmox looks inside template/iso for ISO images for whatever storage you configure (see the sketch after this list).
  • Do not install your Proxmox host with ZFS unless you have tons of RAM. If you don’t have enough RAM you will run into huge CPU load, making the system unresponsive during heavy disk activity such as VM copies / backups. More reading here.
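
A minimal sketch of the template/iso symlink trick mentioned above; the /mnt/iso-share mount point is a hypothetical path, so adjust it to wherever your ISO share actually lives:

# hypothetical path to an existing ISO share already mounted on the Proxmox host
ISO_SHARE=/mnt/iso-share
mkdir -p "$ISO_SHARE/template"
# symlink "iso" back to the ISO folder itself so Proxmox finds images under template/iso
ln -s "$ISO_SHARE" "$ISO_SHARE/template/iso"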

Cluster of two:

Proxmox’s clustering is a bit different – better, in my opinion. No more master/slave dynamic – every node is a master. Important reading: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster

A two-node cluster like mine creates some problems, though. If one node goes down, the other can’t do anything to the pool (create a VM, run a backup) until it comes back up. In my situation I have one primary host that is up all the time, and I bring the secondary host up only when I want to do maintenance on the first.

In that specific situation you can still designate a “master” of sorts by increasing the number of quorum votes the primary node gets from 1 to 2. That way, when the secondary node is down, the primary node can still do cluster operations because the default number of votes needed to stay quorate is 2. See here for more reading on the subject.

On either host (they must both be up and joined to the cluster for this to work):

vi /etc/pve/corosync.conf

Find your primary server in the nodelist section and change its vote count:

quorum_votes: 2

Also find the quorum section and add expected_votes: 2

Make sure to increment the config_version number (bottom of the file). Now if your secondary is down you can still operate the primary.
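
For illustration, here is roughly what the relevant pieces of /etc/pve/corosync.conf look like after the change; the node names, IDs, and addresses are hypothetical placeholders:

nodelist {
  node {
    # hypothetical primary node
    name: pve-primary
    nodeid: 1
    quorum_votes: 2
    ring0_addr: pve-primary
  }
  node {
    # hypothetical secondary node
    name: pve-secondary
    nodeid: 2
    quorum_votes: 1
    ring0_addr: pve-secondary
  }
}

quorum {
  provider: corosync_votequorum
  expected_votes: 2
}

totem {
  ...
  # increment this from whatever it was before
  config_version: 3
  ...
}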

Migrating VMs

I migrated my Xen VMs to KVM by creating VMs with identical specs in PVE, copying the VHD files from the Xen host to the new PVE host, running qemu-img to convert them to raw format, and then using dd to copy the raw data over to the corresponding empty VM disks. Depending on the OS of the VM there was also some after-copy tweaking I had to do.

From shared storage

Grab the VHD file of each Xen VM (collapse any snapshots first so you get a single consolidated VHD) and convert it to raw format:

qemu-img convert <VHD_FILE_NAME>.vhd -O raw <RAW_FILE_NAME>.raw

Create a new VM with identical configuration, especially disk size. Go to the hardware tab and take note of the name of the disk. For example, one of mine was:

local-zfs:vm-100-disk-1,discard=on,size=40G

The interesting part is between local-zfs and discard=on, namely vm-100-disk-1. This is the name of the disk we want to overwrite with data from our Xenserver VM’s disk.

Next, figure out the full path of this disk on your Proxmox host:

find / -name vm-100-disk-1*

The result in my case was /dev/zvol/rpool/data/vm-100-disk-1

Take that path and put it in the following command to complete the process:

dd if=<RAW_FILE_NAME>.raw of=/dev/zvol/rpool/data/vm-100-disk-1 bs=16M

Once that’s done you can delete your .vhd and .raw files.

From local / LVM storage

If your Xen VMs are stored on LVM storage instead of as VHD files, get the UUID of the disk by running xe vdi-list and finding the name of the hard disk belonging to the VM you want. It’s helpful to rename the hard disks to something easy to spot. I chose the word migrate.

xe vdi-list|grep -B3 migrate
uuid ( RO) : a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f
 name-label ( RW):  migrate

Once you have the UUID of the drive, you can use lvscan to find the full LVM device path of that disk:

lvscan|grep a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f
 inactive '/dev/VG_XenStorage-1ada0a08-7e6d-a5b6-d0b4-515e251c0c75/VHD-a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f' [10.03 GiB] inherit

Shut down the corresponding VM and reactivate its logical volume (Xen deactivates logical volumes when the VM is shut off):

lvchange -ay <full /dev/VG_XenStorage path discovered above>

Now that we have the full LVM path and the volume is active, we can use dd over SSH to transfer the image to our proxmox server:

sudo dd if=<full /dev/VG_XenStorage path discovered above> | ssh <IP_OF_PROXMOX_SERVER> dd of=<LOCATION_ON_PROXMOX_THAT_HAS_ENOUGH_SPACE>/<NAME_OF_VDI_FILE>.vhd

Then follow the VHD -> raw -> dd-to-Proxmox-drive process described in the “From shared storage” section.

Post-Migration tweaks

For the most part, Debian-based systems moved over perfectly without any needed tweaks. Some VMs got new interface names due to the change in network devices: eth0 turned into ens8. I had to modify /etc/network/interfaces to change eth0 to ens8 to get virtio networking working (a sketch follows).
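
As a rough example, the change to a guest’s /etc/network/interfaces looked something like this (DHCP is just for illustration; use whatever addressing the VM had before):

# before (Xen): the interface was called eth0
# auto eth0
# iface eth0 inet dhcp

# after (KVM virtio): the same interface shows up as ens8
auto ens8
iface ens8 inet dhcp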

CentOS

All my CentOS VMs failed to boot after migration due to a lack of virtio disk drivers in the initial RAM disk. The fix is to change the disk hardware to IDE mode (they boot fine this way) and then modify the initrd of each affected host:

sudo dracut --add-drivers "virtio_pci virtio_blk virtio_scsi virtio_net virtio_ring virtio" -f -v /boot/initramfs-`uname -r`.img `uname -r`
sudo sh -c "echo 'add_drivers+=\" virtio_pci virtio_blk virtio_scsi virtio_net virtio_ring virtio \"' >> /etc/dracut.conf"
sudo shutdown -h now

Once that’s done you can detach the hard disk and re-attach it as SCSI (virtio). Don’t forget to modify the VM’s options and change the boot order from ide0 to scsi0 (a config-file sketch follows).
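
If you prefer editing the config file directly, the relevant lines in /etc/pve/qemu-server/<vmid>.conf end up looking roughly like this; this is a sketch based on the example VM 100 from earlier, not an exact dump of my config:

# /etc/pve/qemu-server/100.conf (excerpt, illustrative)
# boot order was "bootdisk: ide0" while the disk was still attached as IDE
bootdisk: scsi0
scsi0: local-zfs:vm-100-disk-1,discard=on,size=40G
scsihw: virtio-scsi-pci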

Arch Linux

One of my Arch VMs had its root device configured by UUID, which complicated things. The root device UUID changes between KVM virtio and IDE mode. The easiest way to fix it is to boot the VM from an Arch install CD, mount the root partition to /mnt, and run arch-chroot /mnt. Once in the chroot, run pacman -Sy linux to reinstall the kernel and regenerate the appropriate kernel modules and initramfs.

mount /dev/sda1 /mnt
arch-chroot /mnt
pacman -Sy linux

Also make sure to modify /etc/fstab to reflect the appropriate device name or UUID (Xen used /dev/xvda1, KVM uses /dev/sda1).
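
A sketch of the corresponding /etc/fstab change (the ext4 filesystem type is just an example):

# before, under Xen:
# /dev/xvda1    /    ext4    defaults    0    1
# after, under KVM (or better, use UUID=... as reported by blkid):
/dev/sda1       /    ext4    defaults    0    1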

Windows

Create your Windows VM using non-virtio drivers (the default settings in PVE). Obtain the latest Windows virtio drivers here and extract them somewhere memorable. Switch everything but the disk over to virtio in the VM’s hardware config and reboot the VM. Go into Device Manager and point to the extracted driver location for each unknown device.

To get the virtio disk to work, add a second disk of any size to the VM with the SCSI (virtio) type. Boot the Windows VM and install drivers for that drive. Then shut down, remove that second drive, detach the primary drive, and re-attach it as virtio SCSI. It should then come up with full virtio drivers.

All hosts

KVM has a guest agent, much like Xenserver does, called the QEMU guest agent. Turn it on in the VM’s options and install qemu-guest-agent in your guest. This gives KVM a bit more insight into your guest.
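
A quick sketch of both halves of that (the VM ID 100 and the Debian-style package command are placeholders; adjust for your guests):

# on the Proxmox host: enable the agent option for the VM
qm set 100 --agent 1
# inside the guest (Debian/Ubuntu shown; use yum on CentOS):
sudo apt-get install qemu-guest-agent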

Determine which VMs need guest agent installed:

qm agent $id ping

If nothing is returned, it means the qemu agent is working. You can test all your VMs at once with this one-liner (change the starting and finishing VM IDs as appropriate):

for id in {100..114}; do echo $id; qm agent $id ping; done

This little one-liner will output the VM ID it’s trying to ping and will return any errors it finds. No errors means everything is working.

Disable support nag

PVE has a subscription model and will nag you at each login. If you don’t like this you can change it like so (the line number might be different depending on which version you’re running):

vi +850 /usr/share/pve-manager/js/pvemanagerlib.js

Modify the line if (data.status !== 'Active') and change it to

if (false)
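
If you’d rather script it than edit the file by hand, something like this sed one-liner should do the same thing; the exact string to match can shift between versions, so treat it as a sketch and keep the .bak backup:

sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/pve-manager/js/pvemanagerlib.js
# restart the web UI proxy, then force-refresh your browser cache
systemctl restart pveproxy.service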

Troubleshooting

Remove a failed node

See here: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node

systemctl stop pvestatd.service
systemctl stop pvedaemon.service
systemctl stop pve-cluster.service
rm -r /etc/corosync/*
rm -r /var/lib/pve-cluster/*
reboot

Quorum never establishes / takes forever

I had a really strange issue where I was able to establish quorum with a second node, but after a reboot quorum never happened again. I re-installed that second node and re-joined it several times but I never got past the “waiting for quorum….” stage.

After much research I came across this article which explained what was happening. Corosync uses multicast to establish cluster quorum. Many switches (including mine) have a feature called IGMP snooping, which, without an IGMP querier, essentially means multicast never happens. Sure enough, after logging into my switches and disabling IGMP snooping, quorum was instantly established. The article above says this is not recommended, but in my small home lab it hasn’t produced any ill effects. Your mileage may vary. You can also configure your cluster to use unicast instead.
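
For the unicast route, the idea (per the PVE wiki) is to set the transport in the totem section of /etc/pve/corosync.conf. Roughly like this; it’s a sketch rather than my working config, since I just disabled IGMP snooping, and don’t forget to bump config_version:

totem {
  ...
  # use unicast UDP instead of multicast
  transport: udpu
  config_version: 4
  ...
}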

USB Passthrough not working properly

With Xenserver I was able to pass through the USB controller of my host to the guest (a JMicron USB to ATA/ATAPI bridge hosting a 4-disk bay). I ran into issues with PVE, though. Using the GUI to pass the USB device through did not work. Manually adding PCI passthrough directives (hostpci0: 00:14.1) didn’t work either. I finally found a little nugget on the PCI Passthrough page about passing the entire device rather than a single function like I had in Xenserver. So instead of hostpci0: 00:14.1, I simply used hostpci0: 00:14. That helped a little, but I was still unable to fully use these drives simultaneously.

My solution was eventually to abandon PCI passthrough altogether in favor of just passing individual disks to the guest as outlined here.

Find the IDs of the desired disks by issuing ls -l /dev/disk/by-id. You only need the IDs of the disks themselves, not of the partitions. Then modify the KVM config of your desired guest (mine was located at /etc/pve/qemu-server/101.conf) and add a new line for each disk, adjusting the scsi device numbers and IDs to match:

scsi5: /dev/disk/by-id/scsi-SATA_ST5000VN000-1H4_Z111111

With that direct disk access everything is working splendidly in my FreeNAS VM.

Fix icedtea Cannot grant permissions to unsigned jars error

I banged my head on a wall for a while before I finally found a fix to this one. OpenJDK8 has new security features that break compatibility with the IPMI interfaces of my older servers. The problem in my case stemmed from the fact that the java applet is signed, just with an algorithm that JDK8 blacklists. So, I had to remove MD5 from the blacklisted algorithms to get this to work. Thanks to this site for guidance on how to do this.

Per that site, this is what I did to fix the issue:

Find the java.security file. In my case it is located in /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/java.security

Then find the row:

jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024

Comment it out, copy it, and delete the MD5 entry from the copy:

#jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
jdk.jar.disabledAlgorithms=MD2, RSA keySize < 1024

Installing Linux Mint 18.3 with NVIDIA GTX 1070

I became very frustrated when trying to install the latest Linux Mint on my desktop, which contains an NVIDIA GTX 1070 graphics card. No matter what I tried I couldn’t even get the live CD environment to show up. It would stay in text mode, and even play the login sound, but no matter what I pressed I couldn’t get anything to come up on the display.

After much digging I came across an Ubuntu forum post which directed me to this manual describing different boot-time options. I had read somewhere that you want nomodeset enabled, but that alone didn’t cut it. Finally, after reading through the Ubuntu options, I found the second half: vga=791.

So, to install Linux Mint (and run it for the first time, before installing the NVIDIA drivers) on a machine that has an NVIDIA GeForce GTX 1070, you have to edit the grub startup options (by pressing Esc / Tab) and append the following to the kernel line:

nomodeset vga=791
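
For reference, the kernel line ends up looking something like the illustration below; the rest of the line varies by ISO version, and the only part that matters here is the two appended options:

linux /casper/vmlinuz boot=casper quiet splash nomodeset vga=791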

Also, if you manually partition the installation, make sure the /boot partition is EXT2. I had first made it EXT4 but ran into strange problems; restarting the install and making it EXT2 made those problems go away.

Nextcloud External Files SMB not working – Empty Response from Server

I beat my head against the wall for hours trying to figure out why external storage wasn’t working with my Nextcloud instance after I migrated it over to CentOS 7. All I kept getting was a very unhelpful

There was an error with message: Empty response from the server.

I installed all the libraries multiple times. I tried different versions of smbclient and php-smbclient, but the error kept happening! Eventually I decided to check the logs on my Samba server. Sure enough:

 create_connection_session_info failed: NT_STATUS_ACCESS_DENIED

The username and password I was using were correct. I read on some forum (sorry, no link) to put something in the Workgroup field. Voila! As soon as I populated the workgroup field in Nextcloud for my SMB shares, they all worked!
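
A quick way to sanity-check the credentials and workgroup outside of Nextcloud is smbclient; the server, share, user, and workgroup below are placeholders:

# list the share's contents using an explicit workgroup/domain
smbclient //server/share -U username -W WORKGROUP -c 'ls'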

Make Java run on privileged ports in CentOS 7

I recently gnashed my teeth trying to get Java to bind directly to port 443 instead of using nginx to proxy to a Java application I had to run. I was surprised by how complicated the solution was to find, but I eventually got there thanks to the following sites:

https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443/892391

https://github.com/kaitoy/pcap4j/issues/63

First, determine the full path of your current java install:

sudo update-alternatives --config java

In my CentOS 7 install, the java binary was located here:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java

Next, use setcap to configure java to be able to bind to port 443:

sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java
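
You can verify the capability took hold with getcap (part of the libcap package on CentOS 7):

getcap /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java
# expected output, roughly:
# /usr/lib/jvm/.../jre/bin/java = cap_net_bind_service+eip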

Now, test to make sure java works:

java -version

java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

The above error means that setting the file capability breaks how Java finds the library it needs to run (a binary with capabilities is loaded in secure-execution mode, so the relative rpath the java launcher normally uses to locate libjli.so is ignored). To fix this, we need to symlink the library it’s looking for into /usr/lib, then run ldconfig:

sudo ln -s /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib/
sudo ldconfig

Now test Java again:

java -version

It took longer than I like to admit to get this working, but it does indeed work this way.

gvfs-mount doesn’t use kerberos ticket fix

I ran into a very frustrating issue when I switched my work desktop from Linux Mint to CentOS 7: my network drives would no longer mount using a Kerberos ticket. Before, after initializing a Kerberos ticket I could mount network shares without any kind of username or password prompt:

kinit
gvfs-mount smb://server/share

With CentOS 7, though, the above process produced a username and password prompt. After much searching I came across this forum, which contained the answer: use the server’s fully qualified domain name. Now the process looks like this:

kinit
gvfs-mount smb://server.full.fqdn.name/share

and it worked!

VMWare Horizon View Mac client USB Smartcard passthrough

I came across a need to pass through a USB smartcard device to a VM using the VMware Horizon View client for Mac OS. My smart card reader would not show up in the list of devices to redirect to the VM. After doing some research I came across this document which outlines the commands I needed to run:

sudo defaults write com.vmware.viewusb IncludeFamily smart-card
sudo defaults write com.vmware.viewusb AllowSmartcard Enable

You can verify which settings have been applied with the following command:

sudo defaults read com.vmware.viewusb

Success! The client now sees my smartcard reader as an option to pass through to a VM guest.

 

Fix no bluetooth devices found in Linux Mint

I had a peculiar issue today where I suddenly lost the ability to see any bluetooth devices on my Linux Mint 18.2 desktop utilizing a Plugable USB Bluetooth adapter.

The fix involved checking kernel messages for anything insightful. In my case this is what led me to the solution:

[ 608.988353] Bluetooth: hci0: BCM: Patch brcm/BCM20702A1-0a5c-21e8.hcd not found
[ 609.156320] Bluetooth: hci0: BCM: chip id 63
[ 609.172330] Bluetooth: hci0: LPP-3389-WIN
[ 609.173313] Bluetooth: hci0: BCM20702A1 (001.002.014) build 1764
[ 609.173347] bluetooth hci0: Direct firmware load for brcm/BCM20702A1-0a5c-21e8.hcd failed with error -2

After some googling I finally came across the solution here. The fix is to download the firmware for your Bluetooth adapter, place it where the Bluetooth kernel module expects to find it, and then reload the Bluetooth kernel modules.

sudo mkdir -p /lib/firmware/brcm
sudo wget https://s3.amazonaws.com/plugable/bin/fw-0a5c_21e8.hcd -O /lib/firmware/brcm/BCM20702A1-0a5c-21e8.hcd
sudo rmmod btusb bnep btrtl btintel btbcm bluetooth
sudo modprobe btusb bnep bluetooth btrtl btintel btbcm

That did the trick! You can also reboot your machine instead of removing / re-loading the kernel modules and it will accomplish the same thing.

Fix no sound in Wine

Lately I’ve been doing 100% of my gaming in Linux. The latest versions of Wine in Arch Linux have been fantastic (for the most part). I recently installed a game called Gauntlet (a Windows-only Steam game). For some reason I had no sound. Sound worked fine in other Wine games, just not this one.

After much digging I found this post on the Arch Linux forums, which fixed my issue. The problem was not having the proper 32-bit sound libraries installed. The fix was as simple as:

sudo pacman -Sy lib32-alsa-plugins lib32-libpulse lib32-openal

Success!