Category Archives: Virtualization

Posts about hypervisors and virtualization

Convert xenserver .xva file to raw disk image

What if you want to migrate a VM that’s been living on Citrix Xenserver to a different Linux machine running vanilla Xen? The process isn’t as straightforward as you might think. Fortunately, thanks to Eriklax over at GitHub, there is a fairly easy way to convert Xenserver’s .xva virtual machines to other formats: xva-img.

The first step is to download and install xva-img from github.

wget https://github.com/eriklax/xva-img/archive/master.zip
unzip master.zip
cd xva-img-master
cmake .
sudo make install

When trying to compile this on my Linux Mint Cinnamon machine I ran into the following errors:

CMake Error: your CXX compiler: "/usr/bin/c++" was not found.   Please set CMAKE_CXX_COMPILER to a valid compiler path or name.
xva-img-master/src/sha1.cpp:20:25: fatal error: openssl/sha.h: No such file or directory
 #include <openssl/sha.h>

I had to install the build-essential and libssl-dev packages in order to successfully compile and install xva-img.
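On a Debian-based distribution such as Mint, those prerequisites can be installed with:

sudo apt-get install build-essential libssl-dev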

Now that it’s installed, create a directory and extract your .xva file into it.

mkdir my-virtual-machine 
tar -xf <.xva file> -C my-virtual-machine 
chmod -R 755 my-virtual-machine

Once that’s finished (it might take a while – it took over an hour for me) the last step is to convert the extracted directories into a raw disk file.

Note: when you extract your VM, tar creates a subfolder for each hard disk attached to the VM. You will have to run the command below once for each Ref folder that was generated as part of the extraction process.

xva-img -p disk-export my-virtual-machine/Ref\:1/ disk.raw
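If the VM had more than one disk, the same command is simply repeated against each of the other Ref folders (the Ref numbers will vary from VM to VM), e.g.:

xva-img -p disk-export my-virtual-machine/Ref\:2/ disk2.raw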

It took a while for some reason, but it did eventually generate the desired image.

Now that I have a raw disk image I can transfer it to an LVM logical volume for use with Xen:

sudo dd if=win8.1.img of=/dev/desktop-xen/Win8.1 bs=64M
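If the target logical volume doesn’t exist yet, it can be created first. A minimal sketch, assuming the volume group is named desktop-xen and 40G is large enough to hold the raw image (adjust both to your setup):

sudo lvcreate -L 40G -n Win8.1 desktop-xen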

Success.

Convert xenserver installation to software RAID-1

Update 2/28/2015:  I have a newer article explaining how to do this in Xenserver 6.5.


 

After having a hard drive nearly die on me and threaten to obliterate the VMs living on it I realized it would be a good idea to have my xenserver installation live on a RAID array.

Following this guide I was able to successfully migrate my running xenserver installation to a software based RAID 1, with a few tweaks. In my case I wanted to migrate from a single old drive to two newer ones.

Below are the steps I took to accomplish this.

Partition the new drives

This assumes that your current drive resides on /dev/sda, and your two new drives are /dev/sdb and /dev/sdc.

#print the existing drive's layout for reference
sgdisk -p /dev/sda
#wipe the new drives and give them fresh GPT labels
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
sgdisk --mbrtogpt --clear /dev/sdb
sgdisk --mbrtogpt --clear /dev/sdc
#partition 1 (root, ~4GB): create it, mark it for RAID, set the legacy BIOS bootable attribute
sgdisk --new=1:34:8388641 /dev/sdb
sgdisk --new=1:34:8388641 /dev/sdc
sgdisk --typecode=1:fd00 /dev/sdb
sgdisk --typecode=1:fd00 /dev/sdc
sgdisk --attributes=1:set:2 /dev/sdb
sgdisk --attributes=1:set:2 /dev/sdc
#partition 2 (backup, ~4GB): create it and mark it for RAID
sgdisk --new=2:8388642:16777249 /dev/sdb
sgdisk --new=2:8388642:16777249 /dev/sdc
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --typecode=2:fd00 /dev/sdc

The third partition (VM storage) had to be tweaked a bit since these are larger drives than the current xenserver installation. I simply used gdisk instead of sgdisk for this task.

gdisk /dev/sdb
n #create new partition
<enter> #accept defaults for partition number, first, and last sectors
<enter>
<enter>
t #select partition type
3 #select partition number 3
fd00  #set for raid
w   #write changes to disk

Repeat the above steps for the other disk (/dev/sdc in my case).

Create the RAID arrays for each partition

mdadm --create /dev/md0 --level=1 --raid-devices=2  /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3

Watch array build (optional)

cat /proc/mdstat

Alternatively you can use the watch command to get a real time update of the raid build:

watch -n 1 cat /proc/mdstat

Format & mount the array

mkfs.ext3 /dev/md0
mount /dev/md0 /mnt

Copy the root filesystem to the new array

cp -vxpr / /mnt

Install bootloader on the new disks

mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --install /boot
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdb
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdc

Generate new initrd image

chroot /mnt
mkinitrd -v -f --theme=/usr/share/splash --without-multipath /boot/initrd-`uname -r`.img `uname -r`
exit

Modify boot file

Edit /mnt/boot/extlinux.conf and replace every mention of the old root filesystem (root=LABEL=xxx) with root=/dev/md0.

vi /mnt/boot/extlinux.conf
:%s/LABEL=<root label>/\/dev\/md0/
:wq
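As a rough sketch of what changes (the label here is just an example – the rest of each append line stays as it was):

#before
append ... root=LABEL=root-xyzabc ro ...
#after
append ... root=/dev/md0 ro ...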

Reboot

Keep the old drive in, but make sure to boot from either one of the member drives of your new array.

Create storage repository

Create a new local storage repository on the new RAID array, similar to here.

xe sr-create content-type=user device-config:device=/dev/md2 host-uuid=<UUID of xenserver host> name-label="RAID-1" shared=false type=lvm
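The host UUID required above can be found by listing the hosts:

xe host-list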

Migrate VMs / disks

Migrate any disk images or VMs living on the old drive to the new array.

If these VMs / disks are not powered on or in use, it is as simple as pulling up XenCenter, right-clicking the VM, clicking Move, and then selecting the new storage repository.

If the VMs are online you can live migrate them to a different xenserver, then live migrate them back to the proper storage repository.

Remove old storage repository

Following instructions found here.
Note: In my case the transfer returned a strange error but was still successful. I had to restart the XAPI toolstack in order for it to let me remove the old storage repository.

xe sr-list name-label="<name of SR to remove>"
xe pbd-list sr-uuid=<UUID of SR above>
xe pbd-unplug uuid=<UUID of pbd above>
xe sr-forget uuid=<UUID of SR>
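As mentioned above, in my case I had to restart the XAPI toolstack before it would let me remove the old SR. On the Xenserver host that is done with:

xe-toolstack-restart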

Final reboot

Shutdown, disconnect the old drive, and boot back up from the new array. Success.

Configure e-mail alerts (optional)

Now that you have a working RAID array you might want to receive e-mail alerts if there are problems with the array.

First, build an mdadm.conf

mdadm --detail --scan > /etc/mdadm.conf

Modify mdadm.conf to add your desired e-mail address for notifications

sed -i '1i MAILADDR <e-mail address>' /etc/mdadm.conf
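The top of the resulting /etc/mdadm.conf should then look something like this (the array lines come from the --detail --scan output and will differ on your system):

MAILADDR <e-mail address>
ARRAY /dev/md0 UUID=<array UUID>
ARRAY /dev/md1 UUID=<array UUID>
ARRAY /dev/md2 UUID=<array UUID>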

Thanks to this site for the sed -i 1i trick.

Lastly, enable the mdadm monitoring service. I found via this site that this is fairly easy to do.  Simply enter these two commands:

service mdmonitor start
chkconfig mdmonitor on

Xenserver uses ssmtp to send e-mail. You can follow this guide on how to set it up for SSL if you happen to have an ISP that blocks port 25 (as I do.) Otherwise modify /etc/ssmtp/ssmtp.conf to suit your needs.

You can generate a test event from mdadm to make sure e-mail is configured properly:

mdadm --monitor --test /dev/md0 --oneshot

To get e-mail alerts to work right I had to ensure that FromLineOverride was NOT set to yes (default). I also had to add this line to /etc/ssmtp/revaliases:

root:<e-mail address being sent from>


Update 02/03/2015:  A commenter made me realize I forgot a step – copying the Control Domain OS to the new RAID array. I’ve added that step above, after the “Format & mount the array” section.

Update 02/17/2015: If you are using Xenserver 6.5 you might come across the following error message when trying to create RAID arrays:

mdadm: unexpected failure opening /dev/md0

If this happens, load the md kernel driver like so:

modprobe md

It should then let you create your arrays.

Xenserver – The uploaded patch file is invalid

It has been six months since I’ve applied any patches to my Citrix Xenserver hypervisor. Shame on me for not checking for updates. The thing has been humming along without any issues so it was easy to forget about.

In trying to install xenserver patches today I kept getting this error message no matter what I tried:

The uploaded patch file is invalid

After deleting everything I could (including files hanging out in /var/patch) I realized that I was simply Doing It Wrong™. D’oh!

When applying xenserver updates, the expected file extension is .xsupdate. I had been trying to xe patch-upload the downloaded zip files, whereas I was supposed to have extracted them first. This quick little line unzipped all my patch ZIP files for me in one fell swoop:

find *.zip -exec unzip {} \;

Once everything was unzipped I was able to upload and apply the resulting .xsupdate files without issue.

FreeNAS on Xenserver with PVHVM support

In my current home setup I have a single server performing many functions thanks to Citrix Xenserver 6.2 and PCI passthrough. This single box hosts my firewall, web servers, and NAS. My primary motivation for this is power savings – I didn’t want to have more than one box up 24/7 but still wanted all those separate services, some of which are software appliances that aren’t very customizable.

My current NAS setup is a simple Debian Wheezy virtual machine with the on-board SATA controller from the motherboard passed through to it. The VM runs a six drive software RAID 6 using mdadm and LVM volume management on top of it. Lately, though, I have become concerned with data integrity and my use of commodity drives. It prompted me to investigate ZFS as a replacement for my current setup. ZFS has many features, but the one I’m most interested in is its ability to detect and correct any and all corrupted files / blocks. This will put my mind at ease when it comes to the thousands of files that I have which are accessed infrequently.

I decided to try out FreeNAS, a NAS appliance which utilizes ZFS. After searching on forums it quickly became clear that the people at FreeNAS are not too keen on virtualizing their software. There is very little help to be had there in getting it to work in virtual environments. In the case of Xenserver, FreeNAS does work out of the box but it is considerably slower than bare metal due to its lack of support of Xen HVM drivers.

Fortunately, a friendly FreeNAS user posted a link to his blog outlining how he compiled FreeNAS to work with Xen. Since Xenserver uses Xen (it’s in the name, after all) I was able to use his re-compiled ISO (I was too lazy to compile my own) to test in Xenserver.

There are some bugs to get around to get this to work, though. Wired dad’s xenified FreeNAS doesn’t appear to like to boot in Xenserver, at least out of the box. It begins to boot but then hangs indefinitely on the following error:

run_interrupt_drive_hooks: still waiting after 60 seconds for xenbusb_nop_confighook_cb

This is the result of a bug in the version of qemu Xenserver uses. The bug causes BSD kernels to really not like the DVD virtual device in the VM and refuse to boot. The solution is to remove the virtual DVD drive. How, then, do you install FreeNAS without a DVD drive?

It turns out that all the FreeNAS installer does is extract an image file to your target drive. That file is an .xz file inside the ISO. To get wired dad’s FreeNAS Xen image to work in Xenserver, one must extract that .xz file from the ISO, expand it to an .img file, and then apply that .img file to the Xenserver virtual machine’s hard disk. The following commands can be run on the Xenserver host machine to accomplish this.

  1. Create a virtual machine with a 2GB hard drive.
  2. Mount the FreeNAS-xen ISO in loopback mode to get at the necessary file
    mkdir temp
    mount -o loop FreeNAS-9.2.1.5-RELEASE-xen-x64.iso temp/
  3. Extract the IMG file from the FreeNAS ISO
    xzcat ~/temp/FreeNAS-x64.img.xz | dd of=FreeNAS_x64.img bs=64k
    

    Note that the IMG file is 2GB in size, which is larger than will fit on the root drive of a default Xenserver install. Make sure you extract this file somewhere that has enough space.

  4. Import that IMG file into the virtual disk you created with your VM in step 1.
    cd ..
    xe vdi-import uuid=<UUID of the 2GB disk created in step 1> filename=FreeNAS_x64.img
    

    This results in an error:

    The server failed to handle your request, due to an internal error.  The given message may give details useful for debugging the problem.
    message: Caught exception: VDI_IO_ERROR: [ Device I/O errors ]
    

    This error can be safely ignored – it did indeed copy the necessary files.
    Note: To obtain the UUID of the 2GB disk you created in step 1, run the “xe vdi-list” command and look for the name of the disk.

  5. Remove the DVD drive from the virtual machine. From XenCenter:
    Shut down the VM
    Mount xs-tools.iso
    Run this command in a command prompt:

    xe vm-cd-remove uuid=<UUID of VM> cd-name=xs-tools.iso
  6. Profit!

There is one aspect I haven’t gotten to work yet, and that is Xenserver Tools integration. The important bit – paravirtualized networking – has been achieved so once I get more time I will investigate xenserver tools further.

Create local storage in Xenserver

For some reason the default installation of Xenserver on one of my machines did not create a local storage repository. I think it might be due to my having installed over an existing installation of Xenserver and the installer got confused.

I tried manually creating a storage repository by running the following command:

xe sr-create content-type=user device-config:device=/dev/disk/by-id/scsi-SATA_WDC_WD3200AAJS-_WD-WMAV2C718714-part3 host-uuid=9f8ddd87-0e83-4322-8150-810d2b365d37 name-label="Local Storage" shared=false type=lvm

Alas, it resulted in an error:

Error code: SR_BACKEND_FAILURE_55
Error parameters: , Logical Volume partition creation error [opterr=error is 5],

After much googling I came across this page, which has the explanation. Apparently you need to create an LVM physical volume on the desired partition by running the following command:

pvcreate /dev/disk/by-id/scsi-SATA_WDC_WD3200AAJS-_WD-WMAV2C718714-part3

WARNING: software RAID md superblock detected on /dev/disk/by-id/scsi-SATA_WDC_WD3200AAJS-_WD-WMAV2C718714-part3. Wipe it? [y/n] y

It seems the installer noticed an md superblock on this partition and freaked out, hence no local storage. Agreeing to wipe it created the storage repository. One last step: making it the default repository:

xe pool-param-set uuid=<pool UUID> default-SR=<SR UUID>

You can get the pool UUID by running: xe pool-list
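For example, to grab just the UUIDs needed for that command (assuming the name-label used above):

xe pool-list params=uuid
xe sr-list name-label="Local Storage" params=uuid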

Done.


Edit: 10/09/2014

I recently came across a new error message when trying to add a local repository:

The SR operation cannot be performed because a device underlying the SR is in use by the host.

Google searches didn’t reveal much. After a while I realized what was wrong: I had omitted the host-uuid parameter. This parameter is required when the host is part of a pool, but not when you have a standalone xenserver. So, if your xenserver is a member of a pool, don’t forget the host-uuid parameter.

Manually apply patches to Citrix Xenserver

Citrix Xenserver has many features, all of which are now free as of Xenserver 6.2. XenCenter, however, still expects a support license to use some of its features. One of those features is applying patches. Fortunately it’s easily done via the command line. Their site has documentation on how to do this. Below are my “cliff notes”

  1. xe patch-upload file-name=<filename>
    Note: .xsupdate is the extension of xenserver updates
  2. Wait a moment, then copy the UUID that it outputs
  3. xe host-list
  4. xe patch-apply uuid=<UUID copied from patch-upload> host-uuid=<host UUID as output from xe host-list>

If you’re in a pool, instead of xe patch-apply you can run xe patch-pool-apply uuid=<UUID> to apply the patch to all pool members.
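If you have several patches to work through, these steps lend themselves to a small loop on the host. A rough sketch, relying on the fact that patch-upload prints the new patch’s UUID (substitute your own host UUID):

for f in *.xsupdate; do
  uuid=$(xe patch-upload file-name="$f")
  xe patch-apply uuid="$uuid" host-uuid=<host UUID>
done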

Xen HVM domU doesn’t synchronize with dom0 clock

After much research I’ve discovered that Xen does not synchronize the clock between dom0 and its HVM domUs. This poses a problem when you implement S3 sleep: upon resume, dom0 realizes how much time has passed but none of the domUs do. I realized this after a few days of successfully putting my Xen machine to sleep with DomU virtual machines still running.

The DomU in my case is a Windows 8.1 virtual machine. At first I thought that the standard Windows time service would take care of any clock discrepancies – it doesn’t. If the clock gets too far behind it simply refuses to update. My solution to this problem is twofold:

  1. Configure Windows to use my NTP server for clock updates
  2. Force Windows to check with the NTP server every minute and update its clock accordingly.

Fortunately the later Windows versions have an NTP client built in. Simply open an administrator command prompt and issue two commands:

w32tm /config /syncfromflags:manual /manualpeerlist:<hostname>

schtasks /create /sc minute /mo 1 /tn "NTP clock update" /tr "%WINDIR%\system32\w32tm.exe /resync /force" /RU SYSTEM

The first command configures your system with your NTP server of choice. Replace <hostname> with your desired hostname or IP address, minus the brackets. The second command creates a task which forces an NTP check and clock update every minute, running as the SYSTEM user (non-privileged users get an access denied message). You can do it all with a GUI, but the command line is so much more efficient 🙂
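To confirm the configuration took, you can check the sync status and force an immediate resync from the same administrator prompt:

w32tm /query /status
w32tm /resync /force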

It works perfectly. My DomU now automatically checks if it has the correct time – very important if you ever put your dom0 to sleep while DomUs are running.

Put Xen dom0 to sleep with active pci passthrough VMs

Thanks to Xen 4.3, it is now possible to suspend / resume dom0 while domUs are running. Unfortunately, if you have a VM actively using pci passthrough, the whole machine completely locks up about 10 seconds after resuming from S3 sleep.

As Xen is a lot more geared toward servers I realize I might be an edge case; however, I would really like to be able to suspend my entire machine to S3 with VMs actively using PCI passthrough (in my case, a video card and USB controller). For quite some time I thought I was out of luck. After learning about the hot swapping capabilities of Xen 4.2+ and its pciback driver, I thought I would take another whack at it.

My solution is to create a custom script which detaches all PCI passthrough devices on the VM before going to sleep. That same script would re-attach those devices to my VM on resume.

My dom0 is currently Linux Mint 16 so I placed the resulting script in the /etc/pm/sleep.d/ directory and named it 20_win8.1. It works like a charm! I can suspend and resume to my heart’s content without having to worry about whether I remembered to shut down my VM first.

My script is below. Be sure to modify it for the BDF of your devices and the name of your VM(s) if you decide to use it.

#!/bin/bash
#Sleep / hibernate script for Xen with active DomUs using PCI Passthrough
#This script is necessary to avoid freezing of dom0 on resume for Xen 4.3
#Modified 08/19/2014

#Name of the VM we're passing PCI things to
VM="win8.1"

#B:D.F of PCI devices passed through to VM
VIDCARD="01:00.0"
VIDCARDAUDIO="01:00.1"
FRONTUSB="00:1d.0"

#xen attach/detach commands. Replace with xm if you're using that toolstack instead
ATTACH="xl pci-attach"
DETACH="xl pci-detach"

case "$1" in
    hibernate|suspend)
        $DETACH $VM $VIDCARD
        $DETACH $VM $VIDCARDAUDIO
        $DETACH $VM $FRONTUSB
        ;;
    thaw|resume)
        $ATTACH $VM $VIDCARD
        $ATTACH $VM $VIDCARDAUDIO
        $ATTACH $VM $FRONTUSB
        ;;
    *)
        ;;
esac
exit $?
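One thing worth checking: pm-utils only runs hooks that are executable, so make sure the script has the execute bit set:

chmod +x /etc/pm/sleep.d/20_win8.1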

Hotplug devices between Xen dom0, domU, and back again

In my experiments with Xen to make dual booting obsolete, I’ve come across a need to hotplug PCI devices between dom0 and domU; specifically, the SATA controller that my DVD-RW drive is connected to.

My DVD drive supports Lightscribe, which unfortunately is not nearly as strong in Linux as it is in Windows. You can get it to work but the label maker program is extremely basic. If I want to burn a lightscribe disc and have it look at all pretty it requires Windows.

The way I was doing PCI passthrough before was pretty inconvenient. It involved editing /etc/xen/pciback.conf and adding the bus:device.function (BDF) of the device I want to pass. This causes that device to be claimed by the pciback driver at boot time.

That’s all well and good for the virtual machine, but what if you want your dom0 to use that device? You would have to remove the device from pciback.conf and reboot the machine.

As of Xen 4.2 there is now a better way.  You can have the pciback driver claim a device and return it to its original driver at any time without having to reboot.  The four magic commands are:

xl pci-assignable-add <BDF>
xl pci-attach <domain id / name> <BDF>
xl pci-detach <domain id / name> <BDF>
xl pci-assignable-remove -r <BDF>

The -r in pci-assignable-remove is necessary – it instructs xen to load the original driver that was loaded before we invoked pci-assignable-add. If you are using the xm toolstack instead, simply replace xl with xm.
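If you’re not sure of a device’s BDF, lspci will show it – the first column of each line is the BDF to feed to the xl commands above. For example, to locate a SATA controller:

lspci | grep -i sata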

Detaching from Dom0 and attaching to DomU

In my case I enter the following into a console whenever I want my Windows 8.1 virtual machine to have physical control of my DVD drive:


sudo xl pci-assignable-add 03:00.0
sudo xl pci-attach win8.1 03:00.0


Windows specific issues

It should have been as simple as that; unfortunately, I ran into a roadblock. For some reason on the first try, Windows detected the drive but wouldn’t load any drivers for it (it thought none were necessary).

[Screenshot: taken while I was using a hard drive for troubleshooting, but the issue was the same with the DVD drive]

I tried ejecting the SATA controller and scanning for new devices as described on various forums, but that didn’t seem to work. The fix for me was to reboot the VM. Rebooting caused the PCI device to detach, so after the VM finished rebooting I had to re-issue “sudo xl pci-attach win8.1 03:00.0” to attach it again.  Triumph!


I tried to make the second pci-attach command unnecessary by adding pci=03:00.0 to my virtual machine’s configuration file, but since I was passing a storage controller it kept trying to boot from drives attached to that controller instead of the virtual machine’s hard drive. I tinkered around with the config file for a while to try to get it to boot from the VM’s hard drive again but couldn’t get it to work.

Since everything works by simply issuing pci-attach twice I gave up and just moved on. In one final bout of tinkering I discovered that if you issue pci-attach right after you boot the VM but before the OS finishes loading, it works on the first try. So the moral of the story here is Microsoft weirdness requires you to jump through some minor hoops to get this to work.

Returning to Dom0

When I want my dom0 to have the drive back I issue the following:


sudo xl pci-detach win8.1 03:00.0
sudo xl pci-assignable-remove -r 03:00.0


No complications here, although there is a funny bug. The file manager used in Linux Mint 16 gets confused and keeps adding CD ROM entries each time I pass the drive back and forth, but everything still works – it’s just a visual bug.

The drive is now accessible by dom0 once again. Success!


Xenserver and clock drift

When it comes to a virtual machine’s clock my experience with other virtualization solutions has been that it’s automatically synchronized with the host machine. I didn’t notice until recently that this is not the case with Citrix Xenserver – at least when it comes to PVHVM machines.

I tried installing openntpd on each of my VMs and pointing it at my internal NTP server (which in turn synchronizes with the web). After a few days I was frustrated to see that the servers were still not in sync – some were minutes behind while others were inexplicably minutes ahead. Some of this might have to do with my experiments with live migrating these VMs a while back.

At any rate, it was clear that openntpd was failing to do the job. Some research revealed that there is a bug where it reports having adjusted the clock when the adjustment actually failed. That little bug cost me an hour or two of digging and troubleshooting. Very frustrating.

I switched to plain old ntp instead and the problem was resolved within moments.
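On a Debian-based guest, for example, the swap is as simple as this (adjust for your distro’s package manager):

sudo apt-get remove openntpd
sudo apt-get install ntp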


Moral of the story: Make sure you have a proper NTP setup for each of your VMs if you’re going to use Citrix Xenserver.